Daoze committed · Commit 76e4577 · verified · Parent: cf64f7a

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/-qPznNJmVxx/Initial_manuscript_md/Initial_manuscript.md +497 -0
  2. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/-qPznNJmVxx/Initial_manuscript_tex/Initial_manuscript.tex +317 -0
  3. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/3MrDT4bTycn/Initial_manuscript_md/Initial_manuscript.md +305 -0
  4. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/3MrDT4bTycn/Initial_manuscript_tex/Initial_manuscript.tex +243 -0
  5. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BBrgJZF4pfc/Initial_manuscript_md/Initial_manuscript.md +293 -0
  6. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BBrgJZF4pfc/Initial_manuscript_tex/Initial_manuscript.tex +349 -0
  7. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BTzgpgtNaGq/Initial_manuscript_md/Initial_manuscript.md +323 -0
  8. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BTzgpgtNaGq/Initial_manuscript_tex/Initial_manuscript.tex +245 -0
  9. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BU9lJWKNTG9/Initial_manuscript_md/Initial_manuscript.md +421 -0
  10. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BU9lJWKNTG9/Initial_manuscript_tex/Initial_manuscript.tex +323 -0
  11. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BhGl3eYVaf9/Initial_manuscript_md/Initial_manuscript.md +317 -0
  12. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BhGl3eYVaf9/Initial_manuscript_tex/Initial_manuscript.tex +243 -0
  13. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BrBlpeYNTMc/Initial_manuscript_md/Initial_manuscript.md +285 -0
  14. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BrBlpeYNTMc/Initial_manuscript_tex/Initial_manuscript.tex +245 -0
  15. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/CT27gkIMlKU/Initial_manuscript_md/Initial_manuscript.md +222 -0
  16. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/CT27gkIMlKU/Initial_manuscript_tex/Initial_manuscript.tex +194 -0
  17. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/D9M6uwZGC5y/Initial_manuscript_md/Initial_manuscript.md +473 -0
  18. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/D9M6uwZGC5y/Initial_manuscript_tex/Initial_manuscript.tex +464 -0
  19. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/E-PcUeaDbzv/Initial_manuscript_md/Initial_manuscript.md +515 -0
  20. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/E-PcUeaDbzv/Initial_manuscript_tex/Initial_manuscript.tex +353 -0
  21. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/H2GICxFVaGc/Initial_manuscript_md/Initial_manuscript.md +637 -0
  22. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/H2GICxFVaGc/Initial_manuscript_tex/Initial_manuscript.tex +425 -0
  23. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/H3GlkWt46f9/Initial_manuscript_md/Initial_manuscript.md +569 -0
  24. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/H3GlkWt46f9/Initial_manuscript_tex/Initial_manuscript.tex +433 -0
  25. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/HI9zjeYVaG9/Initial_manuscript_md/Initial_manuscript.md +259 -0
  26. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/HI9zjeYVaG9/Initial_manuscript_tex/Initial_manuscript.tex +178 -0
  27. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/HLcgsgKEpMq/Initial_manuscript_md/Initial_manuscript.md +371 -0
  28. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/HLcgsgKEpMq/Initial_manuscript_tex/Initial_manuscript.tex +239 -0
  29. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/HzgpxFETf5/Initial_manuscript_md/Initial_manuscript.md +449 -0
  30. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/HzgpxFETf5/Initial_manuscript_tex/Initial_manuscript.tex +305 -0
  31. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/SDyj8aZBPrs/Initial_manuscript_md/Initial_manuscript.md +361 -0
  32. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/SDyj8aZBPrs/Initial_manuscript_tex/Initial_manuscript.tex +263 -0
  33. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/SHQU_yejZFv/Initial_manuscript_md/Initial_manuscript.md +505 -0
  34. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/SHQU_yejZFv/Initial_manuscript_tex/Initial_manuscript.tex +383 -0
  35. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/SIclhxYV6f9/Initial_manuscript_md/Initial_manuscript.md +303 -0
  36. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/SIclhxYV6f9/Initial_manuscript_tex/Initial_manuscript.tex +239 -0
  37. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/SMxl-K4pG9/Initial_manuscript_md/Initial_manuscript.md +383 -0
  38. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/SMxl-K4pG9/Initial_manuscript_tex/Initial_manuscript.tex +287 -0
  39. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/ShGxRxFV6Mq/Initial_manuscript_md/Initial_manuscript.md +305 -0
  40. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/ShGxRxFV6Mq/Initial_manuscript_tex/Initial_manuscript.tex +280 -0
  41. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/Uh8fD3uPiv6/Initial_manuscript_md/Initial_manuscript.md +379 -0
  42. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/Uh8fD3uPiv6/Initial_manuscript_tex/Initial_manuscript.tex +375 -0
  43. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/dcbsb4qTmnt/Initial_manuscript_md/Initial_manuscript.md +363 -0
  44. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/dcbsb4qTmnt/Initial_manuscript_tex/Initial_manuscript.tex +239 -0
  45. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/eK2ZbaaJvd/Initial_manuscript_md/Initial_manuscript.md +431 -0
  46. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/eK2ZbaaJvd/Initial_manuscript_tex/Initial_manuscript.tex +329 -0
  47. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/oyJfW3GmBGX/Initial_manuscript_md/Initial_manuscript.md +457 -0
  48. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/oyJfW3GmBGX/Initial_manuscript_tex/Initial_manuscript.tex +382 -0
  49. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/r3G_ReFNpM9/Initial_manuscript_md/Initial_manuscript.md +419 -0
  50. papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/r3G_ReFNpM9/Initial_manuscript_tex/Initial_manuscript.tex +416 -0
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/-qPznNJmVxx/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,497 @@
1
+ # Promoting Feature Awareness by Leveraging Collaborators' Usage Habits in Collaborative Editors
2
+
3
+ Emmanouil Giannisakis*
4
+
5
+ University of British Columbia
6
+
7
+ Vancouver, Canada
8
+
9
+ Jessalyn Alvina†
10
+
11
+ Université Paris-Saclay, CNRS, Inria
12
+
13
+ Orsay, France
14
+
15
+ Andrea Bunt ‡
16
+
17
+ University of Manitoba
18
+
19
+ Winnipeg, Canada
20
+
21
+ Parmit Chilana §
22
+
23
+ Simon Fraser University
24
+
25
+ Burnaby, Canada
26
+
27
+ Joanna McGrenere ¶
28
+
29
+ University of British Columbia
30
+
31
+ Vancouver, Canada
32
+
33
+ ## Abstract
34
+
35
+ Users often rely on their collaborators to find relevant application features by observing them "over the shoulder" (OTS), usually in a synchronous co-located setting. However, as remote work settings have become more common, users can no longer rely on such in-person interaction with collaborators. Therefore, we investigate designs that help the user become aware of relevant features based on collaborators' feature usage habits. We created five design concepts as video prototypes which varied along five design dimensions: number of active collaborators, number of shared documents, specificity of comparison, user involvement, and goal of the feature awareness. Interviews (N = 18) probing the design concepts indicate that collaborator-based feature awareness would be valuable for discovering novel features and producing a consistent style across the shared document, but some users may feel micromanaged or self-conscious. We conclude by reflecting on and expanding our design space and discussing future design directions supporting remote OTS learning.
36
+
37
+ Index Terms: User Interfaces [User Interfaces]: Graphical user interfaces (GUI)-Empirical studies in interaction design;
38
+
39
+ ## 1 INTRODUCTION
40
+
41
+ Modern software applications offer a large set of features which often include hundreds or thousands of different commands and keyboard shortcuts [35]. As a result, it is challenging for users to be aware of the available features and to identify which ones are relevant to their tasks [25, 60, 66]. Although various support tools and mechanisms exist that aim to raise a user's awareness of features, such as online documentation, tutorials, and videos [41], it has been shown that users tend to prefer social solutions, where a user learns about a new feature from other users [20, 40, 71]. Such solutions can draw on different "levels" of social communities, from the global level, often referred to as "the crowd", which includes Q&A forums, all the way down to a more local level, such as an individual in the same institution. For example, users commonly rely on their colleagues to discover relevant features by observing them "over-the-shoulder" (OTS) [60, 70] or by directly asking them for help [40]. This type of serendipitous feature discovery thrives in a synchronous co-located setting as users can leverage their shared work context, and users tend to trust their colleagues more than other sources [60].
42
+
43
+ With the increase in remote work over the past few years [68], especially during the COVID-19 pandemic [31], in-person serendipitous interactions are far less frequent today, leaving fewer opportunities for feature discovery among colleagues. Screen sharing could potentially enable synchronous OTS interactions; however, a lack of support for communicating about the interactions makes discovering new features in this setting challenging [60, 72]. Prior work has also proposed tools that facilitate short synchronous help exchanges [7, 38], or that provide additional persistent, asynchronous content [24, 72] (e.g., workflows from individuals). Such tools are useful, but they typically require the user to leave their current application and switch to another one, which can be disruptive for both the learner and the expert [60]. Therefore, we wondered: how could a user observe and leverage a colleague's software knowledge when working in remote asynchronous situations, without having to switch from one application to another?
44
+
45
+ Our overarching goal is to design in-application tools and techniques that promote feature awareness based on a colleague's software knowledge. We focus on leveraging the user's direct collaborators within the context of common document(s) in collaborative editor applications (e.g., all the users working on a Google Sheet document) to provide feature awareness from trusted sources, who are working on the same tasks. The popularity of collaborative editors has increased over the past decade as they offer a shared environment for users to work remotely, synchronously, or asynchronously [13, 61].
46
+
47
+ While there is much design inspiration from other feature awareness solutions in the literature, designs that will satisfy our particular goals are not immediately obvious. For example, some existing solutions recommend features based on system-determined "similar users" across all those who use a given application [58, 59]. These tools provide numerical command usage comparisons [59], which might be acceptable with "crowd-level" comparisons, but users might be less comfortable when comparisons are to known colleagues. Users might be comfortable sharing knowledge with their colleagues through Q&A approaches (e.g., AnswerGarden [1]), but such approaches lack application context. Hence, as a starting point we asked: What are the potential benefits, drawbacks, and design considerations for tools that aim to raise feature awareness by leveraging collaborator usage patterns and shared application documents?
48
+
49
+ To answer our question, we followed a Research through Design [77] approach. This approach focuses on the generation of design artifacts that are used as exemplars to probe people's reactions, attitudes, and perceptions, to produce research findings [77]. We first defined a design space based on the existing literature, our own experiences working in collaborative teams, and a small informal formative study. Our initial design space includes five dimensions (Fig. 1) that range from the number of active collaborators to the degree of involvement required of the user. We then generated five different design concepts which intentionally emphasized different aspects of the five design dimensions, and we created corresponding video prototypes [76]. We conducted a semi-structured interview study (N = 18) to elicit feedback on the potential benefits and drawbacks of the design concepts and to understand users' perceptions of points in the design space. We used this feedback to reflect on and expand the design space (Fig. 3).
50
+
51
+ ---
52
+
53
+ * e-mail: em.giannisakis@gmail.com
54
+
55
+ † e-mail: jessalyn.alvina@lisn.upsaclay.fr
56
+
57
+ ‡ e-mail: bunt@cs.umanitoba.ca
58
+
59
+ § e-mail: pchilana@cs.sfu.ca
60
+
61
+ ¶ e-mail: joanna@cs.ubc.ca
62
+
63
+ ---
64
+
65
+ This paper makes the following contributions: First, we outline five design dimensions to characterize the design space around raising feature awareness based on the user's collaborators in a shared application with common documents. These can be used as a generative resource for creating new tools. Second, we offer five alternative design concepts generated using the design space that showcase how the user's collaborators use the application. Our elicitation study probed and explored the space, identifying where the most promising design opportunities lie as well as limitations of our overall approach to raising feature awareness. For example, participants felt such tools would be valuable not only for discovering novel features but also for identifying features that could help a group of collaborators produce a consistent style across the shared document. That said, some participants worried about feeling micromanaged and self-conscious. Third, we present concrete design implications and important future considerations for raising feature awareness based on the user's collaborators.
66
+
67
+ ## 2 RELATED WORK
68
+
69
+ Feature awareness is an important part of software learnability and usability [25]. In this section, we focus on reviewing design efforts around raising feature awareness through social solutions that draw on user communities and individual users. We also briefly touch on technical solutions.
70
+
71
+ ### 2.1 Feature Awareness Based on User Communities
72
+
73
+ Some prior work in feature awareness has utilized the usage habits of broad user communities such as all users of an application (crowd). CommunityCommands [59] recommends commands by implicitly comparing similar users from the crowd using collaborative filtering algorithms [30, 65]. Patina [58] also utilizes similar users from the crowd to highlight, within the interface, the commands that the user most frequently uses and those that other similar users most frequently use. As such, Patina provides a visual feature usage comparison. Owl [53, 54] is also a feature recommendation system; it compares the usage habits of users within the same organization as the main user to recommend relevant features. These tools operate on the command level and offer a lightweight way to help users become aware of relevant features. Although these solutions can provide useful feature recommendations while minimizing the user's involvement, it can be difficult for users to assess the usefulness of the highlighted features (i.e., relevancy) as they may not have enough information about the users that the system is based on (i.e., trust in the sources) [60, 75].
74
+
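+ To make this style of recommendation concrete, the sketch below shows the kind of collaborative-filtering computation that CommunityCommands-like tools describe: weight each command a peer uses by how similar that peer's command-frequency profile is to the target user's, and suggest the top-scoring commands the user has never used. The data model, names, and example values are our own illustrative assumptions, not the published implementation.
+ 
+ ```python
+ import math
+ 
+ def cosine(u, v):
+     """Cosine similarity between two {command: count} usage profiles."""
+     dot = sum(u[c] * v[c] for c in set(u) & set(v))
+     norm_u = math.sqrt(sum(x * x for x in u.values()))
+     norm_v = math.sqrt(sum(x * x for x in v.values()))
+     return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0
+ 
+ def recommend(target, peers, k=3):
+     """Rank commands the target never used by similarity-weighted peer usage."""
+     scores = {}
+     for peer in peers:
+         sim = cosine(target, peer)
+         for cmd, count in peer.items():
+             if cmd not in target:  # only commands new to the user
+                 scores[cmd] = scores.get(cmd, 0.0) + sim * count
+     return sorted(scores, key=scores.get, reverse=True)[:k]
+ 
+ me = {"bold": 40, "italic": 12, "insert_table": 3}
+ peers = [{"bold": 35, "italic": 10, "paste_unformatted": 20},
+          {"bold": 5, "pivot_table": 30}]
+ print(recommend(me, peers))  # ['paste_unformatted', 'pivot_table']
+ ```
+ 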
75
+ Prior work has also focused on recommending workflows (i.e., sequences of commands) based on the community. CADament aims to help users observe other users by providing a viewport to their screens [49], while CoScripter [47] allows users to create and share scripts to automate processes within the same enterprise. Other tools [44, 74] recommend relevant workflow videos generated from the crowd. These tools can increase the user's understanding of the software's capabilities, but they require the user to stop their current task to see the generated videos. Prior work has also leveraged broader user communities to help the user understand how to use their software. For example, AnswerGarden [1] offers a Q&A repository within the organization, while other tools [8, 16, 29, 57] leverage the knowledge of the broader user community by using widgets that are integrated in the user's applications. For example, in LemonAid [8], the user can select an application widget to see community questions and answers related to that widget. Tools like AnswerGarden can help users get help from their direct collaborators. However, users need to interrupt their current task [60], and it can be difficult to locate useful answers from past discussions [8].
76
+
77
+ Our work builds on community-based feature awareness tools that offer lightweight and in-application solutions (e.g., CommunityCommands [59] and Patina [58]). However, instead of focusing on large user communities such as the user's organization or all users of an application, we focus on the close-knit group of a user's collaborators on a shared document. We hypothesize that by focusing on this group, we can avoid challenges that systems based on the broader user community often face, such as understanding the user's goals [2, 9] and finding similar users within the community.
78
+
79
+ ### 2.2 Feature Awareness Based on Individual Users
80
+
81
+ Some tools aim to mediate the social interaction between two users to help one or both discover relevant features. Users prefer this type of social solution [40, 71] where, for example, they can get task-specific advice by observing what one of their colleagues is doing "over the shoulder" [69]. Such interactions can be very effective, yet they do not happen frequently [60] because they can be time consuming as well as difficult to coordinate and record [72].
82
+
83
+ Prior work has aimed to address the issues of coordination. Some systems have focused on helping users find experts who can respond to their questions [32, 36, 38], which can minimize the response time [63]. MicroMentor [38], for example, helps the user arrange 3-minute sessions with an expert user. MarmalAid [7] anchors real-time chat conversations to individual graphical widgets of a 3D modeling tool. These tools require high involvement from users, as they have to interrupt their current task to join a video call for the learning exchanges. Other tools aim to help the user find relevant workflows by observing their colleagues asynchronously. For example, Customizer [72] allows users to see how their colleagues have customized their tools and thus helps them find relevant workflows. Some other tools [24, 26, 45] record and extract video that shows the workflow that individual users follow to complete a task. Finally, some tools [22, 28] aim to optimize the synchronous one-to-one interaction, especially in the case of IDEs while users are in pair programming sessions [5]. The main goal of these tools [22, 28] is to help the user understand their collaborators' actions, specifically focusing on their collaborators' changes in the shared document.
84
+
85
+ The above tools can be effective but also time-consuming and require users to stop their current tasks to interact with other users. Therefore, these tools may be more appropriate for helping users solve more complex issues that go beyond feature awareness. Our work focuses on feature awareness and explores design solutions that aim to minimize user involvement and thus task interruption while taking advantage of the user's direct collaborators.
86
+
87
+ ### 2.3 Technical Solutions to Raising Feature Awareness
88
+
89
+ Prior work has also proposed technical solutions to raising feature awareness. For example, tip-of-the-day tools [19] proactively introduce available functionalities, and quick assist [19] (often available in IDEs) proposes quick fixes when developers face a problem. These tools propose features that are not necessarily relevant to the user or novel [17]. Other tools highlight features based on the user's current context [11, 12, 18], current actions [33, 37], or command usage history [3, 9, 34]. The challenge with these tools is that their domain knowledge is often predesigned and self-contained without considering community knowledge, which constantly evolves [51]. An exception is QFRecs [39], which bases its recommendations on an application's online documentation, which can be up to date with the newest features. Finally, some tools highlight shortcut alternatives using notifications [23, 64], by integrating shortcut cues within the UI [21, 55], or by using external widgets [42, 48, 56]. While these tools offer reactive, contextual help, prior studies indicate that users tend to learn only a small subset of the available shortcuts [43, 54].
90
+
91
+ Our work explores a solution that focuses on collaborators' software usage habits to help the user identify the commands and the keyboard shortcuts that they need to complete their current tasks.
92
+
93
+ ## 3 DESIGN SPACE
94
+
95
+ ### 3.1 Methodology Overview and Rationale
96
+
97
+ Our review of prior work indicates that raising feature awareness based on the user's collaborators while requiring only modest user involvement is an under-explored space. While there are opportunities to apply design insights from related work on crowd-based approaches or solutions that are based on individual users, how to translate these insights and leverage the unique design opportunities afforded by this new context is unclear. Therefore, to systematically explore this design space, we used Research through Design (RtD) [77], an approach in interaction design research that intersects theories and technical opportunities to generate a concrete problem framing and a series of design artifacts (e.g., concepts, prototypes, and documentation of the design process). Prior work on raising feature awareness has often focused on proposing, implementing, and evaluating a single system, with the aim of understanding in depth how the proposed system can benefit the user. In contrast, our approach probes the potential roles, forms, and values of emerging near-future technology by using more than one design vision, as proposed in other works [14, 62]. Prior work has used a similar approach to investigate the design space around supporting cross-device learnability [4], data legacy [27], and personal data curation [73]. We aim to understand user reactions towards this under-explored problem space, to define concrete design goals, and to generate design implications for future implemented systems.
98
+
99
+ Our application of the RtD approach was as follows: We first carefully generated a set of design dimensions, as similarly done in [4, 73]. We generated this set by clustering and mapping insights from prior work, reflecting on the authors' personal experiences, and using findings from an informal formative study. During a series of our research group meetings, we refined these insights into a set of five relevant design dimensions. These dimensions are not meant to be exhaustive, but rather are those that seem to be most prominent based on our review of our insights from prior work and the informal formative study. We then used this set of design dimensions as a generative tool to create five design concepts in the form of video prototypes. Finally, we used these design concepts in an interview study to elicit participants' reactions towards the problem space and aspects of our design space.
100
+
101
+ In the remainder of this section, we describe our informal formative study and detail our proposed design space.
102
+
103
+ ### 3.2 Informal Formative Study: Method and Analysis
104
+
105
+ We conducted an informal formative study with two goals in mind: 1) to understand how users currently learn from each other when collaborating remotely, and 2) to gather initial thoughts on how raising feature awareness based on their collaborators might impact their current practices. We advertised our study on a university mailing list. We recruited 11 participants (6 women and 5 men, 21-30 years old) with diverse occupations (e.g., accountants, data analysts, event planners), all of whom reported collaborating with others at least once per week using editors like Google Docs.
106
+
107
+ During a 60-minute Zoom session with each participant, we introduced an interactive prototype¹ that shows feature recommendations within an editor that differ in terms of 1) the user community from which the recommendations are derived (from crowd-powered recommendations or from the user's collaborators on a shared document); and 2) whether or not the user's collaborators are directly identifiable in individual recommendations. We then elicited participants' reactions towards the problem space and each feature recommendation type. We analyzed participant feedback inductively and saw themes emerge related to the participants' different goals for feature discovery, preferences for seeing recommendations from collaborators, and perceptions of how much time they wanted to invest in such a system. We used these initial insights to inform our design space, which we discuss in the next section.
108
+
109
+ ### 3.3 Design Space Dimensions
110
+
111
+ The informal formative study provided new insights on potential benefits and drawbacks of tools that raise feature awareness based on the user's collaborators working on shared documents. We do not provide a comprehensive description of these findings here (in part because of some overlap with the elicitation study findings described later). Instead, in this section we discuss how we used the study findings, the related work, and the authors' own experiences to derive a design space. We describe each dimension and provide relevant participant quotes for those motivated by the informal study.
112
+
113
+ D1: Number of active collaborators: Our informal formative study suggested that some participants were more interested in the features that specific individuals were using rather than the features that the majority of their collaborators were using. For example, some participants noted that they would be more willing to try a feature if they perceived their collaborator to be technically savvy. As formative study participant FP00² commented, "If it was someone on my team who I know is really tech-savvy, I saw that they used certain functions more I might pay a little more attention to that". In contrast, prior work indicates that including more users allows the main user to discover a broader selection of features [50]. Therefore, this dimension investigates whether including a single collaborator (e.g., the technical expert) or more collaborators would help to raise feature awareness: on one end (Fig. 1-D1), we have a single active collaborator; on the other end, we have all active collaborators. By active collaborators, we mean the collaborators who have access to the document and actively edit it.
114
+
115
+ D2: Number of documents included: We based this dimension on our own (i.e., this paper's authors') experiences. Specifically, while discussing the D1 dimension, we realized that we often worked with the same group of collaborators to create multiple similar shared documents that follow similar formatting guidelines. For example, the same group of collaborators could work on multiple presentations. This dimension investigates whether including the collaborators' actions from only the current document or also from other similar shared documents would help the user become aware of relevant features. On one end (Fig. 1-D2), we have the current document only, while on the other end, we have all the documents that are shared across the same collaborators.
116
+
117
+ D3: Specificity of comparison: This dimension is based on existing work on raising feature awareness that explicitly [53, 54, 58] or implicitly compares [50, 58] the user's individual feature usage habits with the user community as an aggregate. On one end (Fig. 1-D3), we have tools that explicitly compare the user's actions with their collaborators' (e.g., with the use of visualizations). The goal of these tools is to help the user reflect on their actions and adjust their habits. On the other end, we have tools that implicitly compare the user's actions with their collaborators' to highlight relevant features.
118
+
119
+ D4: User involvement: An early motivation of this work was to investigate tools that raise feature awareness based on the user's collaborators while minimizing user involvement. However, our formative study suggested that users might be willing to invest more time using these systems under specific circumstances. One example that our participants gave was asking follow-up questions regarding the highlighted feature. FP01 said, "if I have any further questions or a detailed question, I know who I can talk to". With this dimension we want to investigate the amount of involvement that the user and their collaborators need to invest in the tool for the user to discover relevant features. On one end we have low user involvement (Fig. 1-D4), where the system focuses on showing the relevant features without offering possibilities for further interaction (as in [54, 56, 58, 59]). On the other end, we have high user involvement, where the user needs to interact with the system and with their collaborators to find the relevant features (solutions at this end include [38]).
120
+
121
+ ---
122
+
123
+ ¹ We included figures of the prototype in the supplementary material.
124
+
125
+ ² We use FPXX to refer to participants in our formative study.
126
+
127
+ ---
128
+
129
+ ![01963e70-187a-7d5c-b5e7-32099abf3a23_3_292_159_1214_432_0.jpg](images/01963e70-187a-7d5c-b5e7-32099abf3a23_3_292_159_1214_432_0.jpg)
130
+
131
+ Figure 1: We identified five design dimensions that we used to generate the five design concepts of feature awareness tools: D1: Number of Active Collaborators, D2: Number of Documents Included, D3: Specificity of Comparison, D4: User Involvement, and D5: Goal of Feature Awareness. Subsequent to the elicitation study, we expanded the design space by adding D6: Detail of Feature Usage.
132
+
133
+ D5: Goal of feature discovery: Perhaps the most unexpected observation from our informal formative study was that participants were interested in how collaborator-based recommendations could help them keep the document formatting consistent across the collaborators. They cared about which commands their collaborators were using regardless of whether they already knew the commands or not. For example, FP10 commented that collaborator-based recommendations would be useful "for the sake of consistency, because people will often use different methods in collaborative documents that do make them a bit messy". This observation is an interesting contrast to prior work [50, 59] that has identified "good" feature recommendations to be novel and useful to the user. While this might be true for crowd-based recommendations, we see that collaborator-based feature recommendations might be perceived as "good" regardless of whether the user is familiar with the recommended feature. This dimension aims to explore the user's goal in using the tool. On one end (Fig. 1-D5), we have tools that aim to highlight features that may be known to the user already, in order to help the user converge on common software usage practices. On the other end, we have tools that aim to highlight novel features (i.e., only the features that the user has never used before) that are relevant to the user.
134
+
135
+ ## 4 DESIGN CONCEPTS
136
+
137
+ To explore where user preference lies within the design space (Fig. 1), we created five design concepts that differ along the design dimensions. For these design concepts, we took inspiration from existing tools that raise feature awareness, which we then redesigned to emphasize collaborator-based feature awareness. For each concept, we created a video prototype to illustrate how it works and to be able to compare the design concepts in a systematic manner without the influence of potential implementation biases [4]. We used Figma³ to create the clickable prototypes and the video editor Camtasia⁴ to record the user interaction and produce the final videos.
138
+
139
+ By creating our own concepts and video prototypes, we were able to push the design dimensions in specific directions, often exploring their extremes in new combinations [73, 76]. These design concepts synthesize a mix of contrasting ideas into a cohesive collection, applying existing and proposed design approaches in this new context. It is important to note that this is not an exhaustive exploration, i.e., we did not cover all the possible combinations that we could derive from the design space. This would not be feasible without overwhelming the participants of our elicitation study. We thus focused on the combinations we thought were most interesting to explore. For example, we did not design a concept that emphasized explicit comparison with a single active collaborator because we believed that such a combination would not necessarily prompt the user to reflect on their actions.
140
+
141
+ To explain the concepts in the video prototypes, we asked the viewer to imagine working on a shared document with other collaborators. We presented all concepts as add-ons to the Google Drive Suite⁵ (Google Documents, Google Sheets, and Google Slides). We did not focus on one application of the suite because we wanted to show users that they could potentially install these add-ons with any collaborative shared editor. Finally, we noted to participants that these concepts might raise some privacy concerns, which we would return to in the final discussion with them. However, to keep the focus on the design dimensions, we did not explore any privacy-preserving solutions (except for NewsFeat) in the video prototypes.
142
+
143
+ ### 4.1 NewsFeat
144
+
145
+ The design concept NewsFeat was created to strongly emphasize the single active collaborator (D1), high user involvement (D4), and implicit comparisons between the user's command usage and their collaborators' command usage (D3). Additionally, NewsFeat focuses on the current shared document in D2. It sits in the middle of the D5 (goal of feature discovery) dimension.
146
+
147
+ NewsFeat lets users identify potentially useful features by showing the user what commands each of their collaborators use. We took inspiration from existing social networks like Twitter and Facebook where the user can follow other users to see their activities. The user first has to send a request and, if their collaborators approve it, the user can see the commands that each collaborator used the same day, including the frequency of use (Fig. 2-A.1). By default the user sees the commands that their collaborator used and the user did not (i.e., implicit comparison with a single active collaborator), although the user can remove this filter if they want. By allowing users to filter which commands they want to see, NewsFeat can be used both to discover new features and to converge on common practices. In addition, the user and their collaborators can further interact with the system to identify relevant features or to gain more information about one of the commands (i.e., allowing high user involvement). For example, they can "like" (Fig. 2-A.2) a command or ask follow-up questions using the comment button (Fig. 2-A.4). The collaborators can also recommend commands that they think are useful. In this case, the recommended command appears with a checkmark next to the command's name (Fig. 2-A.3). The collaborators can group a repeated sequence of commands (Fig. 2-A.5). For example, imagine a scenario where the collaborator applies the same style (font family, font style, and color) to all the headings. They can group these three commands and give them a specific name. Finally, the user can also see the commands that their collaborators used recently (Fig. 2-A.6), in addition to the commands their collaborators used on the same day. They can choose between the commands that their collaborators used last week, last month, or the last six months (not shown in the figure).
148
+
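+ As a concrete reading of this default filter, here is a minimal sketch of how such a per-collaborator feed could be computed, assuming per-day command counts are logged for each collaborator; the data model and function names are hypothetical, since NewsFeat exists only as a video prototype.
+ 
+ ```python
+ from collections import Counter
+ 
+ def newsfeat_feed(user_cmds, collab_logs, show_all=False):
+     """For each collaborator, list today's (command, frequency) pairs;
+     by default keep only the commands the user themself has not used."""
+     feed = {}
+     for name, cmds in collab_logs.items():
+         shown = cmds if show_all else Counter(
+             {c: n for c, n in cmds.items() if c not in user_cmds})
+         feed[name] = shown.most_common()
+     return feed
+ 
+ me = Counter({"bold": 12, "comment": 4})
+ logs = {"Ana": Counter({"bold": 9, "suggest_edits": 6}),
+         "Raj": Counter({"version_history": 2})}
+ print(newsfeat_feed(me, logs))
+ # {'Ana': [('suggest_edits', 6)], 'Raj': [('version_history', 2)]}
+ ```
+ 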
149
+ ---
150
+
151
+ ³ https://www.figma.com/
152
+
153
+ ⁴ https://www.techsmith.com/store/camtasia
154
+
155
+ ⁵ https://drive.google.com/
156
+
157
+ ---
158
+
159
+ ![01963e70-187a-7d5c-b5e7-32099abf3a23_4_294_134_1211_751_0.jpg](images/01963e70-187a-7d5c-b5e7-32099abf3a23_4_294_134_1211_751_0.jpg)
160
+
161
+ Figure 2: (A) NewsFeat. (B) CommandMeter. (C) CollabCommands. (D) CollabPatina. (E) MostFrequentKS.
162
+
163
+ ### 4.2 CommandMeter
164
+
165
+ The design concept CommandMeter was created to strongly emphasize the explicit comparison between the user and their collaborators (D3), high user involvement (D4), and all active collaborators (D1). Similar to NewsFeat, it focuses on the current shared document in D2, and has a slight focus on helping users converge on common usage habits in D5.
166
+
167
+ With CommandMeter, the user can identify useful features by comparing their command usage to that of their collaborators through the use of visualizations. By making this explicit comparison, the user can reflect on their own behavior and consider whether they want to change their command usage habits. We were inspired by similar systems like Skillometers [54] that also used visualizations to compare command usage habits between the user and other members within the same organization. In contrast, CommandMeter compares command usage habits between the user and their collaborators on a shared document. CommandMeter requires high involvement as the user has to switch between two views. One view is a collapsible panel on the bottom right corner (Fig. 2-B.1). Every time the user selects a command ('Strikethrough' shown in the figure), this panel uses horizontal bars to compare the user's and their collaborators' frequency of usage. The second (larger) view offers similar visualizations for all the available commands. In the second view, the user can see all the commands and how their frequency of command usage differs from the average frequency of all of their active collaborators (Fig. 2-B.2). Finally, they can choose which collaborators they want to include or exclude from their visualizations (not shown in Fig. 2-B).
168
+
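+ The comparison logic behind those bars is easy to state as code. The sketch below prints, per command, the user's frequency next to the collaborators' average frequency as crude text bars; the per-command counts and the rendering are illustrative assumptions standing in for the prototype's visualizations.
+ 
+ ```python
+ def command_meter(user, collaborators):
+     """Compare the user's command frequencies to the collaborators' average."""
+     commands = set(user) | {c for p in collaborators for c in p}
+     for cmd in sorted(commands):
+         mine = user.get(cmd, 0)
+         avg = sum(p.get(cmd, 0) for p in collaborators) / len(collaborators)
+         print(f"{cmd:15} you {'#' * mine:<10} all {'#' * round(avg)}")
+ 
+ command_meter({"strikethrough": 2, "bold": 8},
+               [{"strikethrough": 7, "bold": 6}, {"strikethrough": 5}])
+ # bold            you ########   all ###
+ # strikethrough   you ##         all ######
+ ```
+ 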
169
+ ### 4.3 CollabCommands
170
+
171
+ The design concept CollabCommands was created to support users in discovering new features (D5) based on all active collaborators (D1). Unlike the other design concepts, CollabCommands strongly emphasizes the possibility of including all shared documents (D2). It requires little involvement from the user (D4).
172
+
173
+ With CollabCommands, the user can see recommendations derived from their collaborators' usage habits. Drawing inspiration from CommunityCommands [59], CollabCommands uses a collapsible panel (bottom right corner) to recommend commands that the user does not use but their collaborators do. Hence, CollabCommands offers a quick way for the user to identify new features that they might consider using (i.e., it requires only low involvement). For each command, the tool shows the avatars of the collaborators who use that command (Fig. 2-C.1). The user can further customize the tool if they want. They can choose which collaborators the tool will consider when it decides which commands may be relevant to the user (Fig. 2-C.2). Also, the user can decide to include all other shared documents in their recommendations (Fig. 2-C.3).
174
+
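+ A minimal sketch of this recommend-and-filter behavior follows. Keying usage by (collaborator, document) is our own assumption, chosen to make the collaborator (D1) and document (D2) toggles explicit; all names are hypothetical.
+ 
+ ```python
+ def collab_commands(user_cmds, usage, include_people, include_docs):
+     """usage maps (collaborator, document) -> set of commands used there.
+     Return {command: collaborators} for commands the user never used,
+     restricted to the selected people (D1) and documents (D2)."""
+     recs = {}
+     for (who, doc), cmds in usage.items():
+         if who in include_people and doc in include_docs:
+             for cmd in cmds - user_cmds:
+                 recs.setdefault(cmd, set()).add(who)  # avatars to display
+     return recs
+ 
+ usage = {("Ana", "report.gdoc"): {"insert_chart", "bold"},
+          ("Raj", "budget.gsheet"): {"pivot_table"}}
+ print(collab_commands({"bold"}, usage,
+                       include_people={"Ana", "Raj"},
+                       include_docs={"report.gdoc"}))
+ # {'insert_chart': {'Ana'}}; budget.gsheet was excluded by the D2 filter
+ ```
+ 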
175
+ ### 4.4 CollabPatina
176
+
177
+ The design concept CollabPatina was created to slightly emphasize explicit comparisons (D3) while minimizing user involvement (D4) and it includes all collaborators (D1). It focuses on the current shared document (D2) and puts a slight emphasis on converging to common usage practices (D5).
178
+
179
+ CollabPatina overlays the current interface with color-coded visual indicators to show the user's and their collaborators' feature usage (Fig. 2-D). We drew inspiration from the Patina tool [58], but CollabPatina is based on the user's collaborators and allows for some extra customization. CollabPatina overlays both the toolbar and the menu with color highlights, indicating which features (commands and keyboard shortcuts) the user frequently uses (Fig. 2-D.2) and which features all of the collaborators frequently use (Fig. 2-D.1). As such, CollabPatina requires low to no involvement from the user. The color highlights express a visual comparison, but one that is less explicit than in CommandMeter. The user can see a color bar at the top of the screen that shows what each color indicates (Fig. 2-D.3). When they click the color bar, a settings menu appears (not shown in Fig. 2-D) where the user can select whether they want to see color highlights that show the most frequently used commands, highlights that show the most frequently used keyboard shortcuts, or no color highlights.
180
+
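+ The highlighting rule can be read as a small classification over usage counts, sketched below; the frequency threshold and color names are illustrative assumptions rather than values taken from the prototype.
+ 
+ ```python
+ def patina_colors(user_freq, collab_freq, threshold=5):
+     """Map each command to the overlay color it would receive:
+     one color for the user, one for collaborators, one for both."""
+     colors = {}
+     for cmd in set(user_freq) | set(collab_freq):
+         by_me = user_freq.get(cmd, 0) >= threshold
+         by_them = collab_freq.get(cmd, 0) >= threshold
+         if by_me and by_them:
+             colors[cmd] = "purple"   # frequent for both
+         elif by_me:
+             colors[cmd] = "blue"     # frequent only for the user
+         elif by_them:
+             colors[cmd] = "orange"   # frequent only for collaborators
+     return colors
+ 
+ print(sorted(patina_colors({"bold": 9, "undo": 2},
+                            {"bold": 7, "comment": 12}).items()))
+ # [('bold', 'purple'), ('comment', 'orange')]
+ ```
+ 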
181
+ ### 4.5 MostFrequentKS
182
+
183
+ The design concept MostFrequentKS was created to emphasize the discovery of new features (D5) (in this case, new keyboard shortcuts), by implicitly comparing (D3) all active collaborators (D1). It aims to minimize the user involvement (D4) and it focuses on the current shared document (D2).
184
+
185
+ MostFrequentKS requires low to no involvement from users. When the user opens a menu or toolbar to choose a command, the tool automatically checks if their collaborators frequently use the corresponding keyboard shortcut and shows a notification in the form of a tooltip along with the collaborators' avatars (Fig. 2-E.1). If none of their collaborators frequently use the keyboard shortcut, then no notification appears. Clicking the toolbar buttons or the menu items will execute the command as it normally would in any scenario. MostFrequentKS draws inspiration from tools that use notifications to inform users about existing keyboard shortcuts [64].
186
+
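+ The trigger condition is simple enough to state directly; in the sketch below, the shortcut table and the frequency thresholds are assumed for illustration, and `None` means the command simply executes with no notification.
+ 
+ ```python
+ SHORTCUTS = {"bold": "Ctrl+B", "paste_unformatted": "Ctrl+Shift+V"}
+ 
+ def shortcut_tooltip(command, collab_counts, min_users=2, min_uses=5):
+     """Return tooltip text for `command`, or None if too few collaborators
+     use its keyboard shortcut frequently."""
+     users = [who for who, counts in collab_counts.items()
+              if counts.get(command, 0) >= min_uses]
+     if command in SHORTCUTS and len(users) >= min_users:
+         return f"Try {SHORTCUTS[command]} (used often by {', '.join(users)})"
+     return None
+ 
+ counts = {"Ana": {"bold": 9}, "Raj": {"bold": 6}, "Kim": {}}
+ print(shortcut_tooltip("bold", counts))  # Try Ctrl+B (used often by Ana, Raj)
+ ```
+ 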
187
+ ## 5 ELICITATION INTERVIEW STUDY
188
+
189
+ We used the video prototypes of the design concepts as probes in a semi-structured interview study with 18 participants. The goal of this study was not to find a winner among the design concepts but rather to broaden our understanding of the potential benefits and drawbacks of raising feature awareness based on the user's collaborators' application usage, i.e., to assess our general approach to raising feature awareness. We solicited participants' attitudes, reactions, and perceptions of the design concepts, probing the spots in the design space that each concept highlights. In this way, we explored the design dimensions in a semi-targeted way.
190
+
191
+ ### 5.1 Participants
192
+
193
+ We used a screening survey (available in supplementary material) to recruit participants who had experience collaborating using shared editors. To ensure a diverse sample, we asked participants to mention how often they used collaborative editors, how often they used these editors to work remotely with others, their profession, and the number of collaborators that they worked with. We advertised the study on a mailing list for advertising research studies and stopped recruiting when we reached a saturation point, as is common in qualitative studies. We ended up with 18 participants⁶ (10 women, 8 men) between 18 and 50 years old (the majority were between 18 and 37, and one was 50). The participants had diverse occupations such as software developers, students, receptionists, graphic designers, lighting artists, social workers, and teachers. All participants reported using shared editors like Google Docs to collaborate with others at least one or two times per week. The number of collaborators reported by participants ranged from 2 to 20, with most regularly collaborating with 2 to 4 people.
194
+
195
+ ### 5.2 Procedure
196
+
197
+ The procedure we followed was based on prior work using the RtD approach that used design concepts to elicit user reactions [4, 73]. Each session lasted between 60 and 90 minutes. It consisted of three parts: 1) a brief introductory interview focusing on the participants' experiences with collaboration on shared documents, 2) the elicitation part, where the participants would see and discuss each design concept, and 3) a final discussion comparing all of the design concepts. One paper author conducted the interviews remotely using Zoom. We recorded all interviews (both audio and video) for later transcription. The participants received $15 per hour as compensation. Our study was approved by an institutional research ethics board.
198
+
199
+ During the introductory interview, we asked each participant about their experiences with collaborative editors. We asked them which collaborative editors they used, how often they collaborated with other users, and the typical sizes of their teams.
200
+
201
+ During the main elicitation part, we showed each of the five video prototypes, one at a time in random order. Before showing each prototype, we emphasized that the design concepts are not tied to a specific application, and asked participants to reflect on how they would use each concept within their software of choice. We also told them that although our video prototypes do not address any privacy issues, they should feel free to express privacy concerns. For each video, we first made sure that the participant understood the concept, and we encouraged them to ask any questions they had or to replay the video if they wished. Afterward, we asked the participant about their first impressions and their thoughts on each design concept's different aspects. We focused on the aspects that provided insights into the design space. For example, we asked participants if they would use the filtering functionality of CollabCommands and CommandMeter to include or exclude any collaborator.
202
+
203
+ During the final part, we asked each participant about their experience across all concepts. We asked them to sort the five concepts from the most to the least preferred and to explain their rationale for their sort order.
204
+
205
+ ### 5.3 Data Analysis
206
+
207
+ We used thematic analysis [10] to identify recurring themes and patterns from our sessions. We transcribed all sessions and started analyzing them using inductive analysis. Initially, two of the authors coded five transcripts and discussed their codes, and then one author open coded the rest of the sessions. Next, we grouped the codes, and all the authors discussed possible themes and patterns across the groups. We discussed the possible themes over several iterations, focusing on areas that highlighted the potential benefits and drawbacks of raising a user's feature awareness based on their collaborators' use of an application. We used these themes and the participants' feedback on the individual design concepts to identify the approximate relative variation in participants' preferences across the design dimensions.
208
+
209
+ ### 5.4 Findings
210
+
211
+ Almost all participants (17/18) reported experiences discovering new features while observing their colleagues. In line with prior work [60], the participants found such interactions desirable, but they rarely happened. For example, P00 explained, "It's definitely more difficult to find [a new feature] on your own than to observe. Observing is easier." As expected, some participants explicitly reported fewer instances of this interaction with the switch to remote working, for example: "Because it's work from home, we don't really see each other and I don't get to observe their work" (P08). This participant
212
+
213
+ ---
214
+
215
+ ⁶ Initially, we recruited 20 participants, but we had to exclude 2 due to technical issues.
216
+
217
+ ---
218
+
219
+ went on to talk about using email and messaging to replace such OTS knowledge sharing, yet wished for in-application support: "We'd usually be texting each other or calling each other to inform each other... So, we have to stick to this particular layout, or these other things we have to keep uniform. Instead of doing that communication outside the platform, I think, within the same platform, if you could see this information, I think it will be more efficient". As such, the participants felt positively about the idea of raising feature awareness based on their collaborators' software use via in-application tools.
220
+
221
+ #### 5.4.1 Overview of User Preferences on Design Concepts and Design Dimensions
222
+
223
+ At the end of each session, we asked participants to rank the design concepts from the most preferred to the least preferred. We aggregated all the first and second rankings by participants to identify which concepts participants preferred the most and which the least (this produced 36 ranking data points). CollabPatina was the most preferred (13/36), then NewsFeat (10/36), followed by MostFrequentKS (6/36) and CollabCommands (5/36), with CommandMeter a clear last (2/36).
224
+
225
+ It is interesting to note that CollabPatina and NewsFeat represent different edges of the design space. CollabPatina was popular because of its low user involvement. This concept's goal was to provide an easy and quick way for users to see which commands their collaborators use, and, more importantly, it also shows where the commands are located within the interface. The participants appreciated this functionality because they did not have to spend time locating the commands, which was not the case for CollabCommands, NewsFeat, and CommandMeter. NewsFeat was popular because participants could see sequences of commands that their collaborators were using and ask follow-up questions. In contrast, CommandMeter, which is also a design concept that requires high user involvement, was not as popular. It was ranked last most often because it requires high user involvement in order to compare the user's actions to their collaborators'.
226
+
227
+ It is important to note that although NewsFeat was well received, participants did raise some concerns regarding feeling self-conscious and micromanaged, which we discuss in Theme 4. Also, the participants were particularly enthusiastic about the ability to see command groupings, but noted that the utility of this aspect of NewsFeat would require high user involvement, i.e., the user and their collaborators would need to take the time to create groups of commands. Participants felt that investing this time would be fine under certain circumstances. For example, P05 commented "... if I want to help new members out in the company, then I would do this. I would group stuff up and then reply to comments and stuff".
228
+
229
+ For the rest of the section we discuss themes that emerged across all the design concepts.
230
+
231
+ #### 5.4.2 Theme 1: Raising Feature Awareness Based on The User's Collaborators Could Help Users Converge on Software Usage Practices
232
+
233
+ Consistent with the insights from our informal formative study (Sect. 3.3-D5), participants commented on how these tools could help them and their team converge on common software usage practices when working on shared artifacts. The participants commented on the usefulness of the concepts for identifying similarities and differences in the features that their collaborators use to produce a consistent style. For example, P08 commented on why they thought CollabCommands could be useful to them and their colleagues: "when I used to work on PowerPoint, we'd usually be texting or calling each other ... to stick to this particular [PowerPoint presentation] layout. Instead of doing that communication outside the platform, I think, within the same platform if you could see this information, it will be more efficient". P04 highlighted the efficiency of having an in-situ feature usage history displayed in NewsFeat: "Instead of me having to go and ask, 'What did you do? How did you do this?' I can actually see it in the activity, and it might save a few emails or some back and forth".
234
+
235
+ Participants also commented on how they could use these concepts the other way around (for example, the user could help their collaborators converge on common software usage practices). They described, for example, that if a user notices that their collaborators are not using the appropriate commands in a shared document, it could be useful to alert them about it. For example, P04 discussed how they would use NewsFeat to help their colleagues "... if we're stuck on something, if I get to see that, ... oh, okay, this is where maybe somebody got stuck, or why is this being returned to so many times, is there something that we need to revisit in that document itself?".
236
+
237
+ Finally, the participants also commented on how these design concepts could help them converge with their own past feature usage. Such a scenario may occur when a user resumes a task after a long time and would find it useful to be reminded of the features they had used in the past. For example, P00 commented on CollabPatina: "Well, because I sometimes do things and I forget how I did them. So I like that I can also see how I did things".
238
+
239
+ One potential caveat that a couple of participants noted was that exposing the user to other collaborators' usage habits may limit their style and creativity. They were concerned that by seeing what features their collaborators are using, users might feel discouraged from using the features they like or might experiment less with new features. P07, a lighting artist, gave this initial impression of CollabCommands: "It will change my mind to use more and more whatever other people using. It will try to stop creativity, [...]", while P04 commented on CommandMeter: "you might love this feature and want to use it all the time, but the rest of your team might not, and that can be a little tricky because if you're using it and nobody else is using it, then sometimes that's not helpful either".
240
+
241
+ #### 5.4.3 Theme 2: Raising Feature Awareness Based on The User's Collaborators Could Help Users be More Efficient With Their Tasks
242
+
243
+ Some participants felt that they could use these tools to discover more efficient alternatives for doing the same task. By efficient alternatives, we mean not only keyboard shortcuts but also the sequences of steps that other collaborators take to complete the same task. For example, P09 commented when they saw CommandMeter's visualizations: "For example, if someone is using a command that all of us aren't, meaning something novel and different, that might help us figure out if we can also use that too, maybe it's a better way of doing a task than the version that we've been doing".
244
+
245
+ The participants also spoke about wanting to expose their own usage data to help their collaborators discover more efficient alternatives. For example, P02, a project coordinator working with a team of 6, said about MostFrequentKS, "Maybe I would just use this [MostFrequentKS] as a bit of an encouragement for those who might be on the fence about using keyboard shortcuts that, hey, there's actually a bunch of us are using it and this is ... helping us to be more efficient".
246
+
247
+ #### 5.4.4 Theme 3: Users Want Fine-Grained Control Over Awareness Data Sources
248
+
249
+ The majority of the participants (14/18) wanted fine-grained control over which subset of collaborators the tool draws feature usage from. They reported that their collaborators might have different roles, such as active editors, viewers, and reviewers. Further, active editors may be in charge of various tasks, only some of which may be relevant to the user. As a result, they felt that the features a design concept would choose to highlight may not be sufficiently targeted to be valuable. For example, P09 commented, "There might be people that are just there for review or editing or just viewing purposes so their data will skew it a lot if you don't have the ability to exclude them".
250
+
251
+ When we asked participants about which collaborators the tools should include, their opinions differed. Some participants (4/18) wanted to include collaborators based on their role in the document. For example, P15 wanted to include all active editors in their NewsFeat: "probably the owner of the document, and then the main collaborators, and then anyone who's just kind of viewing it or doesn't actually have any [...] stake in the document, [...] then I wouldn't follow them". Other participants (5/18) wanted to include collaborators that are doing tasks similar to theirs. For example, P09 said "It's really helpful to be able to include or exclude certain people because [...] everyone is doing different things or there might be certain people that are just on there but not actively working on the documents. So being able to exclude those people from any sort of analytics is important".
252
+
253
+ Some participants (5/18) wanted to include individuals based on their perceived expertise or role in the team/company. For example, P00 commented that they would use CollabCommands' filtering capabilities to include collaborators who are knowledgeable about the software: "I would include people I know are good at using the type of software that I'm working on".
254
+
255
+ Other participants (4/18) did not want to include or exclude any of their collaborators. One possible reason is that, in their teams, all the collaborators have similar roles. For example, P06, a college student, said about CollabCommands: "...it's not like one collaborator is more useful and would have used more commands than another person, necessarily. So yeah, I don't really see a usefulness to that".
256
+
257
+ The participants were also interested in having some control over which documents the tool draws feature usage from. They found this functionality useful if the other documents they included were similar to the current document. P04 commented about this functionality in CollabCommands: "I do find this valuable, because we do work with a lot of similar documents ... and especially because we're always looking to keep things consistent. So, I think having all shared would really help". Similarly, P09 said, "I wouldn't want it to do that by default because different documents, ... are trying to do different things ... the commands that I use in one might not necessarily be the same that I use in the other. But the ability to do that, having that option is fine".
258
+
259
+ #### 5.4.5 Theme 4: Too Detailed Information About the Collaborators' Actions Could Make Users Feel Micromanaged and Self-Conscious
260
+
261
+ The participants expressed concerns about the detailed information that some design concepts provide. Indeed, our design concepts provide information about who used a feature, how often, and how recently, to explain why the feature may be relevant to the user. The designs differ in their level of information detail. For example, NewsFeat provides more detailed information, showing the exact number of times a named collaborator used a command on the same day. On the other hand, CollabPatina uses color-coded highlights to convey how frequently the user's collaborators use a command, without identifying the collaborators.
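+
+ To make this contrast concrete, below is a minimal sketch (in TypeScript, with all names hypothetical and not drawn from any implemented prototype) of the kind of usage record such concepts could draw on: a NewsFeat-style rendering surfaces every field, while a CollabPatina-style rendering collapses the records into an anonymous frequency band.
+
+ ```typescript
+ // Hypothetical record of one collaborator's use of a command.
+ interface UsageRecord {
+   commandId: string;      // which feature was used
+   collaboratorId: string; // who used it
+   count: number;          // how often
+   lastUsed: Date;         // how recently
+ }
+
+ // NewsFeat-style rendering: exposes every detail, including exact counts.
+ function detailedEntry(r: UsageRecord, displayName: string): string {
+   return `${displayName} used "${r.commandId}" ${r.count} times, last on ${r.lastUsed.toDateString()}`;
+ }
+
+ // CollabPatina-style rendering: keeps only an anonymous frequency band.
+ type FrequencyBand = "rare" | "occasional" | "frequent";
+ function anonymousBand(records: UsageRecord[]): FrequencyBand {
+   const total = records.reduce((sum, r) => sum + r.count, 0);
+   return total >= 20 ? "frequent" : total >= 5 ? "occasional" : "rare";
+ }
+ ```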
262
+
263
+ Although seeing more detailed information can benefit the user, as discussed in the previous themes, this information could also lead to feelings of being micromanaged and could cause anxiety among users. For example, when we prompted P07 about how they felt when they saw their collaborators' avatars, they said, "when I think about seeing collaborators' names using it, I feel like I am a very picky production manager who's trying to micromanage people and make them work faster". Similarly, when we asked P00 for their reactions regarding the recency information in NewsFeat, they said, "Maybe they can have just a vague recents. [...] I wouldn't prefer an option to share daily because then there's an added pressure". P14 commented regarding detailed frequency information: "If there is a command that I have not been using that often, I would feel that I am not contributing that much".
264
+
265
+ Some participants felt that detailed information could affect their decision to use a specific design concept and even suggested design changes. For example, P09 said about NewsFeat, "It would definitely make it less invasive if it was just a listing of [the collaborator's] most used commands without any numbers". Some participants suggested that they would like the ability to hide information to feel less stressed about the information they share. P06 said, "When I'm giving my permission, maybe I can hide one thing I don't want to show, or things I don't want to show off. Yes. I am giving you permission, but you can see this part, but I will hide the parts I don't want you to see".
266
+
267
+ We observed that individual differences related to professional dynamics and personality could affect how users feel about the level of shared detail. Problematic professional dynamics, such as the position within the organization's hierarchy and the relationship between the user and their collaborators, could amplify micromanagement and self-consciousness issues. For example, P07 commented on their experience with a previous manager: "it is just about who are you working with. [...] I've worked with some kind of a person who had psychological disorders, and the minimum mistake you made here will come to your very harsh way and he will give you some psychological difficulties [...] and that's the reason I wouldn't want to see my name is that there too: the blaming point". Also, the user's personality could affect how they perceive detailed information. If the user is more prone to stressful situations, they may be less open to seeing and sharing detailed software usage information. For example, P00 said, "My boss is super understanding, but I also struggle with anxiety. [...] So to have this other pressure of... I think people deserve a little bit more leniency and every detail shouldn't be shared with the people they're working with".
268
+
269
+ ## 6 REFLECTION ON THE DESIGN SPACE
270
+
271
+ The findings from the elicitation study suggest that designers should consider all five dimensions when designing feature awareness tools based on the user's collaborators; none of the dimensions in our design space were shown to be unimportant. To further probe the participants' preferences, we went through the participants' transcripts to specifically look at comments related to each design dimension. We then positioned the participants' comments for each design dimension within the design space (Fig. 3). For example, P02's comment "I don't think I would really care to know who specifically out of my group uses these features" suggested that P02's preference for D1: Number of active collaborators leaned strongly towards all active editors. In the rest of this section, we reflect on our key findings on user preference within the design space, propose a potential design dimension to expand the design space (as illustrated in Fig. 3), and finally discuss the implications for future system designs.
272
+
273
+ We saw that most participants want to include only a subset of the data sources that the feature awareness tool draws from; they want the ability to control which collaborators (D1) and documents (D2) are included/excluded. This is an example where participants did not show a preference for either end of the spectrum (Fig. 3 - D1 & D2). As a design implication, we imagine an interface that by default includes all collaborators and the current document, while easily allowing further control with interactive widgets.
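+
+ As a rough illustration of this implication (a sketch under our own assumptions; the names and API shape are hypothetical, not an implemented system), the default scope could cover the current document and all of its active editors, with lightweight widget actions for excluding collaborators or pulling in similar documents:
+
+ ```typescript
+ interface AwarenessScope {
+   documents: Set<string>;             // document ids the tool draws from
+   excludedCollaborators: Set<string>; // opt-out list; empty by default
+ }
+
+ // Sensible default: the current document and every active editor (D1 & D2).
+ function defaultScope(currentDocId: string): AwarenessScope {
+   return { documents: new Set([currentDocId]), excludedCollaborators: new Set() };
+ }
+
+ // Lightweight widget actions for fine-grained control.
+ function excludeCollaborator(scope: AwarenessScope, collaboratorId: string): void {
+   scope.excludedCollaborators.add(collaboratorId);
+ }
+ function includeSimilarDocument(scope: AwarenessScope, docId: string): void {
+   scope.documents.add(docId);
+ }
+
+ // Only usage records inside the scope feed the awareness visualization.
+ function inScope(scope: AwarenessScope, docId: string, collaboratorId: string): boolean {
+   return scope.documents.has(docId) && !scope.excludedCollaborators.has(collaboratorId);
+ }
+ ```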
274
+
275
+ We also observed that users had a strong preference for implicit comparison (Fig. 3 - D3) and generally preferred to have as little involvement as possible (D4). As a design implication, we propose that a system must make it easy for users to locate highlighted features within the interface. This can be accomplished with a solution like CollabPatina or with a hybrid solution that lists the highlighted features like CollabCommands but provides additional support for locating a feature when the user interacts with it in the list. Beyond locating a feature, participants were willing to have some involvement for features they deem especially valuable; for example, they would ask follow-up questions or actively recommend features to their collaborators (as in NewsFeat). Thus, with respect to the user involvement dimension, participants had a preference for the low involvement end of the spectrum, but there were some varying opinions (Fig. 3 - D4). As a design implication, the system should maximize the information related to the highlighted feature while minimizing user involvement. However, the system should provide non-trivial information if the user wishes to interact more with it.
276
+
277
+ Finally, we observed that participants expressed a strong interest in using these tools to both find new features and to help them and their collaborators adopt common feature usage practices (D5). We see, therefore, that participants saw value in being exposed to the features that their collaborators use, both the ones that the user isn't aware of and the ones that the user already knows of (Fig. 3 - D5).
278
+
279
+ Our findings highlight a trade-off between the ability to view detailed usage information (which collaborator used a feature, how often they used it, and how recently) and feeling micromanaged and self-conscious. Indeed, many of the benefits highlighted in Themes 1, 2, and 3 depend on the user having access to this information. However, that same information can evoke negative feelings, as discussed in Theme 4. Striking the right balance in how to present this information is an important design challenge.
280
+
281
+ Based on our results, we propose to expand our design space by adding Detail of feature usage information as an emerging design dimension. At one end is *Low* level of detail, where designers could reveal collaborators' usage by using language (or a visual indicator) that describes the behavior but avoids specific numerical values (for example, "frequently used a command" vs. "used the command 20 times"). At the other end is *High* level of detail, where designers could use precise numbers, dates, and names. An example of *Low* level of detail is CollabPatina, which uses color-coded indicators to highlight commands that the user's collaborators frequently use, while an example of *High* level of detail is NewsFeat. This dimension is not independent of the other dimensions. For example, a design concept cannot offer explicit comparison (D3) without using detailed information. Likewise, it cannot offer the ability to control which collaborators the feature awareness system draws on without a *High* level of detail about the collaborators' identities.
282
+
283
+ We observed that participants preferred a low level of detail, especially for the recency and frequency of feature usage. They were more comfortable with a system that provides more detailed identification information about the collaborators (Fig. 3 - D6). As a design implication, we propose a system that avoids numerical values for the frequency and recency of command usage but can allow a high level of detail about which collaborators used a feature. Our concepts displayed the avatars of individual collaborators, but a future direction is to include other identification information, such as the collaborator's role within the company or their technical expertise.
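+
+ One way to realize this implication, sketched below with thresholds and wording that are entirely our own assumptions, is to coarsen raw counts and timestamps into qualitative phrases while keeping collaborator identity at a high level of detail:
+
+ ```typescript
+ // Map a raw usage count to hedged language ("used the command 20 times"
+ // becomes "frequently used"), per the Low end of the detail dimension.
+ function describeFrequency(count: number): string {
+   if (count >= 15) return "frequently used";
+   if (count >= 5) return "occasionally used";
+   return "rarely used";
+ }
+
+ // Coarsen recency into vague buckets instead of exact dates.
+ function describeRecency(lastUsed: Date, now: Date = new Date()): string {
+   const days = (now.getTime() - lastUsed.getTime()) / 86_400_000; // ms per day
+   return days < 7 ? "this week" : days < 30 ? "this month" : "a while ago";
+ }
+
+ // Identity stays at a High level of detail (name, role), which
+ // participants were more comfortable with.
+ function awarenessMessage(name: string, role: string, command: string,
+                           count: number, lastUsed: Date): string {
+   return `${name} (${role}) ${describeFrequency(count)} "${command}" ${describeRecency(lastUsed)}`;
+ }
+ ```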
284
+
285
+ ## 7 OVERALL DISCUSSION
286
+
287
+ Current solutions that are based on individual users [26, 38] require users to stop their current tasks to either have brief video chats or watch targeted video tutorials. Complementary to this approach, we aimed to leverage the user's collaborators to facilitate in-situ feature discovery while minimizing their involvement and task interruptions. Participants perceived collaborator-based feature awareness tools to be valuable and effective for discovering and adopting common usage practices, but also noted potential issues with self-consciousness and micromanagement.
288
+
289
+ We reflect on the value of our approach in terms of providing remote over-the-shoulder learning and how it relates to remote learning from crowd communities. We then discuss our key findings with respect to the need for user and collaborator control over the usage information that is shared.
290
+
291
+ ### 7.1 Supporting Remote Over-the-Shoulder Learning
292
+
293
+ Software users often rely on their collaborators to learn new features by observing them [60, 70]. But with the increase in remote work, especially during the COVID pandemic, such over-the-shoulder (OTS) learning opportunities are limited. Most participants noted that, unfortunately, current tools provide little support for in-situ software learning and knowledge-sharing, forcing them to coordinate back and forth with their collaborators using external applications (e.g., emails or text messages). The insights from our work can help designers tackle the challenge of supporting in-situ "remote over-the-shoulder learning", especially among collaborators working on shared documents. Although some recent work [38] has investigated how to support remote OTS learning using video chat, it seems more targeted at complex problems. In contrast, our work proposes more lightweight in-situ techniques for raising feature awareness among collaborators. A future direction is to design support systems that combine various types of remote OTS learning that vary in the user involvement they require and the complexity of the task at hand.
294
+
295
+ ### 7.2 Feature Awareness Tools Based on Different User Communities
296
+
297
+ In this work we have investigated an alternative to crowd-based approaches by relying on direct collaborators for raising feature awareness. Participants felt that direct collaborators can identify the features that would help them complete their tasks efficiently. Interestingly, participants also commented on the idea of exposing their own data to help their collaborators (Theme 1) discover features that they know to be useful. Previous work has focused on how the user can benefit from having access to the usage habits of various user communities [53, 56, 59]. Our work highlights that with a more local community, users also see specific benefits in contributing their data.
298
+
299
+ While our work has focused on the user's direct collaborators rather than the crowd, we do not see the different user communities as competitors. Each user community can help the user in different ways to raise feature awareness and can even complement each other. Feature awareness systems based on the user's collaborators may be best for helping users to identify the features needed in their current context, i.e., the document they are currently working on. In contrast, feature awareness systems based on the crowd may be best for helping users to expand their feature vocabulary beyond the set of features that their collaborators are using.
300
+
301
+ A possible future direction is to explore hybrid solutions that support feature awareness based on different user communities. There are several potential design challenges herein. For example, how can we visually distinguish the various user communities? How can we allow the user to switch between user communities and customize their system easily? How can hybrid systems help tackle the privacy concerns highlighted in our elicitation study?
302
+
303
+ ### 7.3 Supporting User Control of Data Sources Used for Raising Feature Awareness
304
+
305
+ Theme 3 discussed how the participants wanted control over the data sources used to support their feature awareness (i.e., which subset of collaborators is viewed and the ability to include similar documents in the comparison), for purposes such as tracking a collaborator who has worked on a particular element of the document or is technically savvy. One participant commented that determining the collaborators of interest could be a potential challenge. Although we suspect that this would not be a problem for a document that the user is actively working on, it could be a problem when they include similar documents or start working on a document that their collaborators have already been working on. In these cases, it could be useful for the system to highlight collaborators of interest (i.e., the collaborators who worked on the same graphical elements, or the collaborators who are the most active). One potential issue, however, is that exposing each collaborator's role can lead to the same problems discussed in Theme 4.
306
+
307
+ ![01963e70-187a-7d5c-b5e7-32099abf3a23_9_291_142_1214_431_0.jpg](images/01963e70-187a-7d5c-b5e7-32099abf3a23_9_291_142_1214_431_0.jpg)
308
+
309
+ Figure 3: Based on a targeted analysis of the transcripts, we provide a visual representation of the approximate relative variation in participants’ preferences across the design dimensions. The width of the ellipses provides an indicator of the divergence of opinion.
310
+
311
+ ### 7.4 Allowing Collaborator Control Over What Information They Share
312
+
313
+ Theme 4 highlighted how sharing detailed personal feature usage information might make users feel self-conscious, stressed, and micromanaged. However, we also noticed a divergence of opinion, meaning that some participants were more comfortable sharing detailed personal feature usage than others. This divergence could be explained by prior work on the factors that affect users' decisions to share personal information in order to benefit from the system they use.
314
+
315
+ For example, privacy calculus theory [15] views these decisions as a rational process where users perform a subjective cost-benefit analysis regarding disclosing personal information. This disclosure happens if they anticipate that the benefits outweigh the risks of privacy loss. Work related to privacy calculus has highlighted interesting insights, such as how readiness to embrace new technology [46], self-efficacy [6], trust [52], and amount of involvement [67] can affect the user's decision to disclose personal information. Furthermore, prior work has identified different personas [46] based on the value users put on the perceived benefits and privacy risks. A future direction is to investigate how these insights apply to our context and the potential design implications.
316
+
317
+ We also want to explore ways to give users control over what information they share and the detail of this information. In Theme 4, we discussed, for example, a participant who asked for the ability to hide certain commands they used, and some participants who asked for varying levels of detail sharing. An important future direction is to explore how users can customize the level of detail they share, balancing privacy with the benefits gained by sharing. The challenge is accomplishing this customization in a lightweight manner, given that users generally do not want high user involvement.
318
+
319
+ One possibility is to give users fine control over when they share their feature usage. For example, users could choose to share specific actions by enabling an option in the menu and, when they are done with their task, disable the sharing. An alternative is to let users review the highlighted features that the tool has chosen when they close the collaborative editor. This solution could help create "learning events" and highlight the features that the user thinks their collaborators would benefit from.
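+
+ A minimal sketch of this idea (the API shape is hypothetical): sharing is off until the user enables it, captured actions are held in a pending buffer, and closing the editor triggers a review step before anything is published to collaborators.
+
+ ```typescript
+ class SharingSession {
+   private enabled = false;
+   private pending: string[] = []; // command ids captured while sharing is on
+
+   enable(): void { this.enabled = true; }
+   disable(): void { this.enabled = false; }
+
+   // Called by the editor on every command invocation.
+   record(commandId: string): void {
+     if (this.enabled) this.pending.push(commandId);
+   }
+
+   // On editor close, the user reviews and prunes what gets shared,
+   // turning publication into a deliberate "learning event".
+   reviewAndPublish(keep: (commandId: string) => boolean): string[] {
+     const shared = this.pending.filter(keep);
+     this.pending = [];
+     return shared; // would be sent to the awareness service
+   }
+ }
+ ```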
320
+
321
+ ## 8 LIMITATIONS AND FUTURE WORK
322
+
323
+ The video prototypes used in our elicitation study did not discuss differences in collaborators' roles (e.g., within one organization) because we wanted participants to ground their feedback in their own experiences. However, most participants did not feel that the different roles of their collaborators impacted their perceptions of our design concepts. Only a few participants mentioned certain professional dynamics that may increase the fear of micromanagement, for example, if there is a competitive culture in their team. Future work could broaden the participant sample to further probe other social factors, for example by including more diverse age groups and participants with different remote-working experiences, as well as systematically explore how the role of the user within the company (e.g., manager or subordinate) can affect the user's perception of feature awareness based on the user's collaborators.
324
+
325
+ We designed each of the five concepts as an independent support mechanism, but many of their properties could work in combination. Combining properties would be an interesting future direction given that two of the most well-received designs, CollabPatina and NewsFeat, offer different functionalities. Our elicitation study used video prototypes to probe participants' reactions and perceptions while reducing biases due to potential implementation issues. One potential direction is to build a feature awareness tool that incorporates aspects of the design concepts that were well received. With this tool, we could conduct longitudinal studies to assess how feature awareness based on the user's actual collaborators impacts the user's software usage habits over time.
326
+
327
+ ## 9 CONCLUSION
328
+
329
+ Our work contributes insights into how we can raise serendipitous feature awareness in remote shared contexts based on a user's collaborators. Drawing upon our informal formative study, prior work, and our own experiences, we created a design space, and then generated five design concepts that exercise this design space of serendipitous feature discovery. Through our elicitation study, we uncovered attitudes and perceptions towards feature awareness tools based on the user's collaborators, highlighting promising design directions and design elements, but also revealing sensitivities that need to be accommodated through careful design. Our work opens up possibilities for new tools that can leverage the user's collaborators' feature usage to provide over-the-shoulder learning in remote contexts. Altogether, it offers a promising direction for addressing feature learnability through improved feature discoverability, a longstanding challenge in HCI.
330
+
331
+ ## 10 ACKNOWLEDGMENTS
332
+
333
+ This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) grant "Making it personal: tools and techniques for fostering effective user interaction with feature-rich software" and by European Research Council (ERC) grant n° 695464 "ONE: Unified Principles of Interaction".
334
+
335
+ ## REFERENCES
+
+ [1] M. S. Ackerman and T. W. Malone. Answer garden: A tool for growing organizational memory. In Proceedings of the ACM SIGOIS and IEEE CS TC-OA Conference on Office Information Systems, COCS '90, p. 31-39. Association for Computing Machinery, New York, NY, USA, 1990. doi: 10.1145/91474.91485
336
+
337
+ [2] S. Aggarwal, R. Garg, A. Sancheti, B. P. R. Guda, and I. A. Burhanuddin. Goal-driven command recommendations for analysts. In Fourteenth ACM Conference on Recommender Systems, p. 160-169. Association for Computing Machinery, New York, NY, USA, 2020.
338
+
339
+ [3] P. Akiki. Generating contextual help for user interfaces from software requirements. IET Software, 13, 10 2018. doi: 10.1049/iet-sen.2018.5163
340
+
341
+ [4] J. Alvina, A. Bunt, P. K. Chilana, S. Malacria, and J. McGrenere. Where is that feature? designing for cross-device software learnability. In Proceedings of the 2020 ACM Designing Interactive Systems Conference, p. 1103-1115. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3357236.3395506
342
+
343
+ [5] K. Beck. Extreme programming explained: embrace change. Addison-Wesley Professional, 2000.
344
+
345
+ [6] H.-T. Chen and W. Chen. Couldn't or wouldn't? the influence of privacy concerns and self-efficacy in privacy management on privacy protection. Cyberpsychology, Behavior, and Social Networking, 18(1):13-19, 2015. doi: 10.1089/cyber.2014.0456
346
+
347
+ [7] P. K. Chilana, N. Hudson, S. Bhaduri, P. Shashikumar, and S. K. Kane. Supporting remote real-time expert help: Opportunities and challenges for novice 3d modelers. In J. Cunha, J. P. Fernandes, C. Kelleher, G. Engels, and J. Mendes, eds., 2018 IEEE Symposium on Visual Languages and Human-Centric Computing, VL/HCC 2018, Lisbon, Portugal, October 1-4, 2018, pp. 157-166. IEEE Computer Society, 2018. doi: 10.1109/VLHCC.2018.8506568
348
+
349
+ [8] P. K. Chilana, A. J. Ko, and J. O. Wobbrock. Lemonaid: Selection-based crowdsourced contextual help for web applications. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12, p. 1549-1558. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2207676.2208620
350
+
351
+ [9] A. Ciborowska and K. Damevski. Recognizing developer activity based on joint modeling of code and command interactions. IEEE Access, 8:211653-211664, 2020. doi: 10.1109/ACCESS.2020.3040156
352
+
353
+ [10] V. Clarke, V. Braun, and N. Hayfield. Thematic analysis. Qualitative psychology: A practical guide to research methods, pp. 222-248, 2015.
354
+
355
+ [11] M. Claypool, P. Le, M. Wased, and D. Brown. Implicit interest indicators. In Proceedings of the 6th international conference on Intelligent user interfaces, pp. 33-40, 2001.
356
+
357
+ [12] E. B. Cutrell, M. Czerwinski, and E. Horvitz. Effects of instant messaging interruptions on computing tasks. In CHI '00 Extended Abstracts on Human Factors in Computing Systems, CHI EA '00, p. 99-100. Association for Computing Machinery, New York, NY, USA, 2000. doi: 10.1145/633292.633351
358
+
359
+ [13] G. D'Angelo, A. Di Iorio, and S. Zacchiroli. Spacetime characterization of real-time collaborative editing. Proc. ACM Hum.-Comput. Interact., 2(CSCW), nov 2018. doi: 10.1145/3274310
360
+
361
+ [14] S. Davidoff, M. K. Lee, A. K. Dey, and J. Zimmerman. Rapidly exploring application design through speed dating. In J. Krumm, G. D. Abowd, A. Seneviratne, and T. Strang, eds., UbiComp 2007: Ubiquitous Computing, pp. 429-446. Springer Berlin Heidelberg, Berlin, Heidelberg, 2007. doi: 10.1007/978-3-540-74853-3_25
362
+
363
+ [15] T. Dinev and P. Hart. An extended privacy calculus model for e-commerce transactions. Information systems research, 17(1):61-80, 2006.
364
+
365
+ [16] M. Ekstrand, W. Li, T. Grossman, J. Matejka, and G. Fitzmaurice. Searching for software learning resources using application context. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, UIST '11, p. 195-204. Association for Computing Machinery, New York, NY, USA, 2011. doi: 10.1145/2047196.2047220
366
+
367
+ [17] G. Fischer. User modeling in human-computer interaction. User modeling and user-adapted interaction, 11(1):65-86, 2001. doi: 10.1023/A:1011145532042
368
+
369
+ [18] K. Z. Gajos, D. S. Weld, and J. O. Wobbrock. Automatically generating personalized user interfaces with supple. Artificial Intelligence, 174(12):910-950, 2010. doi: 10.1016/j.artint.2010.05.005
370
+
371
+ [19] M. Gasparic and F. Ricci. Ide interaction support with command recommender systems. IEEE Access, 8:19256-19270, 2020. doi: 10.1109/ACCESS.2020.2967840
374
+
375
+ [20] C. Gautreau. Motivational factors affecting the integration of a learning management system by faculty. Journal of Educators Online, 8(1):n1, 2011. doi: 10.9743/JEO.2011.1.2
376
+
377
+ [21] E. Giannisakis, G. Bailly, S. Malacria, and F. Chevalier. Iconhk: Using toolbar button icons to communicate keyboard shortcuts. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, p. 4715-4726. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3025453.3025595
378
+
379
+ [22] M. Goldman, G. Little, and R. C. Miller. Real-time collaborative coding in a web ide. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, UIST '11, p. 155-164. Association for Computing Machinery, New York, NY, USA, 2011. doi: 10.1145/2047196.2047215
380
+
381
+ [23] T. Grossman, P. Dragicevic, and R. Balakrishnan. Strategies for accelerating on-line learning of hotkeys. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '07, p. 1591-1600. Association for Computing Machinery, New York, NY, USA, 2007. doi: 10.1145/1240624.1240865
382
+
383
+ [24] T. Grossman and G. Fitzmaurice. Toolclips: An investigation of contextual video assistance for functionality understanding. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '10, p. 1515-1524. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10.1145/1753326.1753552
384
+
385
+ [25] T. Grossman, G. Fitzmaurice, and R. Attar. A survey of software learnability: Metrics, methodologies and guidelines. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '09, p. 649-658. Association for Computing Machinery, New York, NY, USA, 2009. doi: 10.1145/1518701.1518803
386
+
387
+ [26] T. Grossman, J. Matejka, and G. Fitzmaurice. Chronicle: Capture, exploration, and playback of document workflow histories. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, UIST '10, p. 143-152. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10.1145/1866029.1866054
388
+
389
+ [27] R. Gulotta, A. Sciuto, A. Kelliher, and J. Forlizzi. Curatorial agents: How systems shape our understanding of personal and familial digital information. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, p. 3453-3462. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2702123.2702297
390
+
391
+ [28] P. J. Guo, J. White, and R. Zanelatto. Codechella: Multi-user program visualizations for real-time tutoring and collaborative learning. In 2015 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), pp. 79-87, 2015. doi: 10.1109/VLHCC.2015.7357201
392
+
393
+ [29] B. Hartmann, D. MacDougall, J. Brandt, and S. R. Klemmer. What would other programmers do: Suggesting solutions to error messages. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '10, p. 1019-1028. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10.1145/1753326.1753478
394
+
395
+ [30] W. Hill, L. Stead, M. Rosenstein, and G. Furnas. Recommending and evaluating choices in a virtual community of use. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '95, p. 194-201. ACM Press/Addison-Wesley Publishing Co., USA, 1995. doi: 10.1145/223904.223929
396
+
397
+ [31] J. Hiscott, M. Alexandridi, M. Muscolini, E. Tassone, E. Palermo, M. Soultsioti, and A. Zevini. The global impact of the coronavirus pandemic. Cytokine & growth factor reviews, 53:1-9, 2020. doi: 10.1016/j.cytogfr.2020.05.010
398
+
399
+ [32] D. Horowitz and S. D. Kamvar. The anatomy of a large-scale social search engine. In Proceedings of the 19th International Conference on World Wide Web, WWW '10, p. 431-440. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10.1145/1772690.1772735
400
+
401
+ [33] E. J. Horvitz, J. S. Breese, D. Heckerman, D. Hovel, and K. Rommelse. The lumiere project: Bayesian user modeling for inferring the goals and needs of software users. arXiv preprint arXiv:1301.7385, 2013.
404
+
405
+ [34] P. Hou, H. Zhang, Y. Wu, J. Yu, Y. Miao, and Y. Tai. FindCmd: A personalised command retrieval tool. IET Software, 15(2):161-173, mar 2021. doi: 10.1049/sfw2.12015
408
+
409
+ [35] I. Hsi and C. Potts. Studying the evolution and enhancement of software features. In Proceedings 2000 International Conference on Software Maintenance, pp. 143-151, 2000. doi: 10.1109/ICSM.2000.883033
410
+
411
+ [36] N. Hudson, P. K. Chilana, X. Guo, J. Day, and E. Liu. Understanding triggers for clarification requests in community-based software help forums. In 2015 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), pp. 189-193, 2015. doi: 10.1109/VLHCC.2015.7357216
412
+
413
+ [37] S. Hudson, J. Fogarty, C. Atkeson, D. Avrahami, J. Forlizzi, S. Kiesler, J. Lee, and J. Yang. Predicting human interruptibility with sensors: A wizard of oz feasibility study. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '03, p. 257-264. Association for Computing Machinery, New York, NY, USA, 2003. doi: 10.1145/642611.642657
414
+
415
+ [38] N. Joshi, J. Matejka, F. Anderson, T. Grossman, and G. Fitzmaurice. Micromentor: Peer-to-peer software help sessions in three minutes or less. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, p. 1-13. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3313831.3376230
416
+
417
+ [39] M. A. A. Khan, V. Dziubak, and A. Bunt. Exploring personalized command recommendations based on information found in web documentation. In Proceedings of the 20th International Conference on Intelligent User Interfaces, IUI '15, p. 225-235. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2678025.2701387
418
+
419
+ [40] K. Kiani, P. K. Chilana, A. Bunt, T. Grossman, and G. Fitzmaurice. "i would just ask someone": Learning feature-rich design software in the modern workplace. In 2020 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), pp. 1-10, 2020. doi: 10.1109/VL/HCC50065.2020.9127288
420
+
421
+ [41] K. Kiani, G. Cui, A. Bunt, J. McGrenere, and P. K. Chilana. Beyond "one-size-fits-all": Understanding the diversity in how software newcomers discover and make use of help resources. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, p. 1-14. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290605.3300570
422
+
423
+ [42] B. Krisler and R. Alterman. Training towards mastery: Overcoming the active user paradox. In Proceedings of the 5th Nordic Conference on Human-Computer Interaction: Building Bridges, NordiCHI '08, p. 239-248. Association for Computing Machinery, New York, NY, USA, 2008. doi: 10.1145/1463160.1463186
424
+
425
+ [43] B. Lafreniere, A. Bunt, J. S. Whissell, C. L. A. Clarke, and M. Terry. Characterizing large-scale use of a direct manipulation application in the wild. In Proceedings of Graphics Interface 2010, GI '10, p. 11-18. Canadian Information Processing Society, CAN, 2010.
426
+
427
+ [44] B. Lafreniere, T. Grossman, and G. Fitzmaurice. Community enhanced tutorials: Improving tutorials with multiple demonstrations. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, p. 1779-1788. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2470654.2466235
428
+
429
+ [45] B. Lafreniere, T. Grossman, J. Matejka, and G. Fitzmaurice. Investigating the feasibility of extracting tool demonstrations from in-situ video content. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '14, p. 4007-4016. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2556288.2557142
430
+
431
+ [46] J.-M. Lee and J.-Y. Rha. Personalization-privacy paradox and consumer conflict with the use of location-based mobile commerce. Computers in Human Behavior, 63:453-462, 2016.
432
+
433
+ [47] G. Leshed, E. M. Haber, T. Matthews, and T. Lau. Coscripter: Automating & sharing how-to knowledge in the enterprise. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '08, p. 1719-1728. Association for Computing Machinery, New York, NY, USA, 2008. doi: 10.1145/1357054.1357323
434
+
435
+ [48] B. Lewis, G. d'Eon, A. Cockburn, and D. Vogel. Keymap: Improving keyboard shortcut vocabulary using norman's mapping. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, p. 1-10. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3313831.3376483
438
+
439
+ [49] W. Li, T. Grossman, and G. Fitzmaurice. Cadament: A gamified multiplayer software tutorial system. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '14, p. 3369-3378. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2556288.2556954
440
+
441
+ [50] W. Li, J. Matejka, T. Grossman, and G. Fitzmaurice. Deploying communitycommands: A software command recommender system case study. Proceedings of the National Conference on Artificial Intelligence, 4:2922-2929, 01 2014. doi: 10.1609/aimag.v36i3.2600
442
+
443
+ [51] W. Li, J. Matejka, T. Grossman, J. A. Konstan, and G. Fitzmaurice. Design and evaluation of a command recommendation system for software applications. ACM Transactions on Computer-Human Interaction (TOCHI), 18(2):1-35, 2011.
444
+
445
+ [52] C. Liao, C.-C. Liu, and K. Chen. Examining the impact of privacy, trust and risk perceptions beyond monetary transactions: An integrated model. Electronic Commerce Research and Applications, 10(6):702-715, 2011. doi: 10.1016/j.elerap.2011.07.003
446
+
447
+ [53] F. Linton, D. Joy, H.-P. Schaefer, and A. Charron. Owl: A recommender system for organization-wide learning. Educational Technology & Society, 3(1):62-76, 2000.
448
+
449
+ [54] F. Linton and H.-P. Schaefer. Recommender systems for learning: Building user and expert models through long-term observation of application use. User Modeling and User-Adapted Interaction, 10(2):181-208, 2000.
450
+
451
+ [55] S. Malacria, G. Bailly, J. Harrison, A. Cockburn, and C. Gutwin. Promoting hotkey use through rehearsal with exposehk. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, p. 573-582. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2470654.2470735
452
+
453
+ [56] S. Malacria, J. Scarr, A. Cockburn, C. Gutwin, and T. Grossman. Skillometers: Reflective widgets that motivate and help users to improve performance. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, UIST '13, p. 321-330. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2501988.2501996
454
+
455
+ [57] J. Matejka, T. Grossman, and G. Fitzmaurice. Ip-qat: In-product questions, answers, & tips. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, UIST '11, p. 175-184. Association for Computing Machinery, New York, NY, USA, 2011. doi: 10.1145/2047196.2047218
456
+
457
+ [58] J. Matejka, T. Grossman, and G. Fitzmaurice. Patina: Dynamic heatmaps for visualizing application usage. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, p. 3227-3236. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2470654.2466442
458
+
459
+ [59] J. Matejka, W. Li, T. Grossman, and G. Fitzmaurice. Communitycommands: Command recommendations for software applications. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology, UIST '09, p. 193-202. Association for Computing Machinery, New York, NY, USA, 2009. doi: 10.1145/1622176.1622214
460
+
461
+ [60] E. Murphy-Hill and G. C. Murphy. Peer interaction effectively, yet infrequently, enables programmers to discover new tools. In Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work, CSCW '11, p. 405-414. Association for Computing Machinery, New York, NY, USA, 2011. doi: 10.1145/1958824.1958888
462
+
463
+ [61] J. Novet. Google's g suite now has 6 million paying businesses, up from 5 million in feb. 2019, 2020.
464
+
465
+ [62] W. Odom, J. Zimmerman, S. Davidoff, J. Forlizzi, A. K. Dey, and M. K. Lee. A fieldwork of the future with user enactments. In Proceedings of the Designing Interactive Systems Conference, DIS '12, p. 338-347. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2317956.2318008
466
+
467
+ [63] F. Riahi, Z. Zolaktaf, M. Shafiei, and E. Milios. Finding expert users in community question answering. In Proceedings of the 21st International Conference on World Wide Web, WWW '12 Companion, p.
468
+
469
+ 791-798. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2187980.2188202
470
+
471
+ [64] J. Scarr, A. Cockburn, C. Gutwin, and P. Quinn. Dips and ceilings: Understanding and supporting transitions to expertise in user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '11, p. 2741-2750. Association for Computing Machinery, New York, NY, USA, 2011. doi: 10.1145/1978942.1979348
472
+
473
+ [65] U. Shardanand and P. Maes. Social information filtering: Algorithms for automating "word of mouth". In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '95, p. 210-217. ACM Press/Addison-Wesley Publishing Co., USA, 1995. doi: 10.1145/223904.223931
474
+
475
+ [66] B. Shneiderman. Direct manipulation: A step beyond programming languages (abstract only). In Proceedings of the Joint Conference on Easier and More Productive Use of Computer Systems. (Part - II): Human Interface and the User Interface - Volume 1981, CHI '81, p. 143. Association for Computing Machinery, New York, NY, USA, 1981. doi: 10.1145/800276.810991
476
+
477
+ [67] E. Swilley and R. E. Goldsmith. The role of involvement and experience with electronic commerce in shaping attitudes and intentions toward mobile commerce. International Journal of Electronic Marketing and Retailing, 1(4):370-384, 2007.
478
+
479
+ [68] B. Y. Thompson. The digital nomad lifestyle: (remote) work/leisure balance, privilege, and constructed community. International Journal of the Sociology of Leisure, 2:1-16, 03 2019. doi: 10.1007/s41978-018-00030-y
480
+
481
+ [69] M. B. Twidale. Over the shoulder learning: Supporting brief informal learning. Computer Supported Cooperative Work, 14(6):505-547, December 2005. doi: 10.1007/s10606-005-9007-7
482
+
483
+ [70] M. B. Twidale and K. Ruhleder. Over-the-Shoulder Learning in a Distance Education Environment. Learning, Culture and Community in Online Education: Research and Practice, pp. 177-194, 2004.
484
+
485
+ [71] L. Vermette, J. McGrenere, C. Birge, A. Kelly, and P. K. Chilana. Freedom to personalize my digital classroom: Understanding teachers' practices and motivations. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, p. 1-14. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290605.3300548
486
+
487
+ [72] L. Vermette, J. McGrenere, and P. K. Chilana. Peek-through customization: Example-based in-context sharing for learning management systems. In Proceedings of the 2020 ACM Designing Interactive Systems Conference, p. 1155-1167. Association for Computing Machinery, New York, NY, USA, 2020.
488
+
489
+ [73] F. Vitale, W. Odom, and J. McGrenere. Keeping and discarding personal data: Exploring a design space. In Proceedings of the 2019 on Designing Interactive Systems Conference, DIS '19, p. 1463-1477. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3322276.3322300
490
+
491
+ [74] X. Wang, B. Lafreniere, and T. Grossman. Leveraging community-generated videos and command logs to classify and recommend software workflows. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, p. 1-13. Association for Computing Machinery, New York, NY, USA, 2018.
492
+
493
+ [75] M. Wiebe, D. Y. Geiskkovitch, and A. Bunt. Exploring user attitudes towards different approaches to command recommendation in feature-rich software. In Proceedings of the 21st International Conference on Intelligent User Interfaces, IUI '16, p. 43-47. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2856767.2856814
494
+
495
+ [76] J. Zimmerman. Video sketches: Exploring pervasive computing interaction designs. IEEE Pervasive Computing, 4(4):91-94, oct 2005. doi: 10.1109/MPRV.2005.91
496
+
497
+ [77] J. Zimmerman, J. Forlizzi, and S. Evenson. Research through design as a method for interaction design research in hci. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '07, p. 493-502. Association for Computing Machinery, New York, NY, USA, 2007. doi: 10.1145/1240624.1240704
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/-qPznNJmVxx/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,317 @@
1
+ § PROMOTING FEATURE AWARENESS BY LEVERAGING COLLABORATORS' USAGE HABITS IN COLLABORATIVE EDITORS
2
+
3
+ Emmanouil Giannisakis*
4
+
5
+ University of British Columbia
6
+
7
+ Vancouver, Canada
8
+
9
+ Jessalyn Alvina†
10
+
11
+ Université Paris-Saclay, CNRS, Inria
12
+
13
+ Orsay, France
14
+
15
+ Andrea Bunt‡
16
+
17
+ University of Manitoba
18
+
19
+ Winnipeg, Canada
20
+
21
+ Parmit Chilana§
22
+
23
+ Simon Fraser University
24
+
25
+ Burnaby, Canada
26
+
27
+ Joanna McGrenere¶
28
+
29
+ University of British Columbia
30
+
31
+ Vancouver, Canada
32
+
33
+ § ABSTRACT
34
+
35
+ Users often rely on their collaborators to find relevant application features by observing them "over the shoulder" (OTS), usually in a synchronous co-located setting. However, as remote work settings have become more common, users can no longer rely on such in-person interaction with collaborators. Therefore, we investigate designs that help the user become aware of relevant features based on collaborators' feature usage habits. We created five design concepts as video prototypes which varied in five design dimensions: number of active collaborators, number of shared documents, specificity of comparison, user involvement, and goal of the feature awareness. Interviews (N = 18) probing the design concepts indicate that collaborator-based feature awareness would be valuable for discovering novel features and producing a consistent style across the shared document, but some users may feel micromanaged or self-conscious. We conclude by reflecting on and expanding our design space and discussing future design directions supporting remote OTS learning.
36
+
37
+ Index Terms: User Interfaces [User Interfaces]: Graphical user interfaces (GUI)-Empirical studies in interaction design;
38
+
39
+ § 1 INTRODUCTION
40
+
41
+ Modern software applications offer a large set of features which often include hundreds or thousands of different commands and keyboard shortcuts [35]. As a result, it is challenging for users to be aware of the available features and to identify which ones are relevant to their tasks [25, 60, 66]. Although various support tools and mechanisms exist that aim to raise a user's awareness of features, such as online documentation, tutorials, and videos [41], it has been shown that users tend to prefer social solutions, where a user learns about a new feature from other users [20, 40, 71]. Such solutions can draw on different "levels" of social communities, from the global level, often referred to as "the crowd", which includes Q&A forums, all the way down to a more local level, such as an individual in the same institution. For example, users commonly rely on their colleagues to discover relevant features by observing them "over-the-shoulder" (OTS) [60, 70] or by directly asking them for help [40]. This type of serendipitous feature discovery thrives in a synchronous co-located setting as users can leverage their shared work context and users tend to trust their colleagues more than other sources [60].
42
+
43
+ With the increase in remote work over the past few years [68], especially during the COVID-19 pandemic [31], in-person serendipitous interactions are far less frequent today, leaving fewer opportunities for feature discovery among colleagues. Screen sharing could potentially enable synchronous OTS interactions; however, a lack of support for communicating about the interactions makes discovering new features in this setting challenging [60, 72]. Prior work has also proposed tools that facilitate short synchronous help exchanges [7, 38] or provide additional persistent, asynchronous content [24, 72] (e.g., workflows from individuals). Such tools are useful, but they typically require the user to leave their current application and switch to another one, which can be disruptive for both the learner and the expert [60]. Therefore, we wondered: how could a user observe and leverage a colleague's software knowledge when working in remote asynchronous situations, without having to switch from one application to another?
44
+
45
+ Our overarching goal is to design in-application tools and techniques that promote feature awareness based on a colleague's software knowledge. We focus on leveraging the user's direct collaborators within the context of common document(s) in collaborative editor applications (e.g., all the users working on a Google Sheet document) to provide feature awareness from trusted sources, who are working on the same tasks. The popularity of collaborative editors has increased over the past decade as they offer a shared environment for users to work remotely, synchronously, or asynchronously [13, 61].
46
+
47
+ While there is much design inspiration from other feature awareness solutions in the literature, designs that will satisfy our particular goals are not immediately obvious. For example, some existing solutions recommend features based on system-determined "similar users" across all those who use a given application [58, 59]. These tools provide numerical command usage comparisons [59], which might be acceptable with "crowd-level" comparisons, but users might be less comfortable when comparisons are to known colleagues. Users might be comfortable sharing knowledge with their colleagues through Q&A approaches (e.g., AnswerGarden [1]), but these approaches lack application context. Hence, as a starting point we asked: What are the potential benefits, drawbacks, and design considerations for tools that aim to raise feature awareness by leveraging collaborator usage patterns and shared application documents?
48
+
49
+ To answer our question, we followed a Research through Design [77] approach. This approach focuses on the generation of design artifacts that are used as exemplars to probe people's reactions, attitudes, and perceptions, to produce research findings [77]. We first defined a design space based on the existing literature, our own experiences working in collaborative teams, and a small informal formative study. Our initial design space includes five dimensions (Fig. 1) that range from the number of active collaborators to the degree of involvement required of the user. We then generated five different design concepts which intentionally emphasized different aspects of the five design dimensions, and we created corresponding video prototypes [76]. We conducted a semi-structured interview study (N = 18) to elicit feedback on the potential benefits and drawbacks of the design concepts and to understand users' perceptions of points in the design space. We used this feedback to reflect on and expand the design space (Fig. 3).
50
+
51
+ *e-mail: em.giannisakis@gmail.com
52
+
53
+ †e-mail: jessalyn.alvina@lisn.upsaclay.fr
54
+
55
+ ‡e-mail: bunt@cs.umanitoba.ca
56
+
57
+ §e-mail: pchilana@cs.sfu.ca
58
+
59
+ ¶e-mail: joanna@cs.ubc.ca
60
+
61
+ This paper makes the following contributions: First, we outline five design dimensions to characterize the design space around raising feature awareness based on the user's collaborators in a shared application with common documents. These can be used as a generative resource for creating new tools. Second, we offer five alternative design concepts generated using the design space that showcase how the user's collaborators use the application. Our elicitation study probed and explored the space, identifying where the most promising design opportunities lie as well as limitations of our overall approach to raising feature awareness. For example, participants felt such tools would be valuable not only for discovering novel features but also for identifying features that could help a group of collaborators produce a consistent style across the shared document. That said, users might feel micromanaged and self-conscious. Third, we present concrete design implications and important future considerations for raising feature awareness based on the user's collaborators.
62
+
63
+ § 2 RELATED WORK
64
+
65
+ Feature awareness is an important part of software learnability and usability [25]. In this section, we focus on reviewing design efforts around raising feature awareness through social solutions that draw on user communities and individual users. We also briefly touch on technical solutions.
66
+
67
+ § 2.1 FEATURE AWARENESS BASED ON USER COMMUNITIES
68
+
69
+ Some prior work in feature awareness has utilized the usage habits of broad user communities such as all users of an application (crowd). CommunityCommands [59] recommends commands by implicitly comparing similar users from the crowd using collaborative filtering algorithms $\left\lbrack {{30},{65}}\right\rbrack$ . Patina $\left\lbrack {58}\right\rbrack$ also utilizes similar users from the crowd to highlight commands within the interface that the user most frequently uses and that other similar users most frequently use. As such, Patina provides a visual feature usage comparison. Owl $\left\lbrack {{53},{54}}\right\rbrack$ is also a feature recommendation system that compares the usage habits of the users within the same organization as the main user to recommend relevant features. These tools operate on the command level and offer a lightweight way to help users become aware of relevant features. Although these solutions can provide useful feature recommendations while minimizing the user's involvement, it can be difficult for the users to assess the usefulness of the highlighted features (i.e., relevancy) as they may not have enough information about the users that the system is based on (i.e., trust on the sources) $\left\lbrack {{60},{75}}\right\rbrack$ .
70
+
71
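To make the collaborative-filtering step concrete, the sketch below shows a toy user-based variant over command-frequency logs: users are compared by the cosine similarity of their frequency vectors, and commands the target has never used are scored by similarity-weighted counts. The logs, names, and scoring are invented for illustration; the systems cited above use their own, more elaborate algorithms.

```python
from math import sqrt

# Invented per-user command-frequency logs for illustration.
logs = {
    "alice": {"bold": 40, "insert_table": 12, "find_replace": 7},
    "bob":   {"bold": 35, "insert_table": 9, "word_count": 5},
    "carol": {"track_changes": 20, "word_count": 11},
}

def cosine(u, v):
    """Cosine similarity between two sparse command-frequency vectors."""
    dot = sum(u[c] * v[c] for c in set(u) & set(v))
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(target, logs, k=2):
    """Score commands the target never uses, weighting each occurrence
    by how similar its user is to the target (user-based filtering)."""
    scores = {}
    for other, freqs in logs.items():
        if other == target:
            continue
        sim = cosine(logs[target], freqs)
        for cmd, count in freqs.items():
            if cmd not in logs[target]:
                scores[cmd] = scores.get(cmd, 0.0) + sim * count
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice", logs))  # 'word_count' ranks first here
```
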
Prior work has also focused on recommending workflows (i.e., sequences of commands) based on the community. CADament helps users observe other users by providing a viewport onto their screens [49], while CoScripter [47] allows users to create and share scripts that automate processes within the same enterprise. Other tools [44, 74] recommend relevant workflow videos generated from the crowd. These tools can increase the user's understanding of the software's capabilities, but they require the user to stop their current task to watch the generated videos. Prior work has also leveraged broader user communities to help the user understand how to use their software. For example, AnswerGarden [1] offers a Q&A repository within the organization, while other tools [8, 16, 29, 57] leverage the knowledge of the broader user community through widgets that are integrated into the user's applications. For example, in LemonAid [8], the user can select an application widget to see community questions and answers related to that widget. Tools like AnswerGarden can help users get help from their direct collaborators. However, users need to interrupt their current task [60], and it can be difficult to locate useful answers in past discussions [8].

Our work builds on community-based feature awareness tools that offer lightweight and in-application solutions (e.g., CommunityCommands [59] and Patina [58]). However, instead of focusing on large user communities such as the user's organization or all users of an application, we focus on the close-knit group of a user's collaborators on a shared document. We hypothesize that by focusing on this group, we can avoid challenges that systems based on the broader user community often face, such as understanding the user's goals [2, 9] and finding similar users within the community.

§ 2.2 FEATURE AWARENESS BASED ON INDIVIDUAL USERS

Some tools aim to mediate the social interaction between two users to help one or both discover relevant features. Users prefer this type of social solution [40, 71], where, for example, they can get task-specific advice by observing what one of their colleagues is doing "over the shoulder" [69]. Such interactions can be very effective, yet they do not happen frequently [60] because they can be time-consuming as well as difficult to coordinate and record [72].

Prior work has aimed to address the issues of coordination. Some systems have focused on helping users find experts who can respond to their questions [32, 36, 38], which can minimize the response time [63]. MicroMentor [38], for example, helps the user arrange 3-minute sessions with an expert user. MarmalAid [7] anchors real-time chat conversations to individual graphical widgets of a 3D modeling tool. These tools require high involvement from users, as they have to interrupt their current task to join a video call for the learning exchange. Other tools aim to help the user find relevant workflows by observing their colleagues' work asynchronously. For example, Customizer [72] allows users to see how their colleagues have customized their tools and thus helps them find relevant workflows. Other tools [24, 26, 45] record and extract videos that show the workflows individual users follow to complete a task. Finally, some tools [22, 28] aim to optimize synchronous one-to-one interaction, especially in IDEs while users are in pair programming sessions [5]. The main goal of these tools [22, 28] is to help the user understand their collaborators' actions, specifically focusing on their collaborators' changes in the shared document.

The above tools can be effective but are also time-consuming and require users to stop their current tasks to interact with other users. Therefore, these tools may be more appropriate for helping users solve more complex issues that go beyond feature awareness. Our work focuses on feature awareness and explores design solutions that aim to minimize user involvement, and thus task interruption, while taking advantage of the user's direct collaborators.

§ 2.3 TECHNICAL SOLUTIONS TO RAISING FEATURE AWARENESS

Prior work has also proposed technical solutions to raising feature awareness. For example, tip-of-the-day tools [19] proactively introduce available functionality, and quick assist [19] (often available in IDEs) proposes quick fixes when developers face a problem. These tools propose features that are not necessarily relevant to the user or novel [17]. Other tools highlight features based on the user's current context [11, 12, 18], current actions [33, 37], or command usage history [3, 9, 34]. The challenge with these tools is that their domain knowledge is often predesigned and self-contained, without considering community knowledge, which constantly evolves [51]. An exception is QFRecs [39], which bases its recommendations on an application's online documentation, which can be kept up to date with the newest features. Finally, some tools highlight shortcut alternatives using notifications [23, 64], by integrating shortcut cues within the UI [21, 55], or by using external widgets [42, 48, 56]. While these tools offer reactive, contextual help, prior studies indicate that users tend to learn only a small subset of the available shortcuts [43, 54].

Our work explores a solution that focuses on collaborators' software usage habits to help the user identify the commands and keyboard shortcuts that they need to complete their current tasks.

§ 3 DESIGN SPACE

§ 3.1 METHODOLOGY OVERVIEW AND RATIONALE

Our review of prior work indicates that raising feature awareness based on the user's collaborators, while requiring only modest user involvement, is an under-explored space. While there are opportunities to apply design insights from related work on crowd-based approaches or solutions based on individual users, how to translate these insights and leverage the unique design opportunities afforded by this new context is unclear. Therefore, to systematically explore this design space, we used Research through Design (RtD) [77], an approach in interaction design research that intersects theories and technical opportunities to generate a concrete problem framing and a series of design artifacts (e.g., concepts, prototypes, and documentation of the design process). Prior work on raising feature awareness has often focused on proposing, implementing, and evaluating a single system, with the aim of understanding in depth how the proposed system can benefit the user. In contrast, our approach probes the potential roles, forms, and values of emerging near-future technology by using more than one design vision, as proposed in other works [14, 62]. Prior work has used a similar approach to investigate the design space around supporting cross-device learnability [4], data legacy [27], and personal data curation [73]. We aim to understand user reactions towards this under-explored problem space, to define concrete design goals, and to generate design implications for future implemented systems.

We applied the RtD approach as follows. We first carefully generated a set of design dimensions, as done similarly in [4, 73]. We generated this set by clustering and mapping insights from prior work, reflecting on the authors' personal experiences, and using findings from an informal formative study. During a series of research group meetings, we refined these insights into a set of five relevant design dimensions. These dimensions are not meant to be exhaustive; rather, they are those that seemed most prominent based on our insights from prior work and the informal formative study. We then used this set of design dimensions as a generative tool to create five design concepts in the form of video prototypes. Finally, we used these design concepts in an interview study to elicit participants' reactions towards the problem space and aspects of our design space.

In the remainder of this section, we describe our informal formative study and detail our proposed design space.

§ 3.2 INFORMAL FORMATIVE STUDY: METHOD AND ANALYSIS

We conducted an informal formative study with two goals in mind: 1) to understand how users currently learn from each other when collaborating remotely, and 2) to gather initial thoughts on how raising feature awareness based on their collaborators might impact their current practices. We advertised our study on a university mailing list. We recruited 11 participants (6 women and 5 men, 21-30 years old) with diverse occupations (e.g., accountant, data analyst, event planner), all of whom reported collaborating with others at least once per week using editors like Google Docs.

During a 60-minute Zoom session with each participant, we introduced an interactive prototype¹ that shows feature recommendations within an editor that differ in terms of 1) the user community from which the recommendations are derived (crowd-powered recommendations or recommendations from the user's collaborators on a shared document); and 2) whether or not the user's collaborators are directly identifiable in individual recommendations. We then elicited participants' reactions towards the problem space and each feature recommendation type. We analyzed participant feedback inductively and saw themes emerge related to participants' different goals for feature discovery, preferences for seeing recommendations from collaborators, and perceptions of how much time they wanted to invest in such a system. We used these initial insights to inform our design space, which we discuss in the next section.

§ 3.3 DESIGN SPACE DIMENSIONS

The informal formative study provided new insights on the potential benefits and drawbacks of tools that raise feature awareness based on the user's collaborators working on shared documents. We do not provide a comprehensive description of these findings here (in part because of some overlap with the elicitation study findings described later). Instead, in this section we discuss how we used the study findings, the related work, and the authors' own experiences to derive a design space. We describe each dimension and provide relevant participant quotes for the dimensions motivated by the informal study.

D1: Number of active collaborators: Our informal formative study suggested that some participants were more interested in the features that specific individuals were using rather than the features that the majority of their collaborators were using. For example, some participants noted that they would be more willing to try a feature if they perceived their collaborator to be technically savvy. As formative study participant FP00² commented, "If it was someone on my team who I know is really tech-savvy, I saw that they used certain functions more I might pay a little more attention to that". In contrast, prior work indicates that including more users allows the main user to discover a broader selection of features [50]. Therefore, this dimension investigates whether including a single collaborator (e.g., the technical expert) or more collaborators would help to raise feature awareness: on one end (Fig. 1-D1), we have a single active collaborator; on the other end, we have all active collaborators. By active collaborators, we mean the collaborators who have access to the document and actively edit it.

D2: Number of documents included: We based this dimension on our own (i.e., this paper's authors') experiences. Specifically, while discussing the D1 dimension, we realized that we often worked with the same group of collaborators to create multiple similar shared documents that follow similar formatting guidelines. For example, the same group of collaborators could work on multiple presentations. This dimension investigates whether including the collaborators' actions from only the current document or also from other similar shared documents would help the user become aware of relevant features. On one end (Fig. 1-D2), we have the current document only, while on the other end, we have all the documents that are shared across the same collaborators.

D3: Specificity of comparison: This dimension is based on existing work on raising feature awareness that explicitly [53, 54, 58] or implicitly [50, 58] compares the user's individual feature usage habits with the user community as an aggregate. On one end (Fig. 1-D3), we have tools that explicitly compare the user's actions with their collaborators' (e.g., with the use of visualizations). The goal of these tools is to help the user reflect on their actions and adjust their habits. On the other end, we have tools that implicitly compare the user's actions with their collaborators' to highlight relevant features.

D4: User involvement: An early motivation of this work was to investigate tools that raise feature awareness based on the user's collaborators while minimizing user involvement. However, our formative study suggested that users might be willing to invest more time in these systems under specific circumstances. One example that our participants gave was asking follow-up questions regarding a highlighted feature. FP01 said, "if I have any further questions or a detailed question, I know who I can talk to". With this dimension we want to investigate the amount of effort that the user and their collaborators need to invest in the tool for the user to discover relevant features. On one end we have low user involvement (Fig. 1-D4), where the system focuses on showing the relevant features without offering possibilities for further interaction (as in [54, 56, 58, 59]). On the other end, we have high user involvement, where the user needs to interact with the system and with their collaborators to find the relevant features (solutions that fall toward this end include [38]).

¹ We included figures of the prototype in the supplementary material.

² We use FPXX to refer to a participant in our formative study.

Figure 1: We identified five design dimensions that we used to generate the five design concepts of feature awareness tools: D1: Number of Active Collaborators, D2: Number of Documents Included, D3: Specificity of Comparison, D4: User Involvement, and D5: Goal of Feature Awareness. Subsequent to the elicitation study, we expanded the design space by adding D6: Detail of Feature Usage.

D5: Goal of feature discovery: Perhaps the most unexpected observation from our informal formative study was that participants were interested in how collaborator-based recommendations could help them keep the document formatting consistent across collaborators. They cared about which commands their collaborators were using regardless of whether they already knew those commands. For example, FP10 commented that collaborator-based recommendations would be useful "for the sake of consistency, because people will often use different methods in collaborative documents that do make them a bit messy". This observation is an interesting contrast to prior work [50, 59] that has identified "good" feature recommendations as those that are novel and useful to the user. While this might be true for crowd-based recommendations, we see that collaborator-based feature recommendations might be perceived as "good" regardless of whether the user is familiar with the recommended feature. This dimension aims to explore the user's goal in using the tool. On one end (Fig. 1-D5), we have tools that aim to highlight features that may already be known to the user, in order to help the user converge on common software usage practices. On the other end, we have tools that aim to highlight novel features (i.e., only the features that the user has never used before) that are relevant to the user.

§ 4 DESIGN CONCEPTS

To explore where user preference lies within the design space (Fig. 1), we created five design concepts that differ along the design dimensions. For these design concepts, we took inspiration from existing tools that raise feature awareness, which we then redesigned to emphasize collaborator-based feature awareness. For each concept, we created a video prototype to illustrate how it works and to be able to compare the design concepts in a systematic manner without the influence of potential implementation biases [4]. We used Figma³ to create the clickable prototypes and the video editor Camtasia⁴ to record the user interaction and produce the final videos.

By creating our own concepts and video prototypes, we were able to push the design dimensions in specific directions, often exploring their extremes in new combinations [73, 76]. These design concepts synthesize a mix of contrasting ideas into a cohesive collection, applying existing and proposed design approaches in this new context. It is important to note that this is not an exhaustive exploration, i.e., we did not cover all the possible combinations that we could derive from the design space. This would not have been feasible without overwhelming the participants of our elicitation study. We thus focused on the combinations we thought were most interesting to explore. For example, we did not design a concept that emphasized explicit comparison with a single active collaborator because we believed that such a combination would not necessarily prompt the user to reflect on their actions.

To explain the concepts in the video prototypes, we asked the viewer to imagine working on a shared document with other collaborators. We presented all concepts as add-ons to the Google Drive suite⁵ (Google Documents, Google Sheets, and Google Slides). We did not focus on one application of the suite because we wanted to show users that they could potentially install these add-ons with any collaborative shared editor. Finally, we noted to participants that these concepts might raise some privacy concerns, for which we would discuss some solutions in the final discussion with them. However, to keep the focus on the design dimensions, we did not explore any privacy-preserving solutions (except for NewsFeat) in the video prototypes.

§ 4.1 NEWSFEAT

The design concept NewsFeat was created to strongly emphasize a single active collaborator (D1), high user involvement (D4), and implicit comparisons between the user's command usage and their collaborators' command usage (D3). Additionally, NewsFeat focuses on the current shared document in D2. It sits in the middle of the D5 goal of feature discovery dimension.

NewsFeat lets users identify potentially useful features by allowing them to see what commands each of their collaborators use. We took inspiration from existing social networks like Twitter and Facebook, where the user can follow other users to see their activities. The user first has to send a request, and if their collaborator approves it, the user can see the commands that the collaborator used the same day, including the frequency of use (Fig. 2-A.1). By default, the user sees the commands that their collaborator used and the user did not (i.e., implicit comparison with a single active collaborator), although the user can remove this filter if they want. By allowing users to filter which commands they want to see, NewsFeat can be used both to discover new features and to converge on common practices. In addition, the user and their collaborators can further interact with the system to identify relevant features or to gain more information about one of the commands (i.e., it allows high user involvement). For example, they can "like" a command (Fig. 2-A.2) or ask follow-up questions using the comment button (Fig. 2-A.4). The collaborators can also recommend commands that they think are useful; in this case, the recommended command appears with a checkmark next to the command's name (Fig. 2-A.3). The collaborators can group a repeated sequence of commands (Fig. 2-A.5). For example, imagine a scenario where a collaborator applies the same style (font family, font style, and color) to all the headings. They can group these three commands and give the group a specific name. Finally, the user can also see the commands that their collaborators used recently (Fig. 2-A.6), in addition to the commands their collaborators used on the same day. They can choose between the commands that their collaborators used last week, last month, or in the last six months (not shown in the figure).

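Since NewsFeat exists only as a video prototype, the sketch below is our own minimal illustration of its default feed filter, under invented same-day logs: it lists the commands a followed collaborator used that the user did not, with counts, and the filter can be switched off to support converging on common practices.

```python
from collections import Counter

# Invented same-day command logs; in a real tool these would come from
# the editor's instrumentation after the follow request is approved.
collaborator_today = Counter({"insert_chart": 6, "bold": 12, "freeze_rows": 3})
user_today = Counter({"bold": 20, "italic": 4})

def feed(collaborator, user, hide_known=True):
    """Return (command, count) feed entries: by default only the commands
    the collaborator used and the user did not (implicit comparison);
    the filter can be switched off."""
    entries = collaborator.items()
    if hide_known:
        entries = [(c, n) for c, n in entries if c not in user]
    return sorted(entries, key=lambda e: -e[1])

for cmd, n in feed(collaborator_today, user_today):
    print(f"{cmd}: used {n}x today")
# insert_chart: used 6x today
# freeze_rows: used 3x today
```
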
³ https://www.figma.com/

⁴ https://www.techsmith.com/store/camtasia

⁵ https://drive.google.com/

Figure 2: (A) NewsFeat. (B) CommandMeter. (C) CollabCommands. (D) CollabPatina. (E) MostFrequentKS.

§ 4.2 COMMANDMETER

The design concept CommandMeter was created to strongly emphasize explicit comparison between the user and their collaborators (D3), high user involvement (D4), and all active collaborators (D1). Similar to NewsFeat, it focuses on the current shared document in D2 and has a slight focus on helping users converge on common usage habits in D5.

With CommandMeter, the user can identify useful features by comparing their command usage to that of their collaborators through visualizations. By making this comparison explicit, the user can reflect on their own behavior and consider whether they want to change their command usage habits. We were inspired by similar systems like Skil-o-Meter [54], which also used visualizations to compare command usage habits between the user and other members of the same organization. In contrast, CommandMeter compares command usage habits between the user and their collaborators on a shared document. CommandMeter requires high involvement, as the user has to switch between two views. One view is a collapsible panel in the bottom right corner (Fig. 2-B.1). Every time the user selects a command ('Strikethrough' shown in the figure), this panel uses horizontal bars to compare the user's and their collaborators' frequency of usage. The second (larger) view offers similar visualizations for all the available commands. In this second view, the user can see all the commands and how their frequency of command usage differs from the average frequency of all of their active collaborators (Fig. 2-B.2). Finally, the user can choose which collaborators they want to include or exclude from their visualizations (not shown in Fig. 2-B).

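A minimal sketch of the comparison data that could sit behind CommandMeter's horizontal bars, under invented frequency tables: for a selected command, it pairs the user's count with the average across the included collaborators. The concept was only a video prototype, so this is an assumption about one plausible implementation; the names and numbers are hypothetical.

```python
# Invented frequency tables; names and numbers are illustrative only.
user = {"strikethrough": 2, "bold": 30}
collaborators = {
    "dana": {"strikethrough": 10, "bold": 25},
    "eli":  {"strikethrough": 6},
}

def compare(command, user, collaborators, included=None):
    """Return (user_count, collaborator_average) for one command,
    honoring the include/exclude filter from the settings view."""
    included = included if included is not None else list(collaborators)
    counts = [collaborators[name].get(command, 0) for name in included]
    avg = sum(counts) / len(counts) if counts else 0.0
    return user.get(command, 0), avg

mine, theirs = compare("strikethrough", user, collaborators)
print(f"you: {'#' * mine} ({mine})")            # crude horizontal bars
print(f"avg: {'#' * round(theirs)} ({theirs})")
```
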
§ 4.3 COLLABCOMMANDS

The design concept CollabCommands was created to support users in discovering new features (D5) based on all active collaborators (D1). In contrast to the other design concepts, CollabCommands strongly emphasizes the possibility of including all shared documents (D2). It requires little involvement from the user (D4).

With CollabCommands, the user sees recommendations derived from their collaborators' usage habits. Drawing inspiration from CommunityCommands [59], CollabCommands uses a collapsible panel (bottom right corner) to recommend commands that the user does not use but their collaborators do. Hence, CollabCommands offers a quick way for the user to identify new features that they might consider using (i.e., it requires only low involvement). For each command, the tool shows the avatars of the collaborators who use this command (Fig. 2-C.1). The user can further customize the tool if they want: they can choose which collaborators the tool will consider when deciding which commands may be relevant to the user (Fig. 2-C.2), and they can decide to include all other shared documents in their recommendations (Fig. 2-C.3).

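The core recommendation rule of CollabCommands can be sketched as a set difference over invented usage logs: collect the commands collaborators use in the selected documents, drop those the user already uses anywhere in that selection, and rank by how many collaborators use each. The document names, user names, and commands below are hypothetical.

```python
# Invented per-document logs: document -> user -> set of commands used.
usage = {
    "budget_2022": {"me": {"sum"}, "fei": {"sum", "vlookup"}, "gus": {"vlookup", "pivot_table"}},
    "budget_2021": {"me": {"sum"}, "fei": {"conditional_format"}},
}

def recommendations(me, usage, docs=("budget_2022",)):
    """Commands my collaborators use in the selected documents but I never
    do, each paired with the collaborators to show as avatars."""
    mine = set().union(*(usage[d].get(me, set()) for d in docs))
    who_uses = {}
    for d in docs:
        for person, cmds in usage[d].items():
            if person == me:
                continue
            for cmd in cmds - mine:
                who_uses.setdefault(cmd, set()).add(person)
    # Rank by how many collaborators use the command.
    return sorted(who_uses.items(), key=lambda kv: -len(kv[1]))

print(recommendations("me", usage))                     # current document only
print(recommendations("me", usage, docs=tuple(usage)))  # all shared documents (D2)
```
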
§ 4.4 COLLABPATINA

The design concept CollabPatina was created to slightly emphasize explicit comparisons (D3) while minimizing user involvement (D4), and it includes all collaborators (D1). It focuses on the current shared document (D2) and puts a slight emphasis on converging on common usage practices (D5).

CollabPatina overlays the current interface with color-coded visual indicators to show the user's and their collaborators' feature usage (Fig. 2-D). We drew inspiration from the Patina tool [58], but CollabPatina is based on the user's collaborators and allows for some extra customization. CollabPatina overlays both the toolbar and the menu with color highlights, indicating which features (commands and keyboard shortcuts) the user frequently uses (Fig. 2-D.2) and which features all of the collaborators frequently use (Fig. 2-D.1). As such, CollabPatina requires low to no involvement from the user. The color highlights express a visual comparison, but one that is less explicit than in CommandMeter. The user can see a color bar at the top of the screen that shows what each color indicates (Fig. 2-D.3). When they click the color bar, a settings menu appears (not shown in Fig. 2-D) where the user can select whether they want to see color highlights for the most frequently used commands, highlights for the most frequently used keyboard shortcuts, or no color highlights.

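One plausible, entirely hypothetical reading of CollabPatina's overlay logic is sketched below: each toolbar or menu item gets a hue per source (user vs. collaborators) whose intensity is bucketed by frequency. The hues, buckets, and thresholds are invented; the video prototype does not specify them.

```python
# Invented usage counts for toolbar items.
my_use = {"bold": 50, "comment": 2}
their_use = {"comment": 40, "insert_image": 18}  # aggregated over collaborators

def highlight(command, my_use, their_use):
    """Pick overlay colors for one toolbar/menu item: blue shades for the
    user's own frequent features, orange for the collaborators'. The hues
    and thresholds here are arbitrary placeholders."""
    def shade(count):
        return "strong" if count >= 30 else "light" if count >= 5 else None
    overlays = []
    if (s := shade(my_use.get(command, 0))):
        overlays.append(f"{s}-blue (you)")
    if (s := shade(their_use.get(command, 0))):
        overlays.append(f"{s}-orange (collaborators)")
    return overlays or ["no highlight"]

for cmd in ("bold", "comment", "insert_image"):
    print(cmd, "->", highlight(cmd, my_use, their_use))
```
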
§ 4.5 MOSTFREQUENTKS

The design concept MostFrequentKS was created to emphasize the discovery of new features (D5) (in this case, new keyboard shortcuts) by implicitly comparing (D3) all active collaborators (D1). It aims to minimize user involvement (D4), and it focuses on the current shared document (D2).

MostFrequentKS requires low to no involvement from users. When the user opens a menu or toolbar to choose a command, the tool automatically checks whether their collaborators frequently use the corresponding keyboard shortcut and, if so, shows a notification in the form of a tooltip along with the collaborators' avatars (Fig. 2-E.1). If none of their collaborators frequently use the keyboard shortcut, no notification appears. Clicking the toolbar buttons or the menu items executes the command as it normally would. MostFrequentKS draws inspiration from tools that use notifications to inform users about existing keyboard shortcuts [64].

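A sketch of the trigger logic we imagine behind MostFrequentKS, with invented logs, shortcut table, and threshold: the selected command always executes, and a tooltip appears only when some collaborators use the matching shortcut frequently.

```python
# Invented logs: per-collaborator counts of keyboard-shortcut use.
shortcut_use = {
    "ctrl+shift+v": {"hana": 25, "ivan": 14, "jo": 0},
}
SHORTCUTS = {"paste_without_formatting": "ctrl+shift+v"}
FREQUENT = 10  # arbitrary per-collaborator threshold

def execute(command):
    print(f"executing {command}")

def show_tooltip(text, avatars):
    print(f"tooltip: {text} (avatars: {avatars})")

def on_menu_select(command):
    """Called when the user picks a command from the menu/toolbar.
    The command always executes; a tooltip appears only when some
    collaborators use the matching shortcut frequently."""
    execute(command)
    keys = SHORTCUTS.get(command)
    if not keys:
        return
    users = [p for p, n in shortcut_use.get(keys, {}).items() if n >= FREQUENT]
    if users:
        show_tooltip(f"Try {keys}, used often by {', '.join(users)}", avatars=users)

on_menu_select("paste_without_formatting")
```
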
§ 5 ELICITATION INTERVIEW STUDY

We used the video prototypes of the design concepts as probes in a semi-structured interview study with 18 participants. The goal of this study was not to find a winner among the design concepts, but rather to broaden our understanding of the potential benefits and drawbacks of raising feature awareness based on the user's collaborators' application usage, i.e., to assess our general approach to raising feature awareness. We solicited participants' attitudes, reactions, and perceptions of the design concepts, probing the spots in the design space that each concept highlights. In this way, we explored the design dimensions in a semi-targeted way.

§ 5.1 PARTICIPANTS

We used a screening survey (available in the supplementary material) to recruit participants who had experience collaborating using shared editors. To ensure a diverse sample, we asked participants how often they used collaborative editors, how often they used these editors to work remotely with others, their profession, and the number of collaborators that they worked with. We advertised the study on a mailing list used for advertising research studies and stopped recruiting when we reached a saturation point, as is common in qualitative studies. We ended up with 18 participants⁶ (10 women, 8 men) between 18 and 50 years old (the majority were between 18 and 37; one was 50). The participants had diverse occupations, including software developers, students, receptionists, graphic designers, lighting artists, social workers, and teachers. All participants reported using shared editors like Google Docs to collaborate with others at least once or twice per week. The number of collaborators reported by participants ranged from 2 to 20, with most regularly collaborating with 2 to 4 people.

§ 5.2 PROCEDURE

Our procedure was based on prior RtD work that used design concepts to elicit user reactions [4, 73]. Each session lasted between 60 and 90 minutes and consisted of three parts: 1) a brief introductory interview focusing on the participants' experiences with collaboration on shared documents, 2) the elicitation part, where the participants saw and discussed each design concept, and 3) a final discussion comparing all of the design concepts. One of the authors conducted the interviews remotely using Zoom. We recorded all interviews (both audio and video) for later transcription. The participants received $15 per hour as compensation. Our study was approved by an institutional research ethics board.

During the introductory interview, we asked each participant about their experiences with collaborative editors: which collaborative editors they used, how often they collaborated with other users, and the typical sizes of their teams.

During the main elicitation part, we showed each of the five video prototypes, one at a time in random order. Before showing each prototype, we emphasized that the design concepts are not tied to a specific application and that participants should try to reflect on how they would use the concept within their software of choice. We also told them that although our video prototypes do not address any privacy issues, they should feel free to express privacy concerns. For each video, we first made sure that the participant understood the concept, and we encouraged them to ask any questions they had or to replay the video if they wished. Afterward, we asked the participant about their first impressions and their thoughts on different aspects of each design concept. We focused on the aspects that provided insights into the design space. For example, we asked participants whether they would use the filtering functionality of CollabCommands and CommandMeter to include or exclude any collaborator.

During the final part, we asked each participant about their experience across all concepts. We asked them to sort the five concepts from most to least preferred and to explain the rationale for their sort order.

§ 5.3 DATA ANALYSIS

We used thematic analysis [10] to identify recurring themes and patterns from our sessions. We transcribed all sessions and analyzed them inductively. Initially, two of the authors coded five transcripts and discussed their codes; then one author open-coded the rest of the sessions. Next, we grouped the codes, and all the authors discussed possible themes and patterns across the groups. We discussed the possible themes over several iterations, focusing on areas that highlighted the potential benefits and drawbacks of raising a user's feature awareness based on their collaborators' use of an application. We used these themes and the participants' feedback on the individual design concepts to identify the approximate relative variation in participants' preferences across the design dimensions.

§ 5.4 FINDINGS

Almost all participants (17/18) reported having discovered new features while observing their colleagues. In line with prior work [60], the participants found such interactions desirable but rare. For example, P00 explained, "It's definitely more difficult to find [a new feature] on your own than to observe. Observing is easier." As expected, some participants explicitly reported fewer instances of this interaction with the switch to remote working, for example: "Because it's work from home, we don't really see each other and I don't get to observe their work (P08)". This participant went on to talk about using email and messaging to replace such over-the-shoulder knowledge sharing, while wishing for in-application support: "We'd usually be texting each other or calling each other to inform each other... So, we have to stick to this particular layout, or these other things we have to keep uniform. Instead of doing that communication outside the platform, I think, within the same platform, if you could see this information, I think it will be more efficient". As such, the participants felt positively about the idea of using in-application tools to raise feature awareness based on their collaborators' software use.

⁶ Initially we recruited 20 participants, but we had to exclude 2 participants due to technical issues.

§ 5.4.1 OVERVIEW OF USER PREFERENCES ON DESIGN CONCEPTS AND DESIGN DIMENSIONS

At the end of each session, we asked participants to rank the design concepts from most preferred to least preferred. We aggregated all of the participants' first- and second-place rankings to identify which concepts were preferred most and least (this produced 36 ranking data points). CollabPatina was the most preferred (13/36), then NewsFeat (10/36), followed by MostFrequentKS (6/36) and CollabCommands (5/36), with CommandMeter a clear last (2/36).

It is interesting to note that CollabPatina and NewsFeat occupy quite different regions of the design space. CollabPatina was popular because of its low user involvement. This concept's goal was to provide an easy and quick way for users to see which commands their collaborators use; more importantly, it also shows where the commands are located within the interface. The participants appreciated this functionality because they did not have to spend time locating the commands, which was not the case for CollabCommands, NewsFeat, and CommandMeter. NewsFeat was popular because participants could see sequences of commands that their collaborators were using and could ask follow-up questions. In contrast, CommandMeter, which also requires high user involvement, was not popular: it was ranked last most often because it demands high user involvement in order to compare the user's actions to their collaborators'.

It is important to note that although NewsFeat was well received, participants did raise some concerns about feeling self-conscious and micromanaged, which we discuss in Theme 4. Also, the participants were particularly enthusiastic about the ability to see command groupings, but noted that the utility of this aspect of NewsFeat would require high user involvement, i.e., the user and their collaborators would need to take the time to create groups of commands. Participants felt that investing this time would be fine under certain circumstances. For example, P05 commented, "... if I want to help new members out in the company, then I would do this. I would group stuff up and then reply to comments and stuff".

In the rest of this section, we discuss themes that emerged across all the design concepts.

§ 5.4.2 THEME 1: RAISING FEATURE AWARENESS BASED ON THE USER'S COLLABORATORS COULD HELP USERS CONVERGE ON SOFTWARE USAGE PRACTICES

Consistent with the insights from our informal formative study (Sect. 3.3-D5), participants commented on how these tools could help them and their team converge on common software usage practices when working on shared artifacts. The participants commented on the usefulness of the concepts for identifying similarities and differences in the features that their collaborators use to produce a consistent style. For example, P08 commented on why they thought CollabCommands could be useful to them and their colleagues: "when I used to work on PowerPoint, we'd usually be texting or calling each other ... to stick to this particular [PowerPoint presentation] layout. Instead of doing that communication outside the platform, I think, within the same platform if you could see this information, it will be more efficient". P04 highlighted the efficiency of having an in-situ feature usage history displayed in NewsFeat: "Instead of me having to go and ask, 'What did you do? How did you do this?' I can actually see it in the activity, and it might save a few emails or some back and forth".

Participants also commented on how they could use these concepts the other way around (i.e., the user could help their collaborators converge on common software usage practices). They described, for example, that if a user notices that their collaborators are not using the appropriate commands in a shared document, it could be useful to alert them about it. For example, P04 discussed how they would use NewsFeat to help their colleagues: "... if we're stuck on something, if I get to see that, ... oh, okay, this is where maybe somebody got stuck, or why is this being returned to so many times, is there something that we need to revisit in that document itself?".

Finally, the participants commented on how these design concepts could help them converge with their own past feature usage. Such a scenario may occur when a user resumes a task after a long time and could find it useful to be reminded of features they had used in the past. For example, P00 commented on CollabPatina: "Well, because I sometimes do things and I forget how I did them. So I like that I can also see how I did things".

One potential caveat, noted by a couple of participants, was that exposing the user to other collaborators' usage habits may limit their style and creativity. They were concerned that, by seeing which features their collaborators use, users might feel discouraged from using the features they like or might experiment less with new features. P07, a lighting artist, said of their initial impression of CollabCommands: "It will change my mind to use more and more whatever other people using. It will try to stop creativity, [...]", while P04 commented on CommandMeter: "you might love this feature and want to use it all the time, but the rest of your team might not, and that can be a little tricky because if you're using it and nobody else is using it, then sometimes that's not helpful either".

§ 5.4.3 THEME 2: RAISING FEATURE AWARENESS BASED ON THE USER'S COLLABORATORS COULD HELP USERS BE MORE EFFICIENT WITH THEIR TASKS

Some participants felt that they could use these tools to discover more efficient alternatives for the same task. By efficient alternatives, we mean not only keyboard shortcuts but also the sequences of steps that other collaborators take to complete the same task. For example, P09 commented when they saw CommandMeter's visualizations: "For example, if someone is using a command that all of us aren't, meaning something novel and different, that might help us figure out if we can also use that too, maybe it's a better way of doing a task than the version that we've been doing".

The participants also spoke about wanting to expose their own usage data to help their collaborators discover more efficient alternatives. For example, P02, a project coordinator working with a team of 6, said about MostFrequentKS: "Maybe I would just use this [MostFrequentKS] as a bit of an encouragement for those who might be on the fence about using keyboard shortcuts that, hey, there's actually a bunch of us are using it and this is ... helping us to be more efficient".

§ 5.4.4 THEME 3: USERS WANT FINE-GRAINED CONTROL OVER AWARENESS DATA SOURCES

The majority of the participants (14/18) wanted fine-grained control over which subset of collaborators the tool draws feature usage from. They pointed out that their collaborators might have different roles, such as active editors, viewers, and reviewers. Further, active editors may be in charge of various tasks, only some of which may be relevant to the user. As a result, they felt that the features a design concept chooses to highlight may not be sufficiently targeted to be valuable. For example, P09 commented, "There might be people that are just there for review or editing or just viewing purposes so their data will skew it a lot if you don't have the ability to exclude them".

When we asked participants which collaborators the tools should include, their opinions differed. Some participants (4/18) wanted to include collaborators based on their role in the document. For example, P15 wanted to include all active editors in their NewsFeat: "probably the owner of the document, and then the main collaborators, and then anyone who's just kind of viewing it or doesn't actually have any [...] stake in the document, [...] then I wouldn't follow them". Other participants (5/18) wanted to include collaborators who are doing tasks similar to theirs. For example, P09 said, "It's really helpful to be able to include or exclude certain people because [...] everyone is doing different things or there might be certain people that are just on there but not actively working on the documents. So being able to exclude those people from any sort of analytics is important".

Some participants (5/18) wanted to include individuals based on their perceived expertise or role in the team/company. For example, P00 commented that they would use the CollabCommands filtering capabilities to include their collaborators who are knowledgeable about the software: "I would include people I know are good at using the type of software that I'm working on".

Other participants (4/18) did not want to include or exclude any of their collaborators. One possible reason is that, in their teams, all the collaborators have similar roles. For example, P06, a college student, said about CollabCommands: "...it's not like one collaborator is more useful and would have used more commands than another person, necessarily. So yeah, I don't really see a usefulness to that".

The participants were also interested in having some control over which documents the tools draw feature usage from. They found this functionality useful if the other documents they included were similar to the current document. P04 commented about this functionality in CollabCommands: "I do find this valuable, because we do work with a lot of similar documents ... and especially because we're always looking to keep things consistent. So, I think having all shared would really help". Similarly, P09 said, "I wouldn't want it to do that by default because different documents, ... are trying to do different things ... the commands that I use in one might not necessarily be the same that I use in the other. But the ability to do that, having that option is fine".

§ 5.4.5 THEME 4: OVERLY DETAILED INFORMATION ABOUT COLLABORATORS' ACTIONS COULD MAKE USERS FEEL MICROMANAGED AND SELF-CONSCIOUS

The participants expressed concerns about the detailed information that some design concepts provide. Indeed, our design concepts provide information about who used a feature, how often, and how recently, to explain why the feature may be relevant to the user. The designs differ in the level of detail they expose. For example, NewsFeat provides more detailed information, showing the exact number of times a named collaborator used a command on the same day. On the other hand, CollabPatina uses color-coded highlights to convey the frequency of use among the user's collaborators without identifying them.

Although seeing more detailed information can benefit the user, as discussed in the previous themes, this information could also lead to feelings of being micromanaged and could cause anxiety among users. For example, when we prompted P07 about how they felt when they saw their collaborators' avatars, they said, "when I think about seeing collaborators' names using it, I feel like I am a very picky production manager who's trying to micromanage people and make them work faster". Similarly, when we asked P00 for their reactions to the recency information in NewsFeat, they said, "Maybe they can have just a vague recents. [...] I wouldn't prefer an option to share daily because then there's an added pressure". P14 commented on the detailed frequency information: "If there is a command that I have not been using that often, I would feel that I am not contributing that much".

Some participants felt that detailed information could affect their decision to use a specific design concept, and some even suggested design changes. For example, P09 said about NewsFeat, "It would definitely make it less invasive if it was just a listing of [the collaborator's] most used commands without any numbers". Some participants suggested that they would like the ability to hide information to feel less stressed about what they share. P06 said, "When I'm giving my permission, maybe I can hide one thing I don't want to show, or things I don't want to show off. Yes. I am giving you permission, but you can see this part, but I will hide the parts I don't want you to see".

We observed that individual differences related to professional dynamics and personality could affect how users feel about the level of shared detail. Problematic professional dynamics, such as one's position within the organization's hierarchy and the relationship between the user and their collaborators, could amplify micromanagement and self-consciousness issues. For example, P07 commented on their experience with a previous manager: "it is just about who are you working with. [...] I've worked with some kind of a person who had psychological disorders, and the minimum mistake you made here will come to your very harsh way and he will give you some psychological difficulties [...] and that's the reason I wouldn't want to see my name is that there too: the blaming point". The user's personality could also affect how they perceive detailed information: if the user is more prone to stress, they may be less open to seeing and sharing detailed software usage information. For example, P00 said, "My boss is super understanding, but I also struggle with anxiety. [...] So to have this other pressure of... I think people deserve a little bit more leniency and every detail shouldn't be shared with the people they're working with".

§ 6 REFLECTION ON THE DESIGN SPACE

The findings from the elicitation study suggest that designers should consider all five dimensions when designing feature awareness tools based on the user's collaborators; none of the dimensions in our design space was shown to be unimportant. To further probe the participants' preferences for each design dimension, we went through the participants' transcripts and specifically looked for comments related to each design dimension. We then positioned the participants' comments for each design dimension within the design space (Fig. 3). For example, P02's comment "I don't think I would really care to know who specifically out of my group uses these features" suggested that P02's preference for D1: Number of active collaborators leaned strongly towards all active editors. In the rest of this section, we reflect on our key findings on user preferences within the design space, propose an additional design dimension to expand the design space (as illustrated in Fig. 3), and discuss the implications for future system designs.

We saw that most participants do want to include only a subset of the data sources that a feature awareness tool draws from; they want the ability to control which collaborators (D1) and documents (D2) are included or excluded. This is an example where participants did not show a preference for either end of the spectrum (Fig. 3 - D1 & D2). As a design implication, we imagine an interface that by default includes all collaborators and the current document, while easily allowing further control through interactive widgets.

We also observed that users had a strong preference for implicit comparison (Fig. 3 - D3) and generally preferred to have as little involvement as possible (D4). As a design implication, we propose that a system must make it easy for users to locate highlighted features within the interface. This can be accomplished with a solution like CollabPatina, or with a hybrid solution that lists the highlighted features like CollabCommands but provides additional support for locating a feature when the user interacts with it in the list. Beyond locating a feature, participants were willing to invest some involvement for features they deem especially valuable; for example, they would ask follow-up questions or actively recommend features to their collaborators (as in NewsFeat). Thus, with respect to the user involvement dimension, participants preferred the low-involvement end of the spectrum, but there were some varying opinions (Fig. 3 - D4). As a design implication, the system should maximize the information related to a highlighted feature while minimizing user involvement, but it should also provide richer information if the user wishes to interact with it further.

Finally, we observed that participants expressed a strong interest in using these tools both to find new features and to help them and their collaborators adopt common feature usage practices (D5). We see, therefore, that participants saw value in being exposed to the features that their collaborators use, both the ones that the user isn't aware of and the ones that the user already knows (Fig. 3 - D5).

Our findings highlight a trade-off between the ability to view detailed usage information (which collaborator used a feature, how often they used it, and how recently) and feeling micromanaged and self-conscious. Indeed, many of the benefits highlighted in Themes 1, 2, and 3 depend on the user having access to this information. However, that same information can cause users to feel negatively, as discussed in Theme 4. Striking the right balance in how to present this information is an important design challenge.

Based on our results, we propose to expand our design space by adding Detail of feature usage information as an emerging design dimension. At one end is Low level of detail, where designers could reveal collaborators' usage using language (or a visual indicator) that describes the behavior but avoids specific numerical values (for example, "frequently used a command" vs. "used the command 20 times"). At the other end, we have High level of detail, where designers could use precise numbers, dates, and names. An example of Low level of detail is CollabPatina, which uses color-coded indicators to mark commands that the user's collaborators frequently use, while an example of High level of detail is NewsFeat. This dimension is not independent of the other dimensions. For example, a design concept cannot offer explicit comparison (D3) without using detailed information. Similarly, it cannot offer the ability to control which collaborators the feature awareness system draws on without a high level of detail about the collaborators' identities.

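As a sketch of what the Low end of this dimension could look like in practice, the hypothetical helper below renders raw usage counts as qualitative phrases while leaving the collaborator's identity at a high level of detail; the bucket boundaries are invented.

```python
def describe_usage(count, collaborator=None):
    """Render feature usage at a Low level of detail (D6): qualitative
    wording instead of exact counts; the boundary values are invented."""
    if count == 0:
        phrase = "has not used this command"
    elif count < 5:
        phrase = "occasionally uses this command"
    else:
        phrase = "frequently uses this command"
    who = collaborator or "A collaborator"  # identity detail can stay high
    return f"{who} {phrase}"

print(describe_usage(20, "Priya"))  # "Priya frequently uses this command"
print(describe_usage(2))            # "A collaborator occasionally uses this command"
```
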
We observed that participants preferred a low level of detail, especially regarding the recency and frequency of feature usage. They were more comfortable with a system that provides more detailed identification information about the collaborators (Fig. 3 - D6). As a design implication, we propose a system that avoids numerical values for the frequency and recency of command usage but can allow a high level of detail about which collaborators used a feature. Our concepts displayed the avatars of individual collaborators, but a future direction is to include other identification information, such as the collaborator's role within the company or their technical expertise.

§ 7 OVERALL DISCUSSION

Current solutions that are based on individual users [26, 38] require users to stop their current tasks to either have brief video chats or watch targeted video tutorials. Complementary to this approach, we aimed to leverage the user's collaborators to facilitate in-situ feature discovery while minimizing users' involvement and task interruptions. Participants perceived collaborator-based feature awareness tools to be valuable and effective for discovering features and adopting common usage practices, but they also noted potential issues with self-consciousness and micromanagement.

We reflect on the value of our approach in terms of providing remote over-the-shoulder learning and how it relates to remote learning from crowd communities. We then discuss our key findings with respect to the need for user and collaborator control over the usage information that is shared.

§ 7.1 SUPPORTING REMOTE OVER THE SHOULDER LEARNING

Software users often rely on their collaborators to learn new features by observing them [60, 70]. But with the increase in remote work, especially during the COVID pandemic, such over-the-shoulder (OTS) learning opportunities are limited. Most participants noted that, unfortunately, current tools provide little support for in-situ software learning and knowledge sharing, forcing them to coordinate back and forth with their collaborators using external applications (e.g., emails or text messages). The insights from our work can help designers tackle the challenge of supporting in-situ "remote over-the-shoulder learning", especially among collaborators working on shared documents. Although some recent work [38] has investigated how to support remote OTS learning using video chat, it seems more targeted at complex problems. In contrast, our work proposes more lightweight in-situ techniques for raising feature awareness among collaborators. A future direction is to design support systems that combine various types of remote OTS learning that vary in the user involvement they require and the complexity of the task at hand.

+ § 7.2 FEATURE AWARENESS TOOLS BASED ON DIFFERENT USER COMMUNITIES
280
+
281
+ In this work we have investigated an alternative to crowd-based approaches by relying on direct collaborators for raising feature awareness. Participants found that direct collaborators can identify the features that help them complete their tasks efficiently. Interestingly, participants also commented on the idea of exposing their own data to help their collaborators (Theme 1) discover features that they know to be useful. Previous work has focused on how the user can benefit from having access to the usage habits of various user communities [53, 56, 59]. Our work highlights that, with a more local community, users also see a specific benefit in contributing their data.
282
+
283
+ While our work has focused on the user's direct collaborators rather than the crowd, we do not see the different user communities as competitors. Each user community can help the user in different ways to raise feature awareness and can even complement each other. Feature awareness systems based on the user's collaborators may be best for helping users to identify the features needed in their current context, i.e., the document they are currently working on. In contrast, feature awareness systems based on the crowd may be best for helping users to expand their feature vocabulary beyond the set of features that their collaborators are using.
284
+
285
+ A possible future direction is to explore hybrid solutions that support feature awareness based on different user communities. There are several potential design challenges herein. For example, how can we visually distinguish the various user communities? How can we allow the user to switch between user communities and customize their system easily? How can hybrid systems help tackle the privacy concerns highlighted in our elicitation study?
286
+
287
+ § 7.3 SUPPORTING USER CONTROL OF DATA SOURCES USED FOR RAISING FEATURE AWARENESS
288
+
289
+ Theme 3 discussed how the participants wanted control over the data sources used to support their feature awareness (i.e., which subset of collaborators is viewed and the ability to include similar documents in the comparison), for purposes such as tracking a collaborator who has worked on a particular element of the document or is technically savvy. One participant commented that determining the collaborators of interest could be a potential challenge. Although we suspect that this would not be a problem for a document that the user is actively working on, it could be a problem when they include similar documents or newly start working on a document that their collaborators have already been working on. In these cases, it could be useful for the system to highlight collaborators of interest (i.e., the collaborators who worked on the same graphical elements, or the collaborators who are the most active). One potential issue, however, is that exposing each collaborator's role can lead to the same problems discussed in Theme 4.
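+ As a rough sketch of how such highlighting could work, the snippet below scores collaborators by activity, with extra weight for edits to the elements the user is working on. This is our own illustration; the edit-log format and the weight value are hypothetical and not part of any of the five concepts.
+
+ ```python
+ # Hypothetical sketch: rank "collaborators of interest" for a document.
+ # A collaborator scores higher when they edited the same elements the
+ # user is working on, or when they are generally more active.
+
+ def rank_collaborators(user_elements, edit_log, same_element_weight=5):
+     scores = {}
+     for collaborator, element in edit_log:  # one entry per edit action
+         bonus = same_element_weight if element in user_elements else 1
+         scores[collaborator] = scores.get(collaborator, 0) + bonus
+     return sorted(scores, key=scores.get, reverse=True)
+
+ log = [("Alice", "header"), ("Alice", "chart"), ("Bob", "footer"), ("Bob", "footer")]
+ print(rank_collaborators({"chart"}, log))  # ['Alice', 'Bob']
+ ```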
290
+
291
292
+
293
+ Figure 3: Based on a targeted analysis of the transcripts, we provide a visual representation of the approximate relative variation in participants’ preferences across the design dimensions. The width of the ellipses provides an indicator of the divergence of opinion.
294
+
295
+ § 7.4 ALLOWING COLLABORATOR CONTROL OVER WHAT INFORMATION THEY SHARE
296
+
297
+ Theme 4 highlighted how sharing detailed personal feature usage information might make users feel self-conscious, stressed, and micromanaged. However, we also noticed a divergence of opinion, meaning that some participants were more comfortable sharing detailed personal feature usage than others. This divergence could be explained by prior work on the factors that affect users' decisions to share personal information in order to benefit from the systems they use.
298
+
299
+ For example, privacy calculus theory [15] views these decisions as a rational process where users perform a subjective cost-benefit analysis regarding disclosing personal information. Disclosure happens if they anticipate that the benefits outweigh the risks of privacy loss. Work related to privacy calculus has highlighted interesting insights, such as readiness to embrace new technology [46], self-efficacy [6], trust [52], and amount of involvement [67], that can affect the user's decision to disclose personal information. Furthermore, prior work has identified different personas [46] based on the value users put on the perceived benefits and privacy risks. A future direction is to investigate how these insights apply to our context and the potential design implications.
300
+
301
+ We also want to explore ways to give users control over what information they share and the level of detail of this information. In Theme 3, we discussed, for example, a participant who asked for the ability to hide certain commands they used, and some participants who asked for varying levels of detail in sharing. An important future direction is to explore how users can customize the level of detail they share, balancing privacy with the benefits gained by sharing. The challenge is accomplishing this customization in a lightweight manner, given that users generally do not want high user involvement.
302
+
303
+ One possibility is to give users fine control over when they share their feature usage. For example, users could choose to share specific actions by enabling an option in the menu and disable the sharing when they are done with their task. An alternative is to let users review the highlighted features that the tool has chosen when they close the collaborative editor. This solution could help create "learning events" and highlight the features that the user thinks their collaborators would benefit from.
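+ The sketch below illustrates both ideas together: a session-scoped sharing toggle plus a review step when the editor closes. It is our own illustration of the design idea, not a proposed implementation; the class and method names are hypothetical.
+
+ ```python
+ # Illustrative sketch: session-scoped sharing control. Only actions
+ # recorded while sharing is enabled are offered for review (a
+ # "learning event") when the user closes the collaborative editor.
+
+ class SharingSession:
+     def __init__(self):
+         self.enabled = False
+         self.pending = []
+
+     def set_sharing(self, on: bool):
+         self.enabled = on
+
+     def record(self, command: str):
+         if self.enabled:
+             self.pending.append(command)
+
+     def review_on_close(self):
+         # In a real tool the user would confirm or prune this list.
+         shared, self.pending = self.pending, []
+         return shared
+
+ s = SharingSession()
+ s.set_sharing(True)
+ s.record("Distribute Horizontally")
+ s.set_sharing(False)
+ s.record("Undo")            # not shared: sharing was off
+ print(s.review_on_close())  # ['Distribute Horizontally']
+ ```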
304
+
305
+ § 8 LIMITATIONS AND FUTURE WORK
306
+
307
+ The video prototypes used in our elicitation study did not discuss differences in collaborators' roles (e.g., within one organization) because we wanted participants to ground their feedback in their own experiences. However, most participants did not feel that the different roles of their collaborators impacted their perceptions of our design concepts. Only a few participants mentioned certain professional dynamics that may increase the fear of micromanagement, for example if there is a competitive culture in their team. Future work could broaden the participant sample to further probe other social factors, for example by including more diverse age groups and participants with different remote-working experiences, as well as systematically explore how the role of the user within the company (i.e., manager or subordinate) can affect the user's perception of feature awareness based on the user's collaborators.
308
+
309
+ We designed each of the five concepts as an independent support mechanism, but many of their properties could work in combination. Combining properties would be an interesting future direction given that two of the most well-received designs, CollabPatina and NewsFeat, offer different functionalities. Our elicitation study used video prototypes to probe participants' reactions and perceptions while reducing biases due to potential implementation issues. One potential direction is to build a feature awareness tool that incorporates aspects from the design concepts that were well received. With this tool, we could conduct longitudinal studies to assess how feature awareness based on the user's actual collaborators impacts the user's software usage habits over time.
310
+
311
+ § 9 CONCLUSION
312
+
313
+ Our work contributes insights into how we can raise serendipitous feature awareness in remote shared contexts based on a user's collaborators. Drawing upon our informal formative study, prior work, and our own experiences, we created a design space, and then generated five design concepts that exercise this design space of serendipitous feature discovery. Through our elicitation study, we uncovered attitudes and perceptions towards feature awareness tools based on the user's collaborators, highlighting promising design directions and design elements, but also revealing sensitivities that need to be accommodated through careful design. Our work opens up possibilities for new tools that can leverage the user's collaborators' feature usage to provide over-the-shoulder learning in remote contexts. Altogether, it offers a promising direction for addressing feature learnability through improved feature discoverability, a longstanding challenge in HCI.
314
+
315
+ § 10 ACKNOWLEDGEMENT
316
+
317
+ This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) grant "Making it personal: tools and techniques for fostering effective user interaction with feature-rich software" and by the European Research Council (ERC) grant n° 695464 "ONE: Unified Principles of Interaction".
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/3MrDT4bTycn/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,305 @@
1
+ # VR Stylus and Controller Combination Was Preferred Over the Mouse or Hand Tracking for Object Manipulation and Marking Tasks in Virtual Reality
2
+
3
+ ## Abstract
4
+
5
+ For medical surgery planning, virtual reality (VR) provides a new kind of user experience, where 3D images of the operation area can be utilized. Using VR, it is possible to view 3D models in a more realistic 3D environment, which can reduce perception problems and increase spatial understanding. In this experiment, we compared a mouse, hand tracking, and a combination of a VR stylus and a VR controller as interaction methods in VR. The purpose was to study the viability of the methods for tasks used in medical surgery planning in VR. The tasks required interaction with 3D objects and high marking accuracy. The controller combination was liked statistically significantly more than the other interaction methods. In the subjective results it was the most appropriate, while in the objective results the mouse interaction method was the most accurate.
6
+
7
+ Index Terms: Human-centered computing—Human computer interaction (HCI)—Interaction devices—Pointing devices; Human-centered computing—Human computer interaction (HCI)—Empirical studies in HCI; Human-centered computing—Human computer interaction (HCI)—Interaction paradigms—Virtual reality
8
+
9
+ ## 1 INTRODUCTION
10
+
11
+ Virtual reality devices make it possible to create computer-generated environments that replace the real world for different applications. For example, the user can interact with virtual object models more flexibly, using various interaction methods, than with real objects in the real environment. VR has become a standard technology in research, but it has not been fully exploited in professional use even though its potential has been recognized.
12
+
13
+ In the field of medicine, x-ray imaging is routinely used to diagnose diseases and anatomical changes as well as for scientific surveys [23]. In many cases 2D medical images are sufficient, but they can be complemented with 3D images for more complex operations where a detailed understanding of the 3D structures is needed.
14
+
15
+ When planning surgeries, medical doctors, surgeons, and radiologists study 3D images. Viewing 3D images on 2D displays can present issues in controlling object position, orientation, and scaling. Using VR devices like head-mounted displays (HMDs), the 3D images can be viewed and interacted with in a more easily perceived 3D environment than on a 2D display. For medical professionals to be able to do the same tasks in VR as they do in 2D, the interaction methods need to be studied properly. The interaction method needs to be accurate, reasonable, and suitable for the medical tasks. Because the domain is medicine, accuracy is crucial to avoid as many mistakes as possible. The interaction method needs to be reasonable so that doctors would use it in their daily work and could still focus on their primary tasks without paying too much attention to the interaction method. One typical task for doctors is marking anatomical structures and areas on the surface of the 3D model. The marked points define the operative area, or they can be used for training.
16
+
17
+ For 2D content, a mouse is one of the best options for interaction due to its capability to point at small targets with high accuracy and the fact that many users are already very experienced with this device [19]. A mouse cursor can be used for 3D pointing with ray-casting [26], which allows pointing at distant objects as well. The familiarity and accuracy make the mouse a worthy input method in VR, even though it is not a 3D input device. In addition, the controller has been identified as an accurate interaction method [7, 10], and it is typically used in VR environments [14]. Controllers enable direct manipulation, and the reach of distant objects is different than with the mouse with ray-casting. Other devices, like the stylus, have been studied in pointing tasks previously [19, 32], and therefore we investigated the stylus's performance together with a controller.
18
+
19
+ Cameras on HMDs enable interaction even without input devices. This is made possible by capturing hand positions, movements, and locations. Pointing at targets with a finger is natural for humans, so hand interaction is well suited to our study. Hand interaction was used as one condition because users do not need to learn any new hardware devices.
20
+
21
+ We chose a marking task to evaluate the three interaction conditions. The conditions were a standard mouse, bare hands, and a handheld controller with a VR stylus. All the methods were used in VR so that the variation between the methods would be as low as possible and the comparison would concentrate on the interaction techniques. We had 12 participants who were asked to do simplified medical surgery marking tasks. To study the accuracy of the interaction methods, we created an experiment where the 3D model contained a predefined target that was marked (pointed at and selected). In a real medical case, the doctor would define the target, but then the accuracy cannot be easily measured.
22
+
23
+ The paper is organized as follows: First, we go through the background of object manipulation and marking, interaction methods in 3D environments, and jaw osteotomy surgery planning (Section 2). Then, we introduce the compared interaction methods and the measurements used (Section 3), and go through the experiment (Section 4), including apparatus, participants, and the study task. Finally, the results are presented (Section 5) and discussed (Section 6).
24
+
25
+ ## 2 BACKGROUND
26
+
27
+ ### 2.1 Object manipulation and marking
28
+
29
+ Object manipulation and object marking have been used separately as study tasks when different VR interaction methods have been studied. Sun et al. [24] used a 3D positioning task that involved object manipulation. When a mouse and a controller were compared for precise 3D positioning, the mouse was the better input device. Object marking has been studied without manipulation in [19]. Argelaguet and Andujar [1] have also studied 3D object selection techniques in VR.
30
+
31
+ ### 2.2 Input devices for object manipulation and marking
+
+ #### 2.2.1 Mouse
32
+
33
+ A mouse is a common and familiar device that can point at small targets in 2D content with high accuracy [19]. The mouse is also a common device for medical surgery planning [14]. Many studies have used a mouse cursor for 3D pointing with ray-casting [5, 14, 19, 26]. The ray-casting technique is easily understood, and it is a solution for reaching objects at a distance [17].
34
+
35
+ Compared to other interaction methods in VR, the issue of the discrepancy between the 2D mouse and a 3D environment has been reported [1], and Kim and Choi [13] mentioned that it creates low user immersion. In addition, use of a mouse usually forces the user to sit down next to a table instead of standing. The user can rest their arms on the table while interacting with the mouse, which decreases hand fatigue. Johnson et al. [11] stated that fatigue with mouse interaction appears only after 3 hours.
36
+
37
+ Bachmann et al. [3] found that the Leap Motion controller has a higher error rate and higher movement time than the mouse. Kim and Choi [13] showed in their study that a 2D mouse has high performance in working time, accuracy, ease of learning, and ease of use in VR. Both Bachmann et al. and Kim and Choi found the mouse to be accurate, but on the other hand Li et al. [14] pointed out that in difficult marking tasks a small displacement of a physical mouse can lead to a large displacement on the 3D model in the 3D environment.
38
+
39
+ #### 2.2.2 Hands
40
+
41
+ Hand interaction is a common VR interaction method. Voigt-Antons et al. [31] compared free-hand interaction and controller interaction with different visualizations. Huang et al. [10] compared different interaction combinations between free hands and controllers. Both found that hand interaction has lower precision than controller interaction. With alternative solutions like a Leap Motion controller [20, 33] or wearable gloves [34], hand interaction can be made more accurate. Physical hand movements create a natural and realistic experience of interaction [6, 10], and therefore hand interaction is still an area of interest.
42
+
43
+ #### 2.2.3 Controllers
44
+
45
+ Controllers are the leading control inputs for VR [10]. When using controllers as the interaction method, marking and selecting are usually done with one of the triggers or buttons on the controller. Handheld controllers are described as stable and accurate devices [7, 10]. However, holding extra devices in the hands might become inconvenient. When interacting with hands or controllers in VR, arm fatigue is one of the main issues [1, 9]. Holding the arms up and carrying the devices increase arm fatigue.
46
+
47
+ #### 2.2.4 VR stylus
48
+
49
+ Batmaz et al. [4] studied the Logitech VR Ink stylus as a selection method in virtual reality. They found that with a precision grip there are no statistical differences in marking when the distance to the target changes. Wacker et al. [32] presented, as one of their designs, a VR stylus for mid-air pointing where selection happened by pressing a button. For object selection, users prefer a 3D pen over a controller in VR [19].
50
+
51
+ #### 2.2.5 Others
52
+
53
+ It is not always necessary to have an interaction device. Suresh et al. [25] used three voice commands to control gestures of a robotic arm in VR. They concluded that voice commands were useful especially for people with disabilities. In addition, voice is a useful input method in cases where hands and eyes are continuously busy [9], and speech is the primary mode of human communication [22]. Pfeuffer et al. [18] studied gaze as an interaction method together with hand gestures but found that both hand and gaze tracking lack tracking fidelity. In addition to poor eye tracking, Nukarinen et al. [16] stated that human-factor issues made gaze the least preferred input method.
54
+
55
+ ### 2.3 Jaw osteotomy surgery planning
56
+
57
+ Cone Beam Computed Tomography (CBCT) is a medical imaging technique that produces 3D images that can be used in virtual surgery planning. Compared to previous techniques that were used in medical surgery planning, like cast models, virtual planning with CBCT images has extra costs and time requirements [8]. However, the technique offers several advantages for planning accuracy and reliability [23]. CBCT images can be used as 3D objects in VR for surgery planning with an excellent match to real objects [8]. Ayoub and Pulijala [2] reviewed different studies about virtual and augmented reality applications in oral and maxillofacial surgeries.
58
+
59
+ In virtual surgery planning, the procedures for surgery are implemented and planned beforehand. The real surgery is done based on the virtual plan. Common tasks in dental planning are specifying the location of impacted teeth, preventing nerve injuries, or preparing guiding flanges [23]. In VR this can be done by marking critical areas or drawing cutting lines onto the models. Virtual planning can be used in student education as well, where the procedures can be realistically practiced. Reymus et al. [21] found that students understood mouth anatomy better after studying 3D models in a VR environment than from a regular 2D image. Objects can be closer and bigger, and they can move in the depth direction, in a 3D environment compared to a 2D environment [12].
60
+
61
+ Tasks like understanding the 3D object and marking critical areas on it need to be done in medical surgery planning. However, working with 3D objects in a 2D environment makes the task more difficult. Suitable interaction and marking methods help users understand 3D objects and perform the required tasks in VR. In this study, we evaluated three methods for VR object manipulation and marking and examined their performance in simplified medical surgery planning tasks.
62
+
63
+ ## 3 METHOD
64
+
65
+ ### 3.1 Mouse
66
+
67
+ In the first interaction method, a regular mouse was used inside a VR environment (Figure 1). In the VR environment there was a visualized mouse model that the participant could move by manipulating the physical mouse, and which controlled the direction of a ray starting from the model. The ray was always visible in the Mouse interaction.
68
+
69
+ Mouse was used one-handed while the other two methods were two-handed. The mouse performed two functions, manipulation and marking, while in the other methods these functions were separated into different hands. In addition, Mouse used ray-casting (a ray from the mouse), while the two other methods did not use it; they used direct mid-air object manipulation.
70
+
71
+ The participant could rotate the object in 3 dimensions by moving the mouse while holding the right button. Translation in 3 dimensions was done with the scroll wheel and scroll button: scrolling the wheel zoomed in and out (translation in Z), and pressing the scroll button while moving the mouse translated the object up-down and sideways (translation in X and Y). Markings were made by pointing at the target with the ray and pressing the left button.
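+ Since marking with this method reduces to ray-casting against the model, the following minimal sketch shows the core computation, with the target approximated as a sphere as in the study task. This is our own illustration, not the study software (which was built in Unity); the function and variable names are hypothetical.
+
+ ```python
+ # Minimal ray-casting sketch: intersect a ray (origin o, direction d)
+ # with a sphere (center c, radius r). Returns the nearest hit point or
+ # None; a marking click would place the mark at the returned point.
+ import math
+
+ def ray_sphere_hit(o, d, c, r):
+     oc = [o[i] - c[i] for i in range(3)]
+     a = sum(d[i] * d[i] for i in range(3))
+     b = 2 * sum(d[i] * oc[i] for i in range(3))
+     k = sum(oc[i] * oc[i] for i in range(3)) - r * r
+     disc = b * b - 4 * a * k
+     if disc < 0:
+         return None  # ray misses the sphere
+     t = (-b - math.sqrt(disc)) / (2 * a)
+     if t < 0:
+         return None  # sphere is behind the ray origin
+     return tuple(o[i] + t * d[i] for i in range(3))
+
+ print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # (0.0, 0.0, 4.0)
+ ```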
72
+
73
+ Seeing the real-world mouse inside VR does not really require pass-through, even though the mouse was visible in our study: after putting on the headset, the user could see a virtual mouse positioned where the physical mouse was located, making the device easy to find and reach. When the user moved the physical mouse, the movements were translated into rotation of the ray representation coming from the virtual mouse, so the user could cover a large space, similar to using a mouse with a 2D display. To improve ergonomics, the user could configure the desk and chair for their comfort.
74
+
75
+ ### 3.2 Hands
76
+
77
+ As the second interaction method, the participant used bare hands. The left hand was for object manipulation and the right hand for object marking. The participant could pick up the 3D object with a pinch gesture of the left hand to rotate and move the object. Marking was done with a virtual pen. In the VR environment the participant had the virtual pen attached to their right palm, near the index finger (Figure 2 right). The pen moved according to the movements of the palm. When the virtual pen was moved close to the target, the pen tip changed its color to green to show that the pen was on the surface of the object. This visual feedback compensated for the possible lack of depth perception in mid-air gestures. The marking on the surface was made by pressing a virtual button on the pen with the index finger, so that a yellow sphere appeared where the pen tip pointed.
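+ The color change is simple proximity feedback. A minimal sketch of the idea follows; it is our own illustration, and the threshold value is invented for the example rather than taken from the study software.
+
+ ```python
+ # Illustrative proximity feedback for the virtual pen tip: the tip
+ # turns green within a small distance of the surface, compensating
+ # for weak depth perception in mid-air interaction.
+ import math
+
+ def tip_color(tip, surface_point, threshold=0.005):  # meters
+     return "green" if math.dist(tip, surface_point) <= threshold else "white"
+
+ print(tip_color((0.10, 0.20, 0.30), (0.10, 0.20, 0.304)))  # green
+ ```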
78
+
79
+ ![01963e5a-97ad-76d9-bfad-cc2ee5117501_2_278_146_1238_497_0.jpg](images/01963e5a-97ad-76d9-bfad-cc2ee5117501_2_278_146_1238_497_0.jpg)
80
+
81
+ Figure 1: Mouse interaction method outside VR (left). Mouse marking method inside VR and the study task (right).
82
+
83
+ ### 3.3 Controller and VR stylus
84
+
85
+ The third interaction method was based on having a controller in the participant's left hand for object manipulation and a VR stylus in the right hand for marking (Figure 3). The participant grabbed the 3D object with a hand-grab gesture around the controller to rotate and move the object. The markings were made with the physical VR stylus. The VR stylus was visualized in VR, as was the mouse, so the participant knew where the device was located. The participant pointed at the target with the stylus and pressed its physical button to make the mark. The press action was identical to the virtual pen press in the Hands method. There was haptic feedback when touching the physical VR stylus, which did not happen with the virtual pen.
86
+
87
+ There have been some supporting results for using a mouse in VR [3, 13, 14, 17], but a 2D mouse is not fully compatible with the 3D environment [13]. We studied the ray method with Mouse to compare it against Hands and Controller+Stylus for 3D object marking. We also compared Hands, without any devices, to methods with a device in one or both hands. The marking gesture was designed to be similar in the Hands and Controller+Stylus methods to be able to compare the effect of the devices.
88
+
89
+ ### 3.4 Measurements and the pilot study
90
+
91
+ The participant was asked to make a marking as close to the target location as possible. We used the Euclidean distance to measure the distance between the target and the participant's marking. The task completion times were measured. The participant was able to re-mark the target if they were dissatisfied with the current marking. We counted how many re-markings were made to see if any of the interaction methods required more re-marking than the others. We thus measured accuracy in two ways: as the distance from the target and as the number of discarded markings.
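+ The two accuracy measures can be expressed compactly; the sketch below is our own illustration of them (the coordinates are fabricated, in meters as logged by the study software).
+
+ ```python
+ # Sketch of the two accuracy measures: Euclidean distance of the final
+ # marking from the target, and the number of re-markings before it.
+ import math
+
+ def marking_error_mm(target, marking):
+     # Euclidean distance between target and marking, meters -> mm
+     return 1000 * math.dist(target, marking)
+
+ target = (0.100, 0.200, 0.300)
+ markings = [(0.102, 0.200, 0.301), (0.100, 0.201, 0.300)]  # all attempts
+ print(len(markings) - 1)                                 # re-markings: 1
+ print(round(marking_error_mm(target, markings[-1]), 2))  # final error: 1.0 mm
+ ```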
92
+
93
+ A satisfaction questionnaire was filled in after each interaction method trial. There was a question and seven satisfaction statements that were evaluated on a Likert scale from 1 (strongly disagree) to 5 (strongly agree). The statements were grouped so that the question and the first statement were about the overall feeling, and the rest of the statements were about object manipulation and marking separately. The statements were:
96
+
97
+ - Would you think to use this method daily?
98
+
99
+ - Your hands are NOT tired.
100
+
101
+ - It was natural to perform the given tasks with this interaction method.
102
+
103
+ - It was easy to handle the 3D objects with this interaction method.
104
+
105
+ - The interaction method was accurate.
106
+
107
+ - The marking method was natural.
108
+
109
+ - It was easy to make the marking with this marking method.
110
+
111
+ - The marking method was accurate.
112
+
113
+ The statements were designed to measure fatigue, naturalness, and accuracy, as these have been measured in earlier studies [1, 6, 10] as well. Accuracy was also measured from the data to see whether the objective and subjective results were consistent. With these statements, it was possible to measure easiness and the willingness to use the method daily, which cannot be obtained from objective data.
114
+
115
+ In the questionnaire there were also open-ended questions about positive and negative aspects of the interaction method. In the end the participant was asked to rank the interaction methods in order from the most liked to the least liked.
116
+
117
+ A pilot study was arranged to ensure that the tasks and the study procedure were feasible. Based on the findings of the pilot study, we modified the introduction to be more specific and added a mention of the measured features. In addition, we added the possibility to rotate the 3D object even after the mouse ray moved off the object. The speed of the mouse ray in the VR environment was increased so that it better matched the movements of the real mouse.
118
+
119
+ ### 3.5 Statistical measures
120
+
121
+ We used two different statistical tests to analyze possible statistically significant differences between different parameter sets. For objective data (completion times, number of markings, and accuracy) we used the paired t-test. For data from the evaluation questionnaires (fatigue, daily use, naturalness, easiness, and subjective accuracy) we first used the Friedman test to see if any statistically significant differences appeared, and then we used the Wilcoxon signed-rank test, as it does not assume the numbers to be on a ratio scale or normally distributed.
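+ A minimal sketch of this pipeline with SciPy is shown below; it is our own illustration, and the data arrays are fabricated placeholders with one value per participant and condition.
+
+ ```python
+ # Sketch of the analysis: paired t-tests for objective data; a Friedman
+ # omnibus test followed by pairwise Wilcoxon signed-rank tests (with a
+ # Bonferroni-corrected limit of 0.05/3) for questionnaire data.
+ import numpy as np
+ from scipy import stats
+
+ rng = np.random.default_rng(0)  # placeholder data, n = 12 participants
+ mouse = rng.normal(3.2, 1.0, 12)
+ hands = rng.normal(5.9, 1.0, 12)
+ stylus = rng.normal(4.2, 1.0, 12)
+
+ print(stats.ttest_rel(mouse, hands))                  # objective data
+ print(stats.friedmanchisquare(mouse, hands, stylus))  # omnibus test
+ alpha = 0.05 / 3                                      # Bonferroni correction
+ for a, b in [(mouse, hands), (mouse, stylus), (hands, stylus)]:
+     print(stats.wilcoxon(a, b).pvalue < alpha)
+ ```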
122
+
123
+ ![01963e5a-97ad-76d9-bfad-cc2ee5117501_3_257_146_1287_496_0.jpg](images/01963e5a-97ad-76d9-bfad-cc2ee5117501_3_257_146_1287_496_0.jpg)
124
+
125
+ Figure 2: Hands interaction method outside VR (left). Hands marking method inside VR and the study task (right).
126
+
127
+ ![01963e5a-97ad-76d9-bfad-cc2ee5117501_3_369_721_1056_493_0.jpg](images/01963e5a-97ad-76d9-bfad-cc2ee5117501_3_369_721_1056_493_0.jpg)
128
+
129
+ Figure 3: Controller interaction method outside VR (left). Stylus marking method inside VR and the study task (right).
130
+
131
+ The study software saved times at millisecond resolution and distances in meters. To clarify the analysis, we converted these to seconds and millimeters.
132
+
133
+ ## 4 EXPERIMENT
134
+
135
+ ### 4.1 Participants
136
+
137
+ We recruited 12 participants for the study. The number of participants was decided based on a power analysis for the paired t-test and the Wilcoxon signed-rank test, assuming a large effect size, a power level of 0.8, and an alpha level of 0.05. The post-hoc calculated effect sizes (Cohen's d or R value, for the paired t-test or Wilcoxon signed-rank test, respectively) are reported together with the p-values in the Results (Section 5) for comparison to the assumption of a large effect size. Ten of the participants were university students and two were full-time employees, in fields not related to medicine or dentistry. The ages varied from 21 to 30 years; the mean age was 25 years. There were 6 female participants and 6 male participants. Earlier VR experience was asked about on a scale from 0 to 5, and the mean was 1.75. Two participants did not have any earlier experience. One participant was left-handed but was used to using the mouse with the right hand. The other participants were right-handed.
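+ The sample-size calculation can be reproduced approximately with a normal-approximation power analysis; the sketch below is our own illustration, not necessarily the authors' exact procedure.
+
+ ```python
+ # Approximate a priori sample size for a paired t-test via the normal
+ # approximation n = ((z_(1-alpha/2) + z_power) / d)^2, assuming a
+ # large effect size d = 0.8, power 0.8, and two-sided alpha 0.05.
+ import math
+ from scipy.stats import norm
+
+ d, power, alpha = 0.8, 0.8, 0.05
+ z_a = norm.ppf(1 - alpha / 2)  # ~1.96
+ z_b = norm.ppf(power)          # ~0.84
+ n = ((z_a + z_b) / d) ** 2
+ print(math.ceil(n))  # 13 pairs; the exact t-based value is slightly larger
+ ```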
138
+
139
+ ### 4.2 Apparatus
140
+
141
+ #### 4.2.1 Software, hardware, and hand tracking
142
+
143
+ The experiment software was built using Unity [27]. With all methods we used a Varjo VR-2 Pro headset [29], whose integrated vision-based hand tracking was used for the Hands interaction. Hands were tracked by an Ultraleap Stereo IR 170 sensor mounted on the Varjo VR-2 Pro. For Controller+Stylus, we used a Valve Index controller [28] together with a Logitech VR Ink stylus [15]. These were tracked by SteamVR 2.0 base stations [30] around the experiment area.
144
+
145
+ #### 4.2.2 Object manipulation and object marking
146
+
147
+ The study task combined two phases: an object manipulation phase and an object marking phase. In the object manipulation phase the participant either selected the 3D object with the mouse ray or pinched or grabbed the 3D object with a hand gesture. The 3D objects did not have any physics and floated in mid-air. By rotating and translating the object, the participant could view the object from different angles.
148
+
149
+ Instead of only pointing at the target, the selection needed to be confirmed. This allowed us to measure the marking accuracy and whether the user understood the 3D target's location relative to the pointing device. The participant could either release the 3D object in mid-air or hold it in their hand when Hands or Controller+Stylus was used in the object marking task. The marking was done either by pointing with the mouse ray and clicking the left button, by touching the target with the virtual pen and selecting with a hand gesture, or by touching and selecting with the VR stylus.
150
+
151
+ ### 4.3 Procedure
152
+
153
+ First, the participant was introduced to the study, asked to read and sign a consent form, and asked to fill in a background information form. For all conditions, the facilitator demonstrated the system functions and the controls. Each participant had an opportunity to practice before every condition. The practice task was to move and rotate a cube with several target spheres, and to mark those targets as many times as needed to get to know both the interaction and the marking methods. After the participant felt confident with the method, they were asked to press the Done button, and the real study task appeared.
154
+
155
+ The participant was asked to find and select a hidden target mark on the surface of each 3D object model. The target was visible the whole time, whereas the participant's marking appeared only once the participant created it. When the target was found, it was first pointed at and then selected. The aim was to place the participant's sphere (yellow) inside the target sphere (red) (see Figures 1 right, 2 right, and 3 right). Each 3D object had one target on it, and the task was repeated five times per condition. The order of the 3D objects was the same for all participants: lower jaw, heart, skull, tooth, and skull. The order of the interaction methods was counterbalanced between the participants using balanced Latin squares, to compensate for possible learning effects. The target locations on the 3D objects were predefined and presented in the same order to all participants.
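+ For reference, the sketch below is our own implementation of the standard balanced Latin square construction for assigning condition orders (the condition labels are ours, not the study software's). For an odd number of conditions, odd-numbered participants receive the reversed row, so position and first-order carryover effects balance out over every 2n participants (6 here, fitting 12 participants exactly).
+
+ ```python
+ # Balanced Latin square row for a given participant over n conditions.
+
+ def latin_square_row(n, participant):
+     row, j, h = [], 0, 0
+     for i in range(n):
+         if i < 2 or i % 2 == 1:
+             val, j = j, j + 1          # take the next low value
+         else:
+             val, h = n - h - 1, h + 1  # take the next high value
+         row.append((val + participant) % n)
+     if n % 2 == 1 and participant % 2 == 1:
+         row.reverse()                  # reversed rows for odd n
+     return row
+
+ conditions = ["Mouse", "Hands", "Controller+Stylus"]
+ for p in range(6):
+     print(p, [conditions[i] for i in latin_square_row(3, p)])
+ ```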
156
+
157
+ The task required both object manipulation (rotating and translating) and marking (pointing and selecting). By combining the manipulation and marking tasks, we wanted to create a task that simulates what medical professionals would do during virtual surgery planning. Both object manipulation and marking are needed by medical professionals. Marking is relevant when selecting specific locations and areas of a 3D model, and it requires accuracy to place the marks in the relevant locations. This medical marking task does not differ from regular marking tasks in other contexts as such, but the accuracy requirements are higher. By manipulating the 3D model, the professional has the option to look at the pointed area from different angles to verify its specific location in the 3D environment.
158
+
159
+ A satisfaction questionnaire was filled in after each interaction method trial, and after all three trials a questionnaire was used to rank the conditions.
160
+
161
+ ## 5 RESULTS
162
+
163
+ In this section, we report the findings of the study. First, we present the objective results from data collected during the experiment, and then the subjective results from the questionnaires.
164
+
165
+ ### 5.1 Objective results
166
+
167
+ The task completion times (Figure 4, top left) include both object manipulation and marking. They had some variation, but the distributions of median values for each interaction method were similar and there were no significant differences. The completion time varied slightly depending on how much VR experience the participant had beforehand, but there were no statistically significant differences.
168
+
169
+ The number of markings done before task completion varied between the interaction methods (Figure 4, top right). The median values for the Mouse, Hands, and Controller+Stylus conditions were 6.5, 12, and 7 markings, respectively. However, there were no statistically significant differences. Some participants made many markings at a fast pace (2-3 markings per second), leading to a high number of total markings.
170
+
171
+ There were some clear differences in final marking accuracy between the interaction methods (Figure 4, bottom). The median values for the Mouse, Hands, and Controller+Stylus methods were 3.2, 5.9, and 4.2 millimeters, respectively. The variability between participants was highest with the Hands method. We found a statistically significant difference between the Mouse and Hands methods (p-value 0.004, Cohen's d 1.178¹) using a paired t-test and a Bonferroni-corrected p-value limit of 0.017 (= 0.05/3).
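+ The reported effect sizes can be computed as sketched below (our own code). The paper does not state its exact formula for R; we use the matched-pairs rank-biserial correlation, one common choice, which reaches 1.0 when every participant shifts in the same direction, consistent with the reported values of R = 1.000.
+
+ ```python
+ # Sketch: paired-samples Cohen's d, and a matched-pairs rank-biserial
+ # correlation as the R effect size for Wilcoxon signed-rank tests.
+ import numpy as np
+ from scipy.stats import rankdata
+
+ def cohens_d_paired(a, b):
+     diff = np.asarray(a, float) - np.asarray(b, float)
+     return diff.mean() / diff.std(ddof=1)
+
+ def rank_biserial(a, b):
+     diff = np.asarray(a, float) - np.asarray(b, float)
+     diff = diff[diff != 0]        # drop ties, as the Wilcoxon test does
+     ranks = rankdata(np.abs(diff))
+     plus, minus = ranks[diff > 0].sum(), ranks[diff < 0].sum()
+     return (plus - minus) / ranks.sum()
+
+ a = [3, 4, 5, 4, 6, 5, 4, 5, 6, 4, 5, 5]  # placeholder ratings
+ b = [2, 3, 4, 4, 5, 3, 3, 4, 5, 3, 4, 4]
+ print(round(cohens_d_paired(a, b), 3), rank_biserial(a, b))  # 2.345 1.0
+ ```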
172
+
173
+ ### 5.2 Subjective data
174
+
175
+ Friedman tests showed statistically significant differences in daily use (p-value 0.002), interaction naturalness (p-value 0.000), interaction easiness (p-value 0.001), interaction accuracy (p-value 0.007), marking easiness (p-value 0.039), and ranking (p-value 0.000). In the evaluations of tiredness there were no significant differences (Figure 5, left). Most participants did not feel tired using any of the methods, but the experiment was rather short.
176
+
177
+ In pairwise tests of daily use using the Wilcoxon signed-rank test we found significant differences (Figure 5, right): between the Mouse and Controller+Stylus methods (p-value 0.015, R 0.773²) and between the Hands and Controller+Stylus methods (p-value 0.003, R 1.000).
178
+
179
+ We asked the participants to evaluate object manipulation and marking separately. In the object manipulation evaluation, there were statistically significant differences in naturalness between Controller+Stylus and Mouse (p-value 0.003, R 1.000) and between Controller+Stylus and Hands (p-value 0.009, R 0.879). In object manipulation easiness, Controller+Stylus differed statistically significantly from both Mouse and Hands (p-value 0.003, R 1.000 for both comparisons); see Figure 6. In the manipulation accuracy evaluation we found a statistically significant difference between the Controller+Stylus and Hands methods (p-value 0.003, R 1.000). In the object marking evaluation (Figure 7), the only significant difference was measured between the Controller+Stylus and Mouse methods in easiness (p-value 0.009, R 1.000).
180
+
181
+ Multiple participants commented that the controller interaction felt stable and that it was easy to move and rotate the 3D model with the controller. The participants also commented that holding a physical device whose weight could be felt increased the feeling of naturalness. Not all comments agreed: one participant felt the VR stylus was accurate, while another said it felt clumsy.
182
+
183
+ When asked, 11 out of 12 participants ranked Controller+Stylus as the most liked method. The distribution of ranking values is shown in Table 1. The ranking values of the Controller+Stylus method were statistically significantly different from Mouse (p-value 0.008, R 0.885) and Hands (p-value 0.003, R 1.000).
184
+
185
+ Table 1: The number of mentions of different rankings of the interaction methods when asked for the most liked (1st), the second most liked (2nd), and the least liked (3rd) method.
186
+
187
+ | Condition | 1st | 2nd | 3rd |
+ | --- | --- | --- | --- |
+ | Mouse | 1 | 7 | 4 |
+ | Hands | 0 | 4 | 8 |
+ | Controller+Stylus | 11 | 1 | 0 |
188
+
189
+ ## 6 DISCUSSION
190
+
191
+ In this study, we were looking for the most feasible interaction method in VR for object manipulation and marking in a medical context. The Controller+Stylus method was overall the most suitable for a task that needs both object manipulation and marking. Controller+Stylus was the most liked in all subjective features, while the Mouse and Hands conditions were evaluated very similarly. The smallest number of markings was made with Controller+Stylus, but no significant differences were found. There were statistically significant differences between the methods in daily use, interaction naturalness, and easiness. Controller+Stylus was statistically significantly more accurate in object manipulation than Hands (p-value 0.003), and easier to use than Mouse (p-value 0.003). Without earlier experience with the VR stylus, the participants had difficulties finding the correct button when marking with the stylus. The physical stylus device cannot be seen when wearing the VR headset, and the button could not be felt clearly. Even though the Controller+Stylus combination was evaluated as natural and the most liked method in this study, hand-held devices may feel inconvenient [10]. In our study, some participants seemed to like the physical feel of the devices. However, this result was based on the subjective opinions of the participants, and it might change depending on the use case or devices.
192
+
193
+ ---
194
+
195
+ ¹ Cohen's d ≥ 0.8 is considered a large effect size
196
+
197
+ ² An R value ≥ 0.5 is considered a large effect size
198
+
199
+ ---
200
+
201
+ ![01963e5a-97ad-76d9-bfad-cc2ee5117501_5_266_175_1229_833_0.jpg](images/01963e5a-97ad-76d9-bfad-cc2ee5117501_5_266_175_1229_833_0.jpg)
202
+
203
+ Figure 4: The task completion times for the different conditions (top left). The median values for each participant are rather similar between the methods. There were two outlier values (by the same participant, for the Mouse and Hands conditions) that were removed from the visualization. The number of markings per five targets (top right). There were some differences between the interaction methods (the median value for Hands was higher than for the other methods), but no significant differences. The marking accuracy (bottom). There were some clear differences between the interaction methods in the final marking accuracy.
204
+
205
+ ![01963e5a-97ad-76d9-bfad-cc2ee5117501_5_333_1238_1116_376_0.jpg](images/01963e5a-97ad-76d9-bfad-cc2ee5117501_5_333_1238_1116_376_0.jpg)
206
+
207
+ Figure 5: The evaluation of fatigue (left). None of the methods were found to be particularly tiring. The evaluation of possible daily use (right). Controller+Stylus was significantly more usable for daily use than the other methods.
208
+
209
+ There are many possible reasons for the low hand tracking accuracy. Hand inaccuracy can be seen in the large number of markings and the large distribution of task completion times with Hands, as the participants were not satisfied with their first marking. Hands was the only method where only one participant succeeded with the minimum of 5 markings, whereas with the other methods several participants completed the task with 5 markings. One explanatory factor can be the lack of hand tracking fidelity that has also been noticed in other studies [10, 34]. In addition, inaccuracy in the human motor system contributes to the inaccuracy of hands [9]. A vision-based hand tracking system that uses a camera on the HMD does not recognize the hand gesture well enough, and as a result, the participant must repeat the same gesture or movement multiple times to succeed. This extra work also increases fatigue with Hands. Even though fatigue was low with all interaction methods, this study did not measure the fatigue of long-term activity. These are clear indications that Hands interaction needs further development before it can be used in tasks that need high marking accuracy. Several earlier studies have reported hand inaccuracy compared to controllers [9, 10, 34].
210
+
211
+ ![01963e5a-97ad-76d9-bfad-cc2ee5117501_6_234_178_1321_424_0.jpg](images/01963e5a-97ad-76d9-bfad-cc2ee5117501_6_234_178_1321_424_0.jpg)
212
+
213
+ Figure 6: The evaluation of interaction method naturalness (left), easiness (middle), and accuracy (right). Controller+Stylus was the most liked method in these features.
214
+
215
+ ![01963e5a-97ad-76d9-bfad-cc2ee5117501_6_231_744_1326_422_0.jpg](images/01963e5a-97ad-76d9-bfad-cc2ee5117501_6_231_744_1326_422_0.jpg)
216
+
217
+ Figure 7: The evaluation of marking method naturalness (left), easiness (middle), and accuracy (right). Median values in these features are rather similar, and significant difference was found only in marking easiness.
218
+
219
+ Haptic feedback was provided with Mouse and when marking with the VR stylus. With Hands there was only visual feedback. The lack of haptic feedback might have affected the marking accuracy as well, because the accuracy was much better with the physical stylus. Li et al. [14] found that at low marking difficulty, the mouse with a 2D display was faster than the kinesthetic force-feedback device in VR. For high marking difficulty, their other VR interface, which used a VR controller with vibrotactile feedback, was better than the 2D interface. They found that a mouse with a 2D display has fast pointing capability, but in our study the task completion times did not vary between Mouse and the other methods. Li et al. described how manipulating the viewing angle is more flexible when wearing an HMD than with a mouse and a 2D display. In VR interfaces the participant can rotate the 3D object while changing the viewing angle by moving their head. In our study, all methods used the HMD, so changing the viewing angle was equally flexible.
220
+
221
+ Mouse was a statistically significantly more accurate marking method than Hands. Mouse was not affected by some of the issues that were noticed with Hands or Controller+Stylus. With Mouse, it was not felt to be problematic that the device could not be seen during use. There were no sensor fidelity issues with Mouse, and the mouse was a familiar device to all participants. Only the ray that replaced the cursor was an unfamiliar feature and caused some problems. We found that the ray worked well with simple 3D models, but there were many difficulties with complex models, where the viewing angle needed to be exactly right to reach the target. If any part of the 3D model blocked the ray, the target could not be marked. When the target was easy to mark, the accuracy using Mouse was high. It can be stated that Mouse was an accurate method in VR, but in all other measured properties Controller+Stylus was better.
222
+
223
+ Both the target and the marking were spheres in the 3D environment. During the study, it was noticed that when a participant made their marking in the same location as the target, the marking sphere disappeared inside the target sphere. This caused uncertainty about whether the marking was lost or in the center of the target. This may have affected the results when participants re-marked just to be able to see their marking, which was then no longer in the center of the target. In future studies, the marking sphere should be designed to be bigger than the target and transparent, so that the participant can be sure about the location of both spheres.
224
+
225
+ Our focus was on comparing three different interaction and marking methods and their suitability for the medical marking task. To simplify the experimental setup, the experiment was conducted with simplified medical images, which may have led to optimistic results for the viability of the methods. Even then, there were some problems with the Mouse interaction method. To further confirm that the results also hold for more realistic content, a similar study should be conducted in future work with authentic material utilizing, for example, original CBCT images in VR instead of the simplified ones.
226
+
227
+ ## 7 CONCLUSION
228
+
229
+ 3D medical images can be used in VR environments to plan surgeries with good results. During the planning process, one needs to interact with the 3D models and be able to make high-accuracy markings on them. In this study, we evaluated the feasibility of three different VR interaction methods, Mouse, Hands, and the Controller+Stylus combination, in virtual reality. Based on the results, we can state that the Valve Index controller and Logitech VR Ink stylus combination was the most feasible for tasks that require both 3D object manipulation and high marking accuracy in VR. This combination did not have issues with complex 3D models, and its sensor fidelity was better than with Hands interaction. Statistically significant differences were found between the controller combination and the other methods.
230
+
231
+ Hand-based interaction was the least feasible for this kind of use according to the collected data. The Hands and Mouse methods were evaluated as almost equally feasible by participants. With current technology, free-hand use cannot be recommended for accurate marking tasks. Mouse interaction was more accurate than Controller+Stylus; in detailed tasks Mouse could replace free-hand interaction. However, the discrepancy between the 2D mouse and the 3D environment needs to be solved before Mouse could be considered a viable interaction method in VR.
232
+
233
+ ## REFERENCES
234
+
235
+ [1] F. Argelaguet and C. Andujar. A survey of 3d object selection techniques for virtual environments. Computers & Graphics, 37(3):121- 136, 2013.
236
+
237
+ [2] A. Ayoub and Y. Pulijala. The application of virtual reality and augmented reality in oral & maxillofacial surgery. BMC Oral Health, 19(1):1-8, 2019.
238
+
239
+ [3] D. Bachmann, F. Weichert, and G. Rinkenauer. Evaluation of the leap motion controller as a new contact-free pointing device. Sensors, 15(1):214-233, 2015.
240
+
241
+ [4] A. U. Batmaz, A. K. Mutasim, and W. Stuerzlinger. Precision vs. power grip: A comparison of pen grip styles for selection in virtual reality. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 23-28. IEEE, 2020.
242
+
243
+ [5] J. C. Coelho and F. J. Verbeek. Pointing task evaluation of leap motion controller in 3d virtual environment. Creating the difference, 78:78-85, 2014.
244
+
245
+ [6] S. Esmaeili, B. Benda, and E. D. Ragan. Detection of scaled hand interactions in virtual reality: The effects of motion direction and task complexity. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 453-462. IEEE, 2020.
246
+
247
+ [7] E. Gusai, C. Bassano, F. Solari, and M. Chessa. Interaction in an immersive collaborative virtual reality environment: a comparison between leap motion and htc controllers. In International Conference on Image Analysis and Processing, pp. 290-300. Springer, 2017.
248
+
249
+ [8] H. Hanken, C. Schablowsky, R. Smeets, M. Heiland, S. Sehner, B. Riecke, I. Nourwali, O. Vorwig, A. Gröbe, and A. Al-Dam. Virtual planning of complex head and neck reconstruction results in satisfactory match between real outcomes and virtual models. Clinical oral investigations, 19(3):647-656, 2015.
250
+
251
+ [9] D. Hannema. Interaction in virtual reality. Interaction in Virtual Reality, 2001.
252
+
253
+ [10] Y.-J. Huang, K.-Y. Liu, S.-S. Lee, and I.-C. Yeh. Evaluation of a hybrid of hand gesture and controller inputs in virtual reality. International Journal of Human-Computer Interaction, 37(2):169-180, 2021.
256
+
257
+ [11] P. W. Johnson, S. L. Lehman, and D. M. Rempel. Measuring muscle fatigue during computer mouse use. In Proceedings of 18th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 4, pp. 1454-1455. IEEE, 1996.
258
+
259
+ [12] M. Khamis, C. Oechsner, F. Alt, and A. Bulling. Vrpursuits: interaction in virtual reality using smooth pursuit eye movements. In Proceedings of the 2018 International Conference on Advanced Visual Interfaces, pp. 1-8, 2018.
260
+
261
+ [13] H. Kim and Y. Choi. Performance comparison of user interface devices for controlling mining software in virtual reality environments. Applied Sciences, 9(13):2584, 2019.
262
+
263
+ [14] Z. Li, M. Kiiveri, J. Rantala, and R. Raisamo. Evaluation of haptic virtual reality user interfaces for medical marking on 3d models. International Journal of Human-Computer Studies, 147:102561, 2021.
264
+
265
+ [15] Logitech. Vr ink pilot edition, 2021.
266
+
267
+ [16] T. Nukarinen, J. Kangas, J. Rantala, O. Koskinen, and R. Raisamo. Evaluating ray casting and two gaze-based pointing techniques for object selection in virtual reality. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, pp. 1-2, 2018.
268
+
269
+ [17] J. Petford, M. A. Nacenta, and C. Gutwin. Pointing all around you: selection performance of mouse and ray-cast pointing in full-coverage displays. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-14, 2018.
270
+
271
+ [18] K. Pfeuffer, B. Mayer, D. Mardanbegi, and H. Gellersen. Gaze+ pinch interaction in virtual reality. In Proceedings of the 5th Symposium on Spatial User Interaction, pp. 99-108, 2017.
272
+
273
+ [19] D.-M. Pham and W. Stuerzlinger. Is the pen mightier than the controller? a comparison of input devices for selection in virtual and augmented reality. In 25th ACM Symposium on Virtual Reality Software and Technology, pp. 1-11, 2019.
274
+
275
+ [20] L. E. Potter, J. Araullo, and L. Carter. The leap motion controller: a view on sign language. In Proceedings of the 25th Australian computer-human interaction conference: augmentation, application, innovation, collaboration, pp. 175-178, 2013.
276
+
277
+ [21] M. Reymus, A. Liebermann, and C. Diegritz. Virtual reality: an effective tool for teaching root canal anatomy to undergraduate dental students-a preliminary study. International Endodontic Journal, 53(11):1581-1587, 2020.
278
+
279
+ [22] K. Samudravijaya. Automatic speech recognition. Tata Institute of Fundamental Research Archives, 2004.
280
+
281
+ [23] A. Shokri, K. Ramezani, F. Vahdatinia, E. Karkazis, and L. Tayebi. 3d imaging in dentistry and oral tissue engineering. Applications of Biomedical Engineering in Dentistry, pp. 43-87, 2020.
282
+
283
+ [24] J. Sun, W. Stuerzlinger, and B. E. Riecke. Comparing input methods and cursors for 3d positioning with head-mounted displays. In Proceedings of the 15th ACM Symposium on Applied Perception, pp. 1-8, 2018.
284
+
285
+ [25] A. Suresh, D. Gaba, S. Bhambri, and D. Laha. Intelligent multi-fingered dexterous hand using virtual reality (vr) and robot operating system (ros). In International Conference on Robot Intelligence Technology and Applications, pp. 459-474. Springer, 2017.
286
+
287
+ [26] R. J. Teather and W. Stuerzlinger. Pointing at 3d target projections with one-eyed and stereo cursors. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 159-168, 2013.
288
+
289
+ [27] Unity. Unity real-time development platform, 2020. https://unity.com/.
290
+
291
+ [28] Valve. The valve index controller, 2021. https://www.valvesoftware.com/en/index/controllers.
292
+
293
+ [29] Varjo. Varjo vr-2 pro, 2020. https://varjo.com/products/vr-2-pro/.
294
+
295
+ [30] Vive. Steamvr base station 2.0, 2021. https://www.vive.com/eu/accessory/base-station2/.
296
+
297
+ [31] J.-N. Voigt-Antons, T. Kojic, D. Ali, and S. Möller. Influence of hand tracking as a way of interaction in virtual reality on user experience. In 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX), pp. 1-4. IEEE, 2020.
298
+
299
+ [32] P. Wacker, O. Nowak, S. Voelker, and J. Borchers. Evaluating menu techniques for handheld ar with a smartphone & mid-air pen. In 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services, pp. 1-10, 2020.
302
+
303
+ [33] F. Weichert, D. Bachmann, B. Rudak, and D. Fisseler. Analysis of the accuracy and robustness of the leap motion controller. Sensors, 13(5):6380-6393, 2013.
304
+
305
+ [34] L. Yang, J. Huang, T. Feng, W. Hong-An, and D. Guo-Zhong. Gesture interaction in virtual reality. Virtual Reality & Intelligent Hardware, 1(1):84-112, 2019.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/3MrDT4bTycn/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,243 @@
1
+ § VR STYLUS AND CONTROLLER COMBINATION WAS PREFERRED OVER THE MOUSE OR HAND TRACKING FOR OBJECT MANIPULATION AND MARKING TASKS IN VIRTUAL REALITY
2
+
3
+ § ABSTRACT
4
+
5
+ For medical surgery planning, virtual reality (VR) provides a new kind of user experience in which 3D images of the operation area can be utilized. Using VR, it is possible to view the 3D models in a more realistic 3D environment, which reduces perception problems and increases spatial understanding. In this experiment, we compared a mouse, hand tracking, and a combination of a VR stylus and a VR controller as interaction methods in VR. The purpose was to study the viability of the methods for tasks used in medical surgery planning in VR. The tasks required interaction with 3D objects and high marking accuracy. The controller combination was the most liked interaction method, with statistically significant differences to the other methods. In the subjective results it was rated the most appropriate, while in the objective results the mouse was the most accurate interaction method.
6
+
7
+ Index Terms: Human-centered computing-Human computer interaction (HCI)-Interaction devices-Pointing devices; Human-centered computing-Human computer interaction (HCI)- Empirical studies in HCI; Human-centered computing-Human computer interaction (HCI)—Interaction paradigms—Virtual reality
8
+
9
+ § 1 INTRODUCTION
10
+
11
+ Virtual reality devices make it possible to create computer-generated environments that replace the real world for different applications. For example, the user can interact with virtual object models more flexibly using various interaction methods than with real objects in the real environment. VR has become a standard technology in research, but it has not been fully exploited in professional use even though its potential has been recognized.
12
+
13
+ In the field of medicine, x-ray imaging is routinely used to diagnose diseases and anatomical changes as well as for scientific surveys [23]. In many cases 2D medical images are sufficient, but they can be complemented with 3D images for more complex operations where a detailed understanding of the 3D structures is needed.
14
+
15
+ When planning surgeries, medical doctors, surgeons, and radiologists study 3D images. Viewing 3D images on 2D displays can cause issues in controlling object position, orientation, and scaling. Using VR devices such as head-mounted displays (HMDs), the 3D images can be viewed and interacted with in a more easily perceived 3D environment than on a 2D display. For medical professionals to be able to do the same tasks in VR as they do in 2D, the interaction methods need to be studied properly. The interaction method needs to be accurate, reasonable, and suitable for the medical tasks. Because the domain is medicine, accuracy is crucial to avoid as many mistakes as possible. The interaction method needs to be reasonable so that doctors would use it in their daily work and could still focus on their primary tasks without paying too much attention to the interaction method itself. One typical task for doctors is marking anatomical structures and areas on the surface of the 3D model. The marked points define the operative area, or they can be used for training.
16
+
17
+ For 2D content, a mouse is one of the best options for interaction due to its capability to point at small targets with high accuracy and the fact that many users are already very experienced with this device [19]. A mouse cursor can be used for 3D pointing with ray-casting [26], which allows pointing at distant objects as well. The familiarity and accuracy make the mouse a worthy input method in VR, even though it is not a 3D input device. In addition, the controller has been identified as an accurate interaction method [7, 10] and is typically used in VR environments [14]. The controller enables direct manipulation, and the reach of distant objects differs from the mouse with ray-casting. Other devices, such as the stylus, have been studied in pointing tasks previously [19, 32], and therefore we investigated its performance together with a controller.
18
+
19
+ Cameras on HMDs enable interaction even without input devices. This is made possible by capturing hand position, movements, and location. Pointing at targets with a finger is natural for humans, so hand interaction was a convenient condition for our study: users do not need to learn any new hardware devices.
20
+
21
+ We chose a marking task to evaluate the three interaction conditions. The conditions were a standard mouse, bare hands, and a handheld controller with a VR stylus. All the methods were used in VR to keep the variation between the methods as low as possible, so that the comparison would concentrate on the interaction techniques. We had 12 participants who were asked to do simplified medical surgery marking tasks. To study the accuracy of the interaction methods, we created an experiment where the 3D model contained a predefined target that was marked (pointed at and selected). In a real medical case, the doctor would define the target, but then the accuracy could not be easily measured.
22
+
23
+ The paper is organized as follows: First, we review the background of object manipulation and marking, interaction methods in 3D environments, and jaw osteotomy surgery planning (Section 2). Then, we introduce the compared interaction methods and the used measurements (Section 3) and describe the experiment (Section 4), including apparatus, participants, and study task. Finally, the results are presented (Section 5) and discussed (Section 6).
24
+
25
+ § 2 BACKGROUND
26
+
27
+ § 2.1 OBJECT MANIPULATION AND MARKING
28
+
29
+ Object manipulation and object marking have been used separately as study tasks when different VR interaction methods have been studied. Sun et al. [24] used a 3D positioning task that involved object manipulation. When a mouse and a controller were compared for precise 3D positioning, the mouse was the better input device. Object marking has been studied without manipulation in [19]. Argelaguet and Andujar [1] have also studied 3D object selection techniques in VR.
30
+
31
+ § 2.2 INPUT DEVICES FOR OBJECT MANIPULATION AND MARKING
+
+ § 2.2.1 MOUSE
32
+
33
+ A mouse is a common, familiar, and accurate device for pointing at small targets in 2D content with high accuracy [19]. The mouse is also a common device for medical surgery planning [14]. Many studies have used a mouse cursor for 3D pointing with ray-casting [5, 14, 19, 26]. The ray-casting technique is easily understood, and it is a solution for reaching objects at a distance [17].
34
+
35
+ Compared to other interaction methods in VR, the discrepancy between the 2D mouse and a 3D environment has been reported as an issue [1], and Kim and Choi [13] mentioned that it lowers user immersion. In addition, use of a mouse usually forces the user to sit down at a table instead of standing. The user can rest their arms on the table while interacting with the mouse, which decreases hand fatigue. Johnson et al. [11] stated that fatigue with mouse interaction appears only after 3 hours.
36
+
37
+ Bachmann et al. [3] found that the Leap Motion controller has a higher error rate and higher movement time than the mouse. Kim and Choi [13] showed in their study that the 2D mouse performs well in VR in terms of working time, accuracy, ease of learning, and ease of use. Both Bachmann et al. and Kim and Choi found the mouse to be accurate, but on the other hand Li et al. [14] pointed out that in difficult marking tasks a small displacement of the physical mouse can lead to a large displacement on the 3D model in the 3D environment.
38
+
39
+ § 2.2.2 HANDS
40
+
41
+ Hand interaction is a common VR interaction method. Voigt-Antons et al. [31] compared free-hand interaction and controller interaction with different visualizations. Huang et al. [10] compared different interaction combinations of free hands and controllers. Both found that hand interaction has lower precision than controller interaction. With alternative solutions such as a Leap Motion controller [20, 33] or wearable gloves [34], hand interaction can be made more accurate. Physical hand movements create a natural and realistic experience of interaction [6, 10], and therefore hand interaction is still an area of interest.
42
+
43
+ § 2.2.3 CONTROLLERS
44
+
45
+ Controllers are the leading control inputs for VR [10]. When using controllers as the interaction method, marking and selecting are usually done with one of the triggers or buttons on the controller. Handheld controllers are described as stable and accurate devices [7, 10]. However, holding extra devices in the hands might become inconvenient. When interacting with hands or controllers in VR, arm fatigue is one of the main issues [1, 9]. Holding the arms up and carrying the devices increases arm fatigue.
46
+
47
+ § 2.2.4 VR STYLUS
48
+
49
+ Batmaz et al. [4] studied the Logitech VR Ink stylus as a selection method in virtual reality. They found that, with a precision grip, there are no statistical differences in marking when the distance to the target changes. Wacker et al. [32] presented, as one of their designs, a VR stylus for mid-air pointing, where selection happened by pressing a button. For object selection, users prefer a 3D pen over a controller in VR [19].
50
+
51
+ § 2.2.5 OTHERS
52
+
53
+ It is not always necessary to have an interaction device. Suresh et al. [25] used three voice commands to control gestures of a robotic arm in VR. They concluded that voice commands were useful especially for people with disabilities. In addition, voice is a useful input method in cases where hands and eyes are continuously busy [9], and speech is the primary mode of human communication [22]. Pfeuffer et al. [18] studied gaze as an interaction method together with hand gestures but found that both hand and gaze tracking lack tracking fidelity. In addition to poor eye tracking, Nukarinen et al. [16] stated that human-factor issues made gaze the least preferred input method.
54
+
55
+ § 2.3 JAW OSTEOTOMY SURGERY PLANNING
56
+
57
+ Cone Beam Computed Tomography (CBCT) is a medical imaging technique that produces 3D images that can be used in virtual surgery planning. Compared to earlier techniques used in medical surgery planning, such as cast models, virtual planning with CBCT images has extra costs and time requirements [8]. However, the technique offers several advantages for planning accuracy and reliability [23]. CBCT images can be used as 3D objects in VR for surgery planning with an excellent match to real objects [8]. Ayoub and Pulijala [2] reviewed different studies about virtual and augmented reality applications in oral and maxillofacial surgeries.
58
+
59
+ In virtual surgery planning, the procedures for surgery are implemented and planned beforehand. The real surgery is done based on the virtual plan. Common tasks in dental planning are specifying the location of impacted teeth, preventing nerve injuries, or preparing guiding flanges [23]. In VR this can be done by marking critical areas or drawing cutting lines onto the models. Virtual planning can be used in student education as well, where the procedures can be realistically practiced. Reymus et al. [21] found that students understood mouth anatomy better after studying 3D models in a VR environment than from a regular 2D image. The objects can be closer and bigger, and they can move in the depth direction in a 3D environment compared to a 2D environment [12].
60
+
61
+ Tasks like understanding the 3D object and marking critical areas on it need to be done in medical surgery planning. However, working with 3D objects in a 2D environment makes the task more difficult. Suitable interaction and marking methods help to understand 3D objects and perform the required tasks in VR. In this study, we evaluated three methods for VR object manipulation and marking and examined their performance in simplified medical surgery planning tasks.
62
+
63
+ § 3 METHOD
64
+
65
+ § 3.1 MOUSE
66
+
67
+ In the first interaction method, a regular mouse was used inside a VR environment (Figure 1). In the VR environment there was a visualized mouse model that the participant could move by manipulating the physical mouse, and which controlled the direction of a ray starting from the model. The ray was always visible in the Mouse condition.
68
+
69
+ Mouse was used one-handed, whereas the other two methods were two-handed. The mouse performed two functions, manipulation and marking, while in the other methods these functions were separated into different hands. In addition, Mouse used ray-casting (a ray from the mouse), while the two other methods did not; they used direct mid-air object manipulation.
70
+
71
+ The participant could rotate the object in three dimensions by moving the mouse while holding the right button. Translation in three dimensions was achieved with the scroll wheel and scroll button: scrolling the wheel zoomed in and out (translation in Z), and pressing the scroll button while moving the mouse translated the object up-down and sideways (translation in X and Y). Markings were made by pointing at the target with the ray and pressing the left button.
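+
+ To make the mapping concrete, the sketch below restates it as runnable Python pseudocode (the study software was built in Unity, whose source is not included here; the class name, event hooks, and gain values are our assumptions):
+
+ ```python
+ import numpy as np
+
+ class MouseMapping:
+     """Sketch of the described mapping: right drag rotates, the scroll wheel
+     translates in Z, and a scroll-button drag translates in X and Y."""
+
+     def __init__(self):
+         self.position = np.zeros(3)  # object position (X, Y, Z)
+         self.rotation = np.zeros(3)  # object rotation as Euler angles (deg)
+
+     def on_drag(self, dx, dy, right_down, scroll_button_down):
+         if right_down:  # rotate the object in three dimensions
+             self.rotation += np.array([dy, dx, 0.0]) * 0.25  # assumed gain
+         elif scroll_button_down:  # translate up-down and sideways (X, Y)
+             self.position += np.array([dx, -dy, 0.0]) * 0.001  # assumed gain
+
+     def on_scroll(self, ticks):  # zoom in and out (translate in Z)
+         self.position[2] += ticks * 0.01  # assumed gain
+ ```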
72
+
73
+ Pass-through video is not required to make the real-world mouse visible inside VR, even though the mouse was visualized in our study. After wearing the headset, the user could see a virtual mouse positioned where the physical mouse was located, so they could find and reach the device. When the user moved the physical mouse, the movements were translated into rotation of the ray emanating from the virtual mouse; in this way the user can cover a large space, similar to using a mouse on a 2D display. To improve ergonomics, the user could configure the desk and chair for their comfort.
74
+
75
+ § 3.2 HANDS
76
+
77
+ As the second interaction method, the participant used bare hands: the left hand for object manipulation and the right hand for object marking. The participant could pick up the 3D object with a pinch gesture of the left hand to rotate and move it. Marking was done with a virtual pen. In the VR environment, the virtual pen was attached to the participant's right palm, near the index finger (Figure 2, right), and moved according to the movements of the palm. When the virtual pen was moved close to the target, the pen tip changed its color to green to show that the pen was on the surface of the object. This visual feedback compensated for the possible lack of depth perception in mid-air gestures. The marking on the surface was made by pressing a virtual button on the pen with the index finger, after which a yellow sphere appeared where the pen tip pointed.
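+
+ The pen-tip feedback described above reduces to a simple distance test; a minimal sketch follows (the 5 mm threshold and the function name are our assumptions, not values reported in the paper):
+
+ ```python
+ import numpy as np
+
+ TOUCH_THRESHOLD_M = 0.005  # assumed touch distance (5 mm)
+
+ def pen_tip_color(tip_position, nearest_surface_point):
+     """Return green when the virtual pen tip touches the object surface."""
+     distance = np.linalg.norm(np.asarray(tip_position, dtype=float)
+                               - np.asarray(nearest_surface_point, dtype=float))
+     return "green" if distance <= TOUCH_THRESHOLD_M else "white"
+ ```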
78
+
79
+ < g r a p h i c s >
80
+
81
+ Figure 1: Mouse interaction method outside VR (left). Mouse marking method inside VR and the study task (right).
82
+
83
+ § 3.3 CONTROLLER AND VR STYLUS
84
+
85
+ The third interaction method was based on having a controller in the participant's left hand for object manipulation and a VR stylus in the right hand for marking (Figure 3). The participant grabbed the 3D object with a grab gesture around the controller to rotate and move the object. The markings were made with the physical VR stylus. The VR stylus was visualized in VR, as was the mouse, so the participant knew where the device was located. The participant pointed at the target with the stylus and pressed its physical button to make the mark. The press action was identical to the virtual pen press in the Hands method. There was haptic feedback when touching the physical VR stylus, which the virtual pen did not provide.
86
+
87
+ There have been some supporting results for using a mouse in VR [3, 13, 14, 17], but the 2D mouse is not fully compatible with the 3D environment [13]. We studied the ray method with Mouse to compare it against Hands and Controller+Stylus for 3D object marking. We also compared Hands, without any devices, to methods with a device in one or two hands. The marking gesture was designed to be similar in the Hands and Controller+Stylus methods so that the effect of the devices could be compared.
88
+
89
+ § 3.4 MEASUREMENTS AND THE PILOT STUDY
90
+
91
+ The participant was asked to make a marking as close to the target location as possible. We used the Euclidean distance to measure the distance between the target and the participant's marking. The task completion times were measured. The participant was able to re-mark the target if they were dissatisfied with the current marking. We counted how many re-markings were made to see whether any of the interaction methods required more re-marking than the others. We thus measured accuracy in two ways: as the distance from the target and as the number of dissatisfied markings.
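+
+ A minimal sketch of these two accuracy measures, under an assumed data layout (positions as 3D coordinates logged in meters, one list of marking positions per trial):
+
+ ```python
+ import numpy as np
+
+ def marking_error_mm(target_m, marking_m):
+     """Euclidean distance between target and final marking (meters to mm)."""
+     return float(np.linalg.norm(np.asarray(target_m) - np.asarray(marking_m))) * 1000.0
+
+ def n_remarkings(markings):
+     """Every marking beyond the first counts as a re-marking."""
+     return max(len(markings) - 1, 0)
+ ```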
92
+
93
+ A satisfaction questionnaire was filled in after each interaction method trial. There were a question and seven satisfaction statements that were evaluated on a Likert scale from 1 (strongly disagree) to 5 (strongly agree). The statements were grouped so that the question and the first statement concerned the overall feeling, and the rest of the statements concerned object manipulation and marking separately. The statements were:
96
+
97
+ * Would you think to use this method daily?
98
+
99
+ * Your hands are NOT tired.
100
+
101
+ * It was natural to perform the given tasks with this interaction method.
102
+
103
+ * It was easy to handle the 3D objects with this interaction method.
104
+
105
+ * The interaction method was accurate.
106
+
107
+ * The marking method was natural.
108
+
109
+ * It was easy to make the marking with this marking method.
110
+
111
+ * The marking method was accurate.
112
+
113
+ The statements were designed to measure fatigue, naturalness, and accuracy, as these have been measured in earlier studies [1, 6, 10] as well. Accuracy was also measured from the logged data to see whether the objective and subjective results are consistent. With these statements, it was possible to measure easiness and suitability for daily use, which cannot be derived from the objective data.
114
+
115
+ The questionnaire also contained open-ended questions about positive and negative aspects of the interaction method. At the end, the participant was asked to rank the interaction methods in order from the most liked to the least liked.
116
+
117
+ A pilot study was arranged to ensure that the tasks and the study procedure were feasible. Based on the findings of the pilot study, we modified the introduction to be more specific and added a mention of the measured features. In addition, we added the possibility to rotate the 3D object even after the mouse ray moved off the object. The speed of the mouse ray in the VR environment was increased so that it better matched the movements of the real mouse.
118
+
119
+ § 3.5 STATISTICAL MEASURES
120
+
121
+ We used two different statistical tests to analyze possible statistically significant differences between parameter sets. For objective data (completion times, number of markings, and accuracy) we used the paired t-test. For data from the evaluation questionnaires (fatigue, daily use, naturalness, easiness, and subjective accuracy) we first used the Friedman test to see whether any statistically significant differences appeared, and then the Wilcoxon signed-rank test, as it does not assume the numbers to be on a ratio scale or normally distributed.
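+
+ A sketch of this analysis pipeline in Python with SciPy (the data layout and helper names are our assumptions, and the Bonferroni-corrected limit applied in Section 5 is included for completeness; this is not the authors' analysis code):
+
+ ```python
+ from itertools import combinations
+ from scipy import stats
+
+ def analyze_objective(cond_a, cond_b):
+     """Paired t-test for objective measures (times, markings, accuracy)."""
+     return stats.ttest_rel(cond_a, cond_b)
+
+ def analyze_subjective(ratings, alpha=0.05):
+     """Friedman test across all conditions, then pairwise Wilcoxon
+     signed-rank tests with a Bonferroni-corrected alpha."""
+     stat, p = stats.friedmanchisquare(*ratings.values())
+     results = {"friedman": (stat, p)}
+     if p < alpha:
+         corrected = alpha / 3  # three pairwise comparisons
+         for (name_a, a), (name_b, b) in combinations(ratings.items(), 2):
+             w, pw = stats.wilcoxon(a, b)
+             results[(name_a, name_b)] = (pw, pw < corrected)
+     return results
+ ```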
122
+
123
+ < g r a p h i c s >
124
+
125
+ Figure 2: Hands interaction method outside VR (left). Hands marking method inside VR and the study task (right).
126
+
127
+ < g r a p h i c s >
128
+
129
+ Figure 3: Controller interaction method outside VR (left). Stylus marking method inside VR and the study task (right).
130
+
131
+ The study software logged times with millisecond resolution and distances in meters. To clarify the analysis, we converted these to seconds and millimeters.
132
+
133
+ § 4 EXPERIMENT
134
+
135
+ § 4.1 PARTICIPANTS
136
+
137
+ We recruited 12 participants for the study. The number of participants was decided based on a power analysis for the paired t-test and the Wilcoxon signed-rank test, assuming a large effect size, a power level of 0.8, and an alpha level of 0.05. The post hoc calculated effect sizes (Cohen's d for the paired t-test, R value for the Wilcoxon signed-rank test) are reported together with the p-values in the Results (Section 5) for comparison to the assumed large effect size. Ten of the participants were university students and two were full-time employees in fields not related to medicine or dentistry. Ages varied from 21 to 30 years; the mean age was 25 years. There were 6 female participants and 6 male participants. Earlier VR experience was asked on a scale from 0 to 5, and the mean was 1.75. Two participants did not have any earlier experience. One participant was left-handed but was accustomed to using the mouse with the right hand; the other participants were right-handed.
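+
+ For illustration, an a priori power analysis of this kind can be run with statsmodels (the exact effect size the authors assumed is not stated; 0.8 below is simply the conventional "large" threshold for Cohen's d):
+
+ ```python
+ from statsmodels.stats.power import TTestPower
+
+ # Required number of paired observations for a two-sided paired t-test.
+ n = TTestPower().solve_power(effect_size=0.8, alpha=0.05, power=0.8)
+ print(n)  # the required n shrinks as the assumed effect size grows
+ ```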
138
+
139
+ § 4.2 APPARATUS
140
+
141
+ § 4.2.1 SOFTWARE, HARDWARE, AND HAND TRACKING
142
+
143
+ The experiment software was built using Unity [27]. With all methods we used a Varjo VR-2 Pro headset [29]. For the Hands interaction, hands were tracked by the headset's integrated vision-based hand tracking system, an Ultraleap Stereo IR 170 sensor mounted on the headset. For Controller+Stylus, we used a Valve Index controller [28] together with a Logitech VR Ink stylus [15]. These were tracked by SteamVR 2.0 base stations [30] around the experiment area.
144
+
145
+ § 4.2.2 OBJECT MANIPULATION AND OBJECT MARKING
146
+
147
+ The study task combined two phases: an object manipulation phase and an object marking phase. In the object manipulation phase, the participant either selected the 3D object with the mouse ray or pinched or grabbed it with a hand gesture. The 3D objects did not have any physics and floated in mid-air. By rotating and translating the object, the participant could view it from different angles.
148
+
149
+ Instead of only pointing at the target, the selection needed to be confirmed. This allowed us to measure the marking accuracy and whether the user understood the 3D target's location relative to the pointing device. The participant could either release the 3D object in mid-air or hold it in their hand when Hands or Controller+Stylus was used in the marking task. The marking was made either by pointing with the mouse ray and clicking the left button, by touching the target with the virtual pen and selecting with a hand gesture, or by touching and selecting with the VR stylus.
150
+
151
+ § 4.3 PROCEDURE
152
+
153
+ First, the participant was introduced to the study, asked to read and sign a consent form, and asked to fill in a background information form. For all conditions, the facilitator demonstrated the system functions and controls. Each participant had an opportunity to practice before every condition. The practice task was to move and rotate a cube with several target spheres, and to mark those targets as many times as needed to get to know both the interaction and the marking methods. After the participant felt confident with the method, they pressed the Done button and the real study task appeared.
154
+
155
+ The participant was asked to find and select a hidden target mark on the surface of each 3D object model. The target was visible the whole time, whereas the marking was created by the participant. When the target was found, it was first pointed at and then selected. The aim was to place the participant's sphere (yellow) inside the target sphere (red) (see Figures 1 right, 2 right, and 3 right). Each 3D object had one target on it, and the task was repeated five times per condition. The order of the 3D objects was the same for all participants: lower jaw, heart, skull, tooth, and skull. The order of the interaction methods was counterbalanced between participants using balanced Latin squares (a sketch of such an ordering is given below) to compensate for possible learning effects. The target locations on the 3D objects were predefined and presented in the same order to all participants.
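+
+ For reference, one common construction of such a balanced ordering is sketched here (our illustration; the paper does not state which construction was used):
+
+ ```python
+ def balanced_latin_square_order(n_conditions, participant):
+     """Condition order (indices 0..n-1) for one participant; across
+     participants, positions and carry-over pairs are balanced."""
+     order, j, h = [], 0, 0
+     for i in range(n_conditions):
+         if i < 2 or i % 2 != 0:
+             val, j = j, j + 1
+         else:
+             val, h = n_conditions - h - 1, h + 1
+         order.append((val + participant) % n_conditions)
+     # With an odd number of conditions, mirror every other participant.
+     if n_conditions % 2 != 0 and participant % 2 != 0:
+         order.reverse()
+     return order
+
+ # Example: orders for 12 participants and the 3 conditions of this study.
+ orders = [balanced_latin_square_order(3, p) for p in range(12)]
+ ```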
156
+
157
+ The task required both object manipulation (rotating and translating) and marking (pointing and selecting). By combining the manipulation and marking tasks, we wanted to simulate a task that medical professionals would perform during virtual surgery planning. Both object manipulation and marking are needed by medical professionals. The marking is relevant when selecting specific locations and areas of a 3D model, and it requires accuracy to place the marks in the relevant locations. This medical marking task does not differ from regular marking tasks in other contexts as such, but the accuracy requirements are higher. By manipulating the 3D model, the professional can look at the pointed area from different angles to verify its exact location in the 3D environment.
158
+
159
+ A satisfaction questionnaire was filled in after each interaction method trial, and after all three trials a final questionnaire was used to rank the conditions.
160
+
161
+ § 5 RESULTS
162
+
163
+ In this section, we report the findings of the study. First, we present the objective results from data collected during the experiment, and then the subjective results from the questionnaires.
164
+
165
+ § 5.1 OBJECTIVE RESULTS
166
+
167
+ The task completion times (Figure 4, top left) include both object manipulation and marking. They showed some variation, but the distributions of median values for each interaction method were similar and there were no significant differences. The completion time varied slightly depending on how much prior VR experience the participant had, but there were no statistically significant differences.
168
+
169
+ The number of markings made before task completion varied between the interaction methods (Figure 4, top right). The median values for the Mouse, Hands, and Controller+Stylus conditions were 6.5, 12, and 7 markings, respectively. However, there were no statistically significant differences. Some participants made many markings at a fast pace (2-3 markings per second), leading to a high total number of markings.
170
+
171
+ There were some clear differences in final marking accuracy between the interaction methods (Figure 4, bottom). The median values for the Mouse, Hands, and Controller+Stylus methods were 3.2, 5.9, and 4.2 millimeters, respectively. The variability between participants was highest with the Hands method. We found a statistically significant difference between the Mouse and Hands methods (p-value 0.004, Cohen's d 1.178¹) using a paired t-test and a Bonferroni-corrected p-value limit of 0.017 (= 0.05/3).
172
+
173
+ § 5.2 SUBJECTIVE DATA
174
+
175
+ Friedman tests showed statistically significant differences in daily use (p-value 0.002), interaction naturalness (p-value 0.000), interaction easiness (p-value 0.001), interaction accuracy (p-value 0.007), marking easiness (p-value 0.039), and ranking (p-value 0.000). In the evaluations of tiredness there were no significant differences (Figure 5, left). Most participants did not feel tired using any of the methods, but the experiment was rather short.
176
+
177
+ Pairwise Wilcoxon signed-rank tests of daily use showed significant differences (Figure 5, right): between the Mouse and Controller+Stylus methods (p-value 0.015, R 0.773²) and between the Hands and Controller+Stylus methods (p-value 0.003, R 1.000).
178
+
179
+ We asked the participants to evaluate object manipulation and marking separately. In the object manipulation evaluation, there were statistically significant differences in naturalness between Controller+Stylus and Mouse (p-value 0.003, R 1.000) and between Controller+Stylus and Hands (p-value 0.009, R 0.879). In object manipulation easiness, Controller+Stylus differed statistically significantly from both Mouse and Hands (p-value 0.003, R 1.000 for both comparisons); see Figure 6. In the manipulation accuracy evaluation, we found a statistically significant difference between the Controller+Stylus and Hands methods (p-value 0.003, R 1.000). In the object marking evaluation (Figure 7), the only significant difference was between the Controller+Stylus and Mouse methods in easiness (p-value 0.009, R 1.000).
180
+
181
+ Multiple participants commented that the controller interaction felt stable and that it was easy to move and rotate the 3D model with the controller. The participants also commented that holding a physical device whose weight could be felt increased the feeling of naturalness. Not all comments agreed: one participant felt the VR stylus was accurate, while another said it felt clumsy.
182
+
183
+ When asked, 11 out of 12 participants ranked Controller+Stylus as the most liked method. The distribution of ranking values is shown in Table 1. The ranking values of the Controller+Stylus method were statistically significantly different from Mouse (p-value 0.008, R 0.885) and Hands (p-value 0.003, R 1.000).
184
+
185
+ Table 1: The number of mentions of different rankings of the interaction methods when asked for the most liked (1st), the second most liked (2nd), and the least liked (3rd) method.
186
+
187
+ Condition           1st   2nd   3rd
+
+ Mouse                1     7     4
+
+ Hands                0     4     8
+
+ Controller+Stylus   11     1     0
204
+
205
+ § 6 DISCUSSION
206
+
207
+ In this study, we looked for the most feasible interaction method in VR for object manipulation and marking in a medical context. The Controller+Stylus method was overall the most suitable for a task that needs both object manipulation and marking. Controller+Stylus was the most liked in all subjective features, while the Mouse and Hands conditions were evaluated very similarly. The smallest number of markings was made with Controller+Stylus, but no significant differences were found. There were statistically significant differences between the methods in daily use, interaction naturalness, and easiness. Controller+Stylus was statistically significantly more accurate in object manipulation than Hands (p-value 0.003), and easier to use than Mouse (p-value 0.003). Without earlier experience with the VR stylus, the participants had difficulties finding the correct button when marking with the stylus: the physical stylus cannot be seen when wearing the VR headset, and its button could not be felt clearly. Even though the Controller+Stylus combination was evaluated as natural and the most liked method in this study, hand-held devices may feel inconvenient [10]. In our study, some participants seemed to like the physical feel of the devices. However, this result was based on the subjective opinions of the participants and might change depending on the use case or devices.
208
+
209
+ ¹ Cohen's d ≥ 0.8 is considered a large effect size
210
+
211
+ ² An R value ≥ 0.5 is considered a large effect size
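+
+ Both effect sizes can be computed with the standard formulas (our sketch, not the authors' code; for the Wilcoxon test we assume the common convention R = Z / sqrt(N), recovering Z from the two-sided p-value):
+
+ ```python
+ import numpy as np
+ from scipy import stats
+
+ def cohens_d_paired(a, b):
+     """Cohen's d for paired samples: mean difference over SD of differences."""
+     diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
+     return diff.mean() / diff.std(ddof=1)
+
+ def wilcoxon_r(a, b):
+     """Effect size R = Z / sqrt(N) for the Wilcoxon signed-rank test."""
+     result = stats.wilcoxon(a, b)
+     z = abs(stats.norm.ppf(result.pvalue / 2.0))
+     return z / np.sqrt(len(a))
+ ```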
212
+
213
+ < g r a p h i c s >
214
+
215
+ Figure 4: The task completion times for different conditions (top left). The median values for each participant are rather similar between the methods. There were two outlier values (by the same participant, for Mouse and Hands conditions) that are removed from the visualization. The number of markings per five targets (top right). There were some differences between the interaction methods (the median value for Hands was higher than for the other methods), but no significant differences. The marking accuracy (bottom). There were some clear differences between the interaction methods in the final marking accuracy.
216
+
217
+ < g r a p h i c s >
218
+
219
+ Figure 5: The evaluation of fatigue (left). None of the methods were found to be particularly tiring. The evaluation of possible daily use (right). Controller+Stylus was significantly more usable for daily use than the other methods.
220
+
221
+ There are many possible reasons for the low hand tracking accuracy. The inaccuracy of Hands can be seen in the large number of markings and the large spread of task completion times, as participants were not satisfied with their first marking. Hands was the only method with which only one participant completed the task with the minimum of 5 markings, whereas with the other methods several participants did. One explanatory factor can be the lack of hand tracking fidelity, which has also been noticed in other studies [10, 34]. In addition, inaccuracy in the human motor system contributes to the inaccuracy of hands [9]. A vision-based hand tracking system that uses a camera on the HMD does not recognize the hand gesture reliably enough, and as a result the participant must repeat the same gesture or movement multiple times to succeed. This extra work also increases fatigue with Hands. Even though fatigue was low with all interaction methods, this study did not measure the fatigue of long-term activity. These are clear indications that Hands interaction needs further development before it can be used in tasks that need high marking accuracy. Several earlier studies have reported the inaccuracy of hands compared to controllers [9, 10, 34].
222
+
223
+ < g r a p h i c s >
224
+
225
+ Figure 6: The evaluation of interaction method naturalness (left), easiness (middle), and accuracy (right). Controller+Stylus was the most liked method in these features.
226
+
227
+ < g r a p h i c s >
228
+
229
+ Figure 7: The evaluation of marking method naturalness (left), easiness (middle), and accuracy (right). Median values in these features are rather similar, and significant difference was found only in marking easiness.
230
+
231
+ Haptic feedback was provided with Mouse and when marking with the VR stylus. With Hands there was only visual feedback. The lack of haptic feedback might have affected the marking accuracy as well, because the accuracy was much better with the physical stylus. Li et al. [14] found that with low marking difficulty, the mouse with a 2D display was faster than a kinesthetic force-feedback device in VR. For high marking difficulty, their other VR interface, which used a VR controller with vibrotactile feedback, was better than the 2D interface. They found that a mouse on a 2D display has fast pointing capability, but in our study the task completion times did not vary between Mouse and the other methods. Li et al. also noted that manipulating the viewing angle is more flexible when wearing an HMD than with a mouse on a 2D display: in VR interfaces the participant can rotate the 3D object while changing the viewing angle by moving their head. In our study, all methods used an HMD, so changing the viewing angle was equally flexible.
232
+
233
+ Mouse was a statistically significantly more accurate marking method than Hands. Mouse was not affected by some of the issues noticed with Hands or Controller+Stylus. With Mouse, the fact that the device cannot be seen during use was not felt to be problematic. There were no sensor fidelity issues with Mouse, and it was a familiar device to all participants. Only the ray that replaced the cursor was an unfamiliar feature and caused some problems. We found that the ray worked well with simple 3D models, but there were many difficulties with complex models, where the viewing angle needed to be exactly right to reach the target. If any part of the 3D model blocked the ray, the target could not be marked. When the target was easy to mark, the accuracy of Mouse was high. It can be stated that Mouse was an accurate method in VR, but all other measured properties of Controller+Stylus were better.
234
+
235
+ Both the target and the marking were spheres in the 3D environment. During the study, we noticed that when a participant placed their marking in the same location as the target, the marking sphere disappeared inside the target sphere. This caused uncertainty about whether the marking was lost or in the center of the target, and it may have affected the results when participants re-marked just to be able to see their marking, which was then no longer in the center of the target. In future studies the marking sphere should be designed to be bigger than the target and transparent, so that the participant can be sure about the location of both spheres.
236
+
237
+ Our focus was on comparing three different interaction and marking methods and their suitability for the medical marking task. To simplify the experimental setup, the experiment was conducted with simplified medical images, which may have led to optimistic results for the viability of the methods. Even then, there were some problems with the Mouse interaction method. To confirm that the results also hold for more realistic content, a similar study should be conducted in future work with authentic material, for example original CBCT images in VR instead of the simplified ones.
238
+
239
+ § 7 CONCLUSION
240
+
241
+ 3D medical images can be used in VR environments to plan surgeries with good results. During the planning process, one needs to interact with the 3D models and be able to make high-accuracy markings on them. In this study, we evaluated the feasibility of three VR interaction methods, Mouse, Hands, and the Controller+Stylus combination, in virtual reality. Based on the results, the Valve Index controller and Logitech VR Ink stylus combination was the most feasible for tasks that require both 3D object manipulation and high marking accuracy in VR. This combination did not have issues with complex 3D models, and its sensor fidelity was better than with Hands interaction. Statistically significant differences were found between the controller combination and the other methods.
242
+
243
+ Hand-based interaction was the least feasible for this kind of use according to the collected data; participants evaluated the Hands and Mouse methods as almost equally feasible. With the current technology, free-hand usage cannot be recommended for accurate marking tasks. Mouse interaction was more accurate than Controller+Stylus, so in detailed tasks Mouse could replace free-hand interaction. However, the discrepancy between the 2D mouse and the 3D environment needs to be solved before Mouse can be considered a fully viable interaction method in VR.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BBrgJZF4pfc/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,293 @@
1
+ # Context and Minimalism: User Evaluations on Two Interpersonal Telepresence Systems
2
+
3
+ Category: Research
4
+
5
+ ## Abstract
6
+
7
+ We present a field study in which 10 pairs of adults employed two prototypes of an interpersonal telepresence system: one catered to the Viewer, the individual experiencing a remote environment, and one catered to the Streamer, the individual sharing the remote experience with the Viewer through technological means. Based on previous work, our design choices reflect common values found from both the Viewer's and the Streamer's perspectives. We seek to identify the key value tensions and trade-offs in the designs through the employment of similar design choices. Through an applied scenario, we learned that users converge on environmental context and minimalism as the prime factors that should influence a general framework for the considerations to be held when designing a telepresence system catered to one-to-one interactions.
8
+
9
+ Keywords: Telepresence, Live-streaming, Remote experiences
10
+
11
+ Index Terms: Human-centered computing-Visualization-Visualization techniques;
12
+
13
+ ## 1 INTRODUCTION
14
+
15
+ Physical presence has become a point of contention in multiple communities due to the COVID-19 pandemic. Many people who were previously present with others in the physical world are now either confined to their living space or concerned about the health effects that re-involving themselves in society may yield [1]. To cope with this confinement, many are limited to virtual forms of communication and interaction, such as video-conferencing applications like Zoom. To mitigate the lack of in-person interactions, the need for remote collaboration and interaction techniques is becoming increasingly important. Due to this need, we are witnessing an emergence of remote interaction technologies ranging from extensions of video conferencing to mixed reality collaboration experiences.
16
+
17
+ The facilitation of developing methods to simulate presence in a remote environment has led to the ideation of telepresence, a term coined by Marvin Minsky in 1980 to denote the use of technology to simulate a remote physical location despite not being physically located there [11]. This provided the formal definition for researchers to begin conceptualizing and applying their techniques to further the research of telepresence-related tasks [9, 14]. Telepresence now spans a multitude of fields and disciplines, such as medical and industrial applications. Within each field, we are witnessing a range of novel implementations that collectively propose effective remote experiences. However, when considering interpersonal telepresence systems, i.e., telepresence systems that focus on one-to-one remote interactions in which one individual attempts to experience a remote environment together with an individual located in that environment, we are met with concerns regarding how socially present the remote individual feels and how socially comfortable the physically present individual is when hosting the technology required to fulfill the remote experience [4-6, 8, 12-16, 18].
18
+
19
+ With novelty having been a main focal point within the telepresence research community, we are now witnessing a divergence in which aspects of these telepresence systems need to be considered. First, are researchers considering the social effects on participants involved in these telepresence systems and, by extension, interpersonal telepresence systems? Second, researchers must consider whether the technological implementations being employed are what end-users truly desire and value in a telepresence system [14].
20
+
21
+ To answer both these questions, our research intends to investigate the social and technological variables to consider in the development cycle of a future interpersonal telepresence system. To achieve this, we carried out a study with two interpersonal telepresence systems, each designed to be used by a pair of participants. The purpose for designing two interpersonal telepresence systems is to provide two platforms that cater to the two types of users of our systems: the Streamer and the Viewer. We denote the Streamer as the individual physically present in the environment to administer the technology necessary to fulfill the remote experience between them and the Viewer. We denote the Viewer as the individual utilizing technology to view and possibly interact with the remote environment.
22
+
23
+ Both systems we propose were designed with the idea of minimalism in terms of the hardware and equipment needed to carry out the remote experience. We denote the two systems as either Viewer-centric or Streamer-centric. Each design is based on techniques employed in previous literature. Our end goal was to expose our participants to two (2) variations of interpersonal telepresence setups and then prompt them on their experience as well as any improvements they would prefer, in an effort to draw conclusions about an optimal system. We propose the following research questions to be answered through our study:
24
+
25
+ - RQ1: How do Streamer participants approach social interactions while using the interpersonal telepresence systems?
26
+
27
+ - RQ2: In what real-world contexts do Streamers and Viewers truly value utilizing an interpersonal telepresence system?
28
+
29
+ - RQ3: How can we balance social presence of the Viewer and social comfort of the Streamer in a telepresence system?
30
+
31
+ As a result of our study, we found that users converged towards the idea that interpersonal telepresence setups should be context-sensitive. Furthermore, with regard to the interactions and technologies utilized, interpersonal telepresence setups should allow the Streamer and Viewer to maintain a level of interactive independence from one another. In the following sections we discuss the finer details of our results as well as the implications for future interpersonal telepresence systems.
32
+
33
+ ## 2 RELATED WORK
34
+
35
+ Previous work on interpersonal telepresence systems shows that the Viewer's needs are prioritized [14]. With this in mind, we sought to develop and deploy two systems catering to each stakeholder separately. In this section, we review the relevant literature in the areas of telepresence and computing devices.
36
+
37
+ ### 2.1 Identifying The Appropriate Telepresence Setup
38
+
39
+ Telepresence refers to the ability for someone to feel as if they are present in another physical space without actually being there. It can be supported in many forms, from 3D avatars in virtual reality [10] to conventional video-conferencing solutions [6] to full 360-degree video streaming [18]. Our study uses a form of video conferencing, since it has been found to socially connect participants more than virtual reality, provides a higher-fidelity system that is more ubiquitous and familiar to participants, and its ethical and social risks are better known than those of virtual reality telepresence [2]. As found by Tang et al. and Heshmat et al., 360-degree video streaming provides more social connection between the Viewer and Streamer due to greater visual immersion, at the expense of verbal communication, which would require additional hardware to support [4, 18]. In contrast to these designs, Teo et al. conducted a collaboration study comparing 360-degree systems to 3D scene reconstruction systems [20]. With these systems employing nonverbal communication techniques (i.e., hand gestures and visual cues), social presence and task completion rate were found to be significantly higher for the 360-degree camera setup.
40
+
41
+ Communication is another prime element of these designs, with video conferencing delivering audio communication. Many forms of video conferencing exist, ranging from regular video calls using proprietary applications [16] to off-the-shelf software such as Skype¹ [8, 17, 19].
42
+
43
+ In the past, interpersonal telepresence scenarios centered on a certain task or series of tasks [5, 6, 8, 16, 18]. We are interested in observing the experiences that users undergo while applying such setups in a real-world experience, and in learning which technologies and scenarios users truly value. Considering this, we employed two setups: one using both video conferencing and a 360-degree camera to cater to the Viewer, since it provides more Viewer immersiveness, and the other using only video conferencing to cater to the Streamer, as it requires fewer items for the Streamer to hold or carry.
44
+
45
+ ### 2.2 Choosing Necessary Hardware
46
+
47
+ The key piece of hardware used to provide the video feed in many systems is a camera. However, in various telepresence setups, at least one user of the system is mobile (as in our system), leading to the use of cameras such as 360-degree cameras [7, 8, 18], smartphone cameras [6], or even hand-held cameras [5]. Camera quality is important to consider when using mobile cameras: Kim et al. [8] show that when presented with various media types, the Viewer chose other media types over the video feed since it was of notably poor quality, indicating that good camera quality is vital to a system.
48
+
49
+ To facilitate communication, smartphones [6, 8, 16, 18], desktops [8, 15], and even telepresence robots [4, 5] have been the main media that previous designs have centered on. Smartphones and desktops provide varying functionality that supports communication efficiently (i.e., an internet connection and an integrated camera), with desktops providing larger displays with little mobility and smartphones vice versa; telepresence robots, in contrast, serve mainly as a medium for physically supporting and moving hardware.
50
+
51
+ With all these designs in mind, the hardware in both of our telepresence setups is similar: both share a desktop for the Viewer and a smartphone for the Streamer, the difference being that the Viewer-centric setup also has an additional smartphone and a 360-degree camera to facilitate immersiveness.
52
+
53
+ ## 3 METHODS
54
+
55
+ Previous work in the interpersonal telepresence space has shown that the design of these systems tends to prioritize the Viewer's needs over the Streamer's needs [14]. To address this concern, we pursued a field-study-based approach in which pairs of participants interacted with two interpersonal telepresence systems, each catering to one stakeholder of the pair dynamic. In the following sections, we discuss the remote streaming activities that aimed to give participants perspective and real-world experience with interpersonal telepresence systems, and the conclusions and values they ultimately converged towards in regard to experience and features.
56
+
57
+ ![01963e7e-e9e0-73de-8984-6186a1d70f17_1_1030_148_512_848_0.jpg](images/01963e7e-e9e0-73de-8984-6186a1d70f17_1_1030_148_512_848_0.jpg)
58
+
59
+ Figure 1: Streamer-Centric Design: Mobile device, adjustable low-profile backpack, and battery bank inside backpack
60
+
61
+ ### 3.1 Participant Demographics
62
+
63
+ We distributed information about our study through our local university and online messaging forums in our city. Interested individuals were asked to fill out an online form to ensure eligibility and indicate the date and time of the session they would be available for. Participants were required to be at least 18 years old, have a mobile device that supports calling and headphones, have normal or corrected-to-normal vision, speak English, be able to walk for 40 minutes, and be able to lift and carry 10 pounds. We also asked participants to provide their age and gender. Each pair of participants was required to know each other as friends, family members, or significant others prior to the session. Table 2 highlights our participant demographics in detail.
64
+
65
+ ### 3.2 Apparatus
66
+
67
+ In the following sections we describe the apparatus utilized to carry out our Viewer-/Streamer-centric remote streaming activities. In addition to basing our design choices on previous literature, we utilized off-the-shelf technologies to aid user familiarity with the platforms and to reduce the possibly steep learning curves of interacting with immersive technologies.
68
+
69
+ #### 3.2.1 Streamer-centric apparatus
70
+
71
+ For the Streamer-centric condition, Streamer participants wore a low-profile backpack and were equipped with a mobile device to facilitate a Zoom² video-conferencing call for verbal and visual presence of the Viewer. The setup also included a power bank stored in the backpack to charge the mobile device if needed during the remote streaming experience. These design choices were influenced by previous work that focused on utilizing ubiquitous devices, such as a cellular device, to facilitate a social telepresence experience [5, 6, 8, 16]. Figure 1 shows the setup the Streamer used in the Streamer-centric condition.
72
+
73
+ ---
74
+
75
+ ¹ https://www.skype.com/en/
76
+
77
+ ² https://zoom.us/
78
+
79
+ ---
80
+
81
+ ![01963e7e-e9e0-73de-8984-6186a1d70f17_2_253_147_513_832_0.jpg](images/01963e7e-e9e0-73de-8984-6186a1d70f17_2_253_147_513_832_0.jpg)
82
+
83
+ Figure 2: Viewer-Centric design: 360-degree camera, adjustable backpack, two (2) mobile devices (one in hand and one in backpack), and a battery bank inside backpack
84
+
85
+ #### 3.2.2 Viewer-centric apparatus
86
+
87
+ For the Viewer-centric condition, the Streamer was equipped with a backpack, two (2) mobile devices, an Insta360 One X2 360-degree camera³, and a power bank stored in the backpack for charging purposes. Similar to the Streamer-centric condition, a Zoom video-conferencing call was used to facilitate verbal communication and visual presence. The addition of the 360-degree camera required a private YouTube⁴ live-stream as the streaming platform. We chose YouTube as our live-streaming platform because it supports a 360-degree video feed and allows the Viewer to pan around and interact with varying viewing angles as they please. The design choices for the Viewer-centric condition were influenced by previous work that investigated the collaborative capabilities between a Streamer and Viewer through a 360-degree camera medium [4, 18]. Figure 2 shows the setup the Streamer utilized to host the stream in the Viewer-centric condition. For both conditions, wireless earbuds facilitated two-way audio communication between the Streamer and the Viewer.
88
+
89
+ ### 3.3 Study Procedure
90
+
91
+ Prior to their session, we required pairs of participants to disclose a location of interest where they would like to spend time together while using the interpersonal telepresence systems. The location had to be either on our local university campus or within a 10-minute driving radius of it. We also required participants to choose their role in the Streamer and Viewer dynamic, as the procedures differ between roles.
92
+
93
+ On the day of the session, the Streamer met a researcher at the outlined location of interest, while the Viewer met another researcher in our lab. Each researcher provided a description of the study, a consent form participants were required to sign, and instructions on how to use the Streamer-centric and Viewer-centric interpersonal telepresence setups. The participants were also informed that the entire session would be both audio and video recorded and that all stored audio and video would be viewable and accessible only by the researchers and their research team.
94
+
95
+ During the remote streaming experience, the Streamer hosted both types of interpersonal telepresence setups. The order was randomized between groups. To facilitate a natural experience between the Streamer and Viewer, the researcher did not interfere or chaperone the Streamer throughout the remote streaming experience. The Streamer hosted the remote experience for 20 minutes per setup, for a total of 40 minutes of remote streaming.
96
+
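+ As a concrete illustration of this counterbalancing step, the sketch below (our own example; the paper does not report its randomization procedure in code) assigns each pair a reproducible random setup order:
+
+ ```python
+ # Illustrative sketch: randomize which telepresence setup each pair starts
+ # with, so order effects average out across the ten sessions.
+ import random
+
+ SETUPS = ["Streamer-centric", "Viewer-centric"]
+
+ def assign_order(session_id: int, seed: int = 42) -> list:
+     """Return a randomized setup order for one pair (20 minutes per setup)."""
+     rng = random.Random(seed + session_id)  # reproducible per session
+     order = SETUPS.copy()
+     rng.shuffle(order)
+     return order
+
+ for session in range(1, 11):  # ten pairs, as in the study
+     print(session, assign_order(session))
+ ```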
97
+ For the Streamer-centric interpersonal telepresence setup, the Viewer viewed and communicated with the Streamer through a Zoom video-conference call, which also carried the audio. For the Viewer-centric interpersonal telepresence setup, the Viewer could view and pan around the Streamer's environment through a private YouTube 360-degree video stream, and could additionally see and communicate with the Streamer through a Zoom video-conference call.
98
+
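+ The panning the Viewer performs in the 360-degree player amounts to remapping a chosen look direction onto the equirectangular video frame. The sketch below (our own illustration; the resolution and angle conventions are assumed, not taken from the paper) shows this mapping:
+
+ ```python
+ # Illustrative sketch: map the Viewer's yaw/pitch to the pixel at the center
+ # of the viewport in an equirectangular 360 frame.
+ def view_to_equirect(yaw_deg: float, pitch_deg: float,
+                      width: int = 3840, height: int = 1920):
+     """Return (x, y) pixel coordinates for a given look direction."""
+     u = ((yaw_deg + 180.0) % 360.0) / 360.0  # longitude -> horizontal fraction
+     v = (90.0 - pitch_deg) / 180.0           # latitude  -> vertical fraction
+     return int(u * (width - 1)), int(v * (height - 1))
+
+ print(view_to_equirect(0.0, 0.0))   # straight ahead -> center of the frame
+ print(view_to_equirect(90.0, 0.0))  # pan 90 degrees to the right
+ ```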
99
+ At the end of the remote streaming activity, the Streamer and Viewer were each given a semi-structured interview, independently of one another, and asked a series of questions about their experiences with both interpersonal telepresence setups for approximately 30 minutes. Table 1 outlines the interview questions participants were asked based on their role in the interpersonal telepresence relationship. The questions were asked in the same order for each pair of participants. Participants were given a $15 Amazon gift card as compensation. All of the activities were audio and video recorded for analysis.
100
+
101
+ ### 3.4 Data Analysis Approach
102
+
103
+ All sessions were audio and video recorded and were transcribed by the authors with the assistance of Zoom. We conducted an inductive thematic analysis on both the remote streaming activity and the semi-structured interview to better understand what our users truly valued when reflecting on the proposed setups [3]. We utilized open coding to log participants' explicit and implicit values from their utterances. We are interested in what features users truly want in an interpersonal telepresence system as well as what real-world contexts are best suited for more novel systems.
104
+
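+ To make the open-coding bookkeeping concrete, the sketch below (a hedged illustration with invented rows, not the authors' analysis scripts) tallies how many distinct participants mention each code, which is how counts like the N values in Table 3 can be derived:
+
+ ```python
+ # Illustrative sketch: count distinct participants per open code.
+ from collections import defaultdict
+
+ # (participant, code) pairs produced by open coding of utterances (invented)
+ coded_utterances = [
+     ("Viewer 9", "independent and interactive viewing"),
+     ("Streamer 7", "minimalist streaming setup design"),
+     ("Streamer 2", "inserted themselves into social interactions"),
+     ("Viewer 9", "independent and interactive viewing"),  # repeat mention
+ ]
+
+ participants_per_code = defaultdict(set)
+ for participant, code in coded_utterances:
+     participants_per_code[code].add(participant)  # count each participant once
+
+ for code, members in sorted(participants_per_code.items()):
+     print(f"{code}: N = {len(members)}")
+ ```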
105
+ ## 4 RESULTS
106
+
107
+ In this section, we present our findings, highlighted in Table 3, and provide further insight into our participants' experiences with the interpersonal telepresence setups.
108
+
109
+ ### 4.1 Participant Preferences
110
+
111
+ Overall, participants generally expressed that they liked using at least one of the systems, whether as Viewer or Streamer. Participants especially noted that the use of the 360-degree camera in the Viewer-centric setup was unusual yet interesting. Participants also described generally distinct situations in which they would use each setup.
112
+
113
+ For the Viewer-centric setup, participants would most realistically use such a prototype for personal or exploratory use, e.g., if they were talking with friends or family, or if they were in a new place and wanted to stream that new environment to people in remote locations. This is due to the immersive experience that the setup provides.
114
+
115
+ ---
116
+
117
+ ${}^{3}$ https://www.insta360.com/product/insta360-onex2
118
+
119
+ ${}^{4}$ https://www.youtube.com/
120
+
121
+ ---
122
+
123
+ Table 1: Interview Questions
124
+
125
+ <table><tr><td>Streamers</td><td>Viewers</td><td>Both Roles</td></tr><tr><td rowspan="2">Describe how you felt while live-streaming on the 360-degree system and the video-conferencing system.</td><td rowspan="2">Describe how well you were able to see and hear the remote environment in both systems.</td><td>How well were you able to communicate with your partner?</td></tr><tr><td>What features of the video-conferencing system did you like and not like?</td></tr><tr><td rowspan="2">How well were you able to communicate with your partner?</td><td rowspan="2">Describe how socially present your partner felt in both systems.</td><td>What features should be changed in the video-conferencing system?</td></tr><tr><td>What features of the 360-degree system did you like and not like?</td></tr><tr><td rowspan="3">Why did you choose the role of Streamer?</td><td rowspan="3">Why did you choose the role of Viewer?</td><td>What features should be added, removed, or changed in the 360-degree system?</td></tr><tr><td>If you were to choose one of the setups, which setup do you prefer and why?</td></tr><tr><td>What scenarios in your daily life would you consider using technology like this?</td></tr></table>
126
+
127
+ Although participants generally mentioned such exploratory uses for this prototype, another use that came up was security, as the 360-degree view can show the entire environment rather than one specific area, providing multiple perspectives at once.
128
+
129
+ "Like for security reasons that's also a good idea to have like one of these cameras set up somewhere." - Streamer 6
130
+
131
+ For the Streamer-centric setup, participants would most realistically use such a prototype for everyday or professional use, e.g., attending a meeting with a colleague, joining an online lecture, or doing some simple shopping. This is due to the simplicity and familiarity of the setup.
132
+
133
+ "Interacting on zoom in daily life is better, as compared to the 360 camera that what I would suggest in daily life, considering the daily life activities, and I would go with Zoom of course." - Viewer 3
134
+
135
+ ### 4.2 Participant Values
136
+
137
+ Participants held a wide range of values for both telepresence prototypes, relating both to their own interests and to the interests of their partner in the pair.
138
+
139
+ Viewers mainly valued interactive and independent viewing. This relates to the Viewer being able to immerse themselves as much as possible in the Streamer's environment in order to simulate the feeling that they are with the Streamer, along with being able to do as they please while using the prototype and not having to rely on the Streamer to do so.
140
+
141
+ "So I would definitely prefer that 360 when me or my partner and any family member is going on a trip as a road trip or as sightseeing or just go for a ride probably and they can actually spin the camera around and what within my choice I could see here and there, without having to ask them to "hey can you flip the camera" yeah that's one convenience of 360 camera that conference meeting doesn't have." - Viewer 6
142
+
143
+ Streamers mainly valued a minimalist telepresence prototype along with being able to interact with the physical environment while using the setup. Because the Streamer has to stay mobile, they wish to hold and wear as little as possible while still being able to both talk to and see the Viewer. Along these lines, multiple improvements were recommended for both systems to make them more minimalist and user friendly.
144
+
145
+ "Oh yeah making it more minimalistic." - Streamer 3
146
+
147
+ Changes recommended for the Streamer-centric setup included attaching a wider-angle lens for more Viewer immersion and adding a mount for the smartphone. Changes to the Viewer-centric setup included consolidating the equipment into a single multi-function central device, reducing the backpack size, and adding a mount for both the video-conferencing smartphone and the 360-degree camera.
148
+
149
+ "Have an accessory that kind of like connects to the body in some way so that the weight is not just on the fingers, it could also be supported by like the rest or the entire arm, so that, so the user can hold it for extended periods of time." - Streamer 10
150
+
151
+ Participants in general also held shared values, such as wanting to see each other while using either telepresence prototype, ensuring each other's safety, and wanting to directly interact with each other's surroundings.
152
+
153
+ "When I went to see my friend, it was, like me, trying to get both of them to see and hear, but they couldn't hear each other, because of the headphones." - Streamer 1
154
+
155
+ ### 4.3 Social Activity
156
+
157
+ Social activity was present in multiple forms throughout the use of both telepresence prototypes. It consisted of interaction between the Streamer and Viewer, between the Streamer and their environment, and between the Streamer/Viewer and each other's environment.
158
+
159
+ We found that social activity between the Viewer and Streamer was desired, which is exactly what the prototypes were meant to facilitate in the form of telepresence. It was apparent, however, that the Viewer-centric setup provided more social immersion for the Viewer, while the Streamer-centric setup provided more connection and intimacy between the Viewer and Streamer.
160
+
161
+ "I think she was very socially present, I'd ask her questions, we'd have conversations, while I asked her like help me out with this choice should I get this or that and she'd answer back." - Streamer 8
162
+
163
+ "So for the 360 degree experience it's more of a fulfilled experience because not only do I get to hear the person so see the surrounding as if I was there, so, but for the second experience it's more um, I'd say it's more intimate but it's intimacy is not a very correct word because Facetime has become such a useful tool for everyone, and it has become a common tool for us to use, so I feel that I can readily use that tool, instead of a 360.360 is more of an experience, and I would prefer that over to second." - Viewer 6
164
+
165
+ "Like with conferencing like I was able to see my partner as well, so I felt more connected than the 360 where she was able to see me, but I wasn't. I was able to see like on them live conferencing, I was also able to see what I'm recording." - Streamer 7
166
+
167
+ Table 2: Participant Demographics
168
+
169
+ <table><tr><td>Session Number</td><td>Role</td><td>Gender</td><td>Age</td></tr><tr><td rowspan="2">One</td><td>Streamer</td><td>Female</td><td>21</td></tr><tr><td>Viewer</td><td>Female</td><td>21</td></tr><tr><td rowspan="2">Two</td><td>Streamer</td><td>Male</td><td>24</td></tr><tr><td>Viewer</td><td>Male</td><td>23</td></tr><tr><td rowspan="2">Three</td><td>Streamer</td><td>Male</td><td>30</td></tr><tr><td>Viewer</td><td>Female</td><td>27</td></tr><tr><td rowspan="2">Four</td><td>Streamer</td><td>Female</td><td>21</td></tr><tr><td>Viewer</td><td>Female</td><td>21</td></tr><tr><td rowspan="2">Five</td><td>Streamer</td><td>Male</td><td>28</td></tr><tr><td>Viewer</td><td>Female</td><td>29</td></tr><tr><td rowspan="2">Six</td><td>Streamer</td><td>Male</td><td>28</td></tr><tr><td>Viewer</td><td>Female</td><td>24</td></tr><tr><td rowspan="2">Seven</td><td>Streamer</td><td>Female</td><td>21</td></tr><tr><td>Viewer</td><td>Female</td><td>21</td></tr><tr><td rowspan="2">Eight</td><td>Streamer</td><td>Female</td><td>21</td></tr><tr><td>Viewer</td><td>Female</td><td>21</td></tr><tr><td rowspan="2">Nine</td><td>Streamer</td><td>Male</td><td>19</td></tr><tr><td>Viewer</td><td>Male</td><td>19</td></tr><tr><td rowspan="2">Ten</td><td>Streamer</td><td>Male</td><td>22</td></tr><tr><td>Viewer</td><td>Female</td><td>26</td></tr></table>
170
+
171
+ In terms of social activity between the Streamer and their environment, we found three social relationships with the environment: Streamers either sought to actively interact with bystanders, remained neutral and neither sought nor avoided social interactions, or actively tried to avoid possible social interactions. This was influenced by how much they wished to achieve during the use of the prototypes and how much they communicated with their partner.
172
+
173
+ Finally, in some rare instances, either the Streamer or Viewer wished to interact with the other's environment directly, without the other participant serving as a means to facilitate the interaction. These relationships of social activity are accompanied by the location choices that Streamers and Viewers made. Participants mainly chose for the Streamer to go to a place that was intimate or familiar to them, or to partake in an activity together.
174
+
175
+ "That's kind of why I let her be Viewer, because I wanted to walk around and then I saw the arboretum, yeah yeah, and then I was like oh, this is a good chance to like walk around there, I think it was also if I bought myself a cookie and then I walked around like zones." - Streamer 8
176
+
177
+ ## 5 DISCUSSION
178
+
179
+ The results of our study provided insight into the confounding variables that affect the Streamer and Viewer user experience in an interpersonal telepresence system. We also gained insight into the environmental contexts as well as the technological preferences that participants tended to converge towards for a future interpersonal telepresence system. The following subsections detail our findings.
180
+
181
+ ### 5.1 Streamer social management variation (RQ1)
182
+
183
+ Through our study, we found that our Streamers' responses to social situations and interactions varied greatly from Streamer to Streamer. Although previous work typically alluded to the idea that social pressure or awkwardness may inhibit Streamers from pursuing real-world experiences or interactions, our Streamer participants took one of three approaches to social experiences or bystander collocation: they pursued social experiences, acknowledged bystanders, or avoided bystanders.
184
+
185
+ #### 5.1.1 Context affects Streamer approaches
186
+
187
+ Considering the three varying approaches our Streamers employed in regard to bystander or social interactions, we cannot definitively conclude that all Streamers will experience social pressure or discomfort when interacting with an interpersonal telepresence system.
188
+
189
+ Our work implies that one of the confounding factors influencing a Streamer's willingness to pursue social interactions is environmental context. For example, in highly populated areas of our local university campus, our Streamers did not converge on a common approach, but instead chose approaches that best fit what they wanted to achieve through the remote experience. This also points to the fact that our Streamers had very different goals and intents throughout their interpersonal telepresence experience.
190
+
191
+ #### 5.1.2 Streamers are not uniform
192
+
193
+ We have highlighted that context is a highly influential factor in how Streamers interact with interpersonal telepresence systems. Continuing the idea that Streamers' personal goals and intents may influence how they insert themselves into social contexts, we consider that this may stem from the personality types and preferences of the Streamers themselves. What we have learned from this study is that we cannot generalize the attitudes and preferences a Streamer may have. Therefore, when designing an interpersonal telepresence system, we need to account for nonuniform Streamers and allow the system to be tailored to a given Streamer's preferences.
194
+
195
+ ### 5.2 Novelty and Unique Experiences are linearly correlated (RQ2)
196
+
197
+ Across our pairs of participants, our two stakeholder groups held similar values with respect to the contexts in which it is preferable or appropriate to utilize more unique and novel telepresence systems. We converge on two overarching contexts to generalize scenarios in which different telepresence systems may need to be employed: everyday experiences and unique experiences.
198
+
199
+ #### 5.2.1 System Familiarity is linked to Frequency
200
+
201
+ Everyday experiences include events that a general user encounters frequently. According to our participants, such experiences include casually connecting with friends, family, or significant others, one-on-one meetings in a professional context, and classroom environments. Participants also shared that they would typically prefer video-conferencing applications in these settings, as they are easier to use and more familiar than more unique and novel systems. In cases where the remote experience is focused on the human-to-human interaction rather than the environment, video-conferencing seemed sufficient for our participants to fulfill remote interactions and experiences with each other.
202
+
203
+ #### 5.2.2 Unique setups for Unique Experiences
204
+
205
+ On the opposite side of the spectrum, participants uniformly highlighted that in more infrequent, unique experiences (i.e., vacations, hiking trips, theme parks), a more interactive, novel system is preferable for immersing an individual in a remote environment. Participants further rationalized this by emphasizing that the Streamer and Viewer should maintain a certain level of independence from one another. This would ultimately allow the Streamer and Viewer to interact with the remote environment of their own volition, emulating two individuals being present in an environment instead of one. Despite this uniform consensus, we feel that this may be due in part to the systems being rather novel. Pfeil et al. highlighted two important qualities regarding interpersonal telepresence systems: 1) more work is needed within this field, particularly in the form of longitudinal field studies, to learn user values once novelty effects subside, and 2) the concept of interpersonal telepresence systems is relatively new [14].
206
+
207
+ Table 3: Codebook Used For Qualitative Analysis
208
+
209
+ <table><tr><td>Categories (N = 20)</td><td>Codes (N = 20)</td><td>Exemplar Quote</td></tr><tr><td rowspan="2">Participants value more novel designs in more unique situations</td><td>360 camera better for exploratory/special occasion use (N = 20)</td><td>"The 360 I could see that being used, maybe when you're like a museum exhibit and you don't want to walk through every painting, you could just walk straight through and then you know the Viewer can just slide left or right, whatever." - Streamer 8, P15</td></tr><tr><td>Video conference is better for professional/everyday use (N = 20)</td><td>"For [video-conferencing] I see it, more for like school meeting because it's like one on one you don't really need to see if there's other people around or whatnot." - Viewer 7, P14</td></tr><tr><td rowspan="2">Participants valued a more minimalist and interactive interpersonal telepresence design</td><td>Independent and Interactive Viewing (N = 9)</td><td>"I could focus on enjoying what I'm doing and I have to worry about showing them the stuff. They can look around and do what they want to do and it's like they're actually there." - Viewer 9, P18</td></tr><tr><td>Minimalist Streaming Setup Design (N = 9)</td><td>"If I'm doing a livestream then [waist bag], because I just had like better access to the charger or the wires without having to take it off, with a backpack when you asked me to take out the phone and stuff I had to like take it off, but with [the waist bag] it was easy to access." - Streamer 7, P13</td></tr><tr><td rowspan="3">Participants varied in terms of how much social interaction they desired</td><td>Inserted themselves into social interactions (N = 3)</td><td>"For example, I cross over by someone that was doing a TikTok dance, and I was able to grab his Instagram account." - Streamer 2, P3</td></tr><tr><td>Did not care about social interactions (N = 4)</td><td>"I didn't mind actually I wasn't actually aware of people around me that well, maybe because I was distracted like talking with her or something." - Streamer 8, P15</td></tr><tr><td>Tried to minimize social interactions (N = 3)</td><td>"I just didn't really want to get in the way of other people like that, they don't know what we're doing." - Viewer 7, P13</td></tr></table>
210
+
211
+ ### 5.3 Bridging the gap in design based on Stakeholder commonalities (RQ3)
212
+
213
+ From our study, we derived general parameters for designing interpersonal telepresence systems that, in our estimation, would satisfy both the Viewer and the Streamer. We consider two major themes: Interactivity and Minimalism.
214
+
215
+ #### 5.3.1 Stronger Presence through Interactivity and Immersion
216
+
217
+ Our Viewer participants tended to converge on the idea of being able to interact with the streamed video in a higher capacity than in a typical video conference. When prompted about any improvements that could be made to the Streamer-centric design (the video-conferencing application), Viewers hinted that their improvements would ultimately evolve it into the Viewer-centric system, in which the Viewer is able to pan around and view the environment for themselves. Furthermore, when prompted about possible improvements to the Viewer-centric system, most Viewers suggested adding additional controls or making the experience more immersive, if at all possible.
218
+
219
+ #### 5.3.2 Minimizing weight for real-world interaction
220
+
221
+ From the Streamer's perspective, Streamer participants converged on the idea that all the technology and equipment involved in the setup should be low profile. This resolves two main concerns the Streamers encountered during their experiences: comfort and flexibility. Though consistent with previous literature, we found this conclusion intriguing in that physical comfort and flexibility seemed to be a higher priority for our Streamer participants than comfort in the face of socially awkward experiences.
222
+
223
+ Comfort also seemed to be a recurring element, as Streamers wanted to ensure that both the bags and equipment would be comfortable throughout their experiences. To achieve this, Streamers would often manipulate the given equipment into configurations that best suited them. For example, in our original figures we showcased the Streamer-centric design with a one-strap backpack slung across the chest; however, some participants opted to wear the bag as a waist bag or over the shoulder, similar to a purse.
224
+
225
+ Flexibility was another recurring theme across our paired sessions. Streamers wanted the ability to interact with the environment while providing their Viewer counterpart an enjoyable remote experience; however, holding the equipment at times prevented this. Additionally, we informed Streamers that they would be able to view their partner through the Zoom video-conference during the Viewer-centric portion of the sessions. Despite this, Streamers opted to rely on Zoom audio for communication and only carried the 360-degree camera to facilitate the remote experience for their partner. This strongly suggests that Streamers want to maintain the ability to interact with the physical environment without being encumbered by equipment.
226
+
227
+ ### 5.4 Converging towards an ideal telepresence system outlook
228
+
229
+ Based on our Streamer and Viewer participants collectively, it seems that the ideal interpersonal telepresence setup depends on a variety of factors. The overarching factor, however, is the context in which the interpersonal telepresence system is employed and whether that context constitutes a unique experience or an everyday experience.
230
+
231
+ Between the two contexts, there exists an overlap and consensus on the key values important to users in an interpersonal telepresence system: context and minimalism. The context plays an important role in determining whether experiencing a remote environment is a priority for a user. Furthermore, minimalism extends beyond physical and social comfort; it also encompasses how familiar the users are with the systems and whether they are willing to take on any learning load to familiarize themselves with more unique systems. For these reasons, the factors that heavily influence the acceptance of an interpersonal telepresence system are the context in which it is employed and the technology remaining both physically and cognitively minimalist.
232
+
233
+ ## 6 LIMITATIONS AND FUTURE WORK
234
+
235
+ Our results are not applicable to all situations, contexts, and scenarios in which each system could be used. In our study, we asked our participants to choose a location on our local university campus or within a 10-minute drive of the campus. Many participants chose to stay within the campus in favor of familiarity. We therefore cannot speak broadly about more diverse scenarios and contexts beyond hypothetical situations. To mitigate this bias towards familiarity, our aim in a future study is to broaden the set of locations participants are able to select from, which would provide the opportunity for the Streamer or Viewer to share a location they value or that is new to them.
236
+
237
+ Our study was less considerate of the Viewers, as the viewing experience was largely the same across conditions; the only difference was viewing the environment through a 360-degree video stream versus a standard Zoom video stream. Participants desired the ability to further immerse themselves in the environment and even compared the Viewer-centric system to a video game. In a future study, more viewing/interaction options need to be available to the Viewer so that they are able to use varying interaction techniques with the systems.
238
+
239
+ Lastly, our study design provided limited exposure to each stakeholder-centric system. With only 20 minutes to interact with each system, we must ask whether our participants experienced a novelty effect. Our work falls short of identifying the effects of long-term usage of each system. A future improvement would allow participants to interact with the systems for an extended period of time. This would allow the systems to be utilized for a variety of purposes and durations, and would yield a more holistic understanding of participants' perceptions.
240
+
241
+ ## 7 CONCLUSION
242
+
243
+ In this paper, we present our work towards identifying a balance of needs between Viewers and Streamers to apply towards a future interpersonal telepresence prototype system. Through a field study, we made use of prototypes based on prior literature and learned of the values and contexts Streamers and Viewers converged towards. Previous work has shown that telepresence designers have a tendency to create novel systems that augment the abilities of the Streamer but are rather obtrusive in social contexts. By contrast, we found that our participants strongly favored a prototype that provided a balance of independence between the Streamer and Viewer. This prototype would allow the Streamer to freely interact with the physical environment in a minimalist fashion, while providing the Viewer with a highly interactive and entirely autonomous viewing experience of the remote environment. Through our work, we are moving towards the idea of integrating interpersonal telepresence systems within everyday life. With this goal in mind, our hope is to strengthen interpersonal relationships between individuals and to apply the lessons learned from our experiences towards more fulfilling and enriching interpersonal telepresence experiences.
244
+
245
+ ## REFERENCES
246
+
247
+ [1] Quarantine and Isolation. https://www.cdc.gov/coronavirus/2019-ncov/your-health/quarantine-isolation.html.
248
+
249
+ [2] A. Abdullah, J. Kolkmeier, V. Lo, and M. Neff. Videoconference and embodied VR: Communication patterns across task and medium. Proc. ACM Hum.-Comput. Interact., 5(CSCW2), Oct. 2021. doi: 10.1145/3479597
254
+
255
+ [3] B. Friedman, P. H. Kahn, A. Borning, and A. Huldtgren. Value Sensitive Design and Information Systems, pp. 55-95. Springer Netherlands, Dordrecht, 2013. doi: 10.1007/978-94-007-7844-3_4
256
+
257
+ [4] Y. Heshmat, B. Jones, X. Xiong, C. Neustaedter, A. Tang, B. E. Riecke, and L. Yang. Geocaching with a beam: Shared outdoor activities through a telepresence robot with 360 degree viewing. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-13, 2018.
258
+
259
+ [5] C. Ishak, C. Neustaedter, D. Hawkins, J. Procyk, and M. Massimi. Human Proxies for Remote University Classroom Attendance, p. 931-943. Association for Computing Machinery, New York, NY, USA, 2016.
260
+
261
+ [6] B. Jones, A. Witcraft, S. Bateman, C. Neustaedter, and A. Tang. Mechanics of camera work in mobile video collaboration. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 957-966, 2015.
262
+
263
+ [7] R. Kachach, M. Orduna, J. Rodríguez, P. Pérez, A. Villegas, J. Cabrera, and N. García. Immersive telepresence in remote education. In Proceedings of the International Workshop on Immersive Mixed and Virtual Environment Systems (MMVE '21), p. 21-24. Association for Computing Machinery, New York, NY, USA, 2021. doi: 10.1145/3458307.3460967
264
+
265
+ [8] S. Kim, S. Junuzovic, and K. Inkpen. The nomad and the couch potato: Enriching mobile shared experiences with contextual information. In Proceedings of the 18th International Conference on Supporting Group Work, pp. 167-177, 2014.
266
+
267
+ [9] A. Kristoffersson, S. Coradeschi, and A. Loutfi. A review of mobile robotic telepresence. Advances in Human-Computer Interaction, 2013, 2013.
268
+
269
+ [10] J. Li, V. Vinayagamoorthy, R. Schwartz, W. IJsselsteijn, D. A. Shamma, and P. Cesar. Social vr: A new medium for remote communication and collaboration. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI EA '20, p. 1-8. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3334480.3375160
270
+
271
+ [11] M. Minsky. Telepresence. 1980.
272
+
273
+ [12] K. Misawa, Y. Ishiguro, and J. Rekimoto. Ma petite chérie: What are you looking at? a small telepresence system to support remote collaborative work for intimate communication. In Proceedings of the 3rd augmented human international conference, pp. 1-5, 2012.
274
+
275
+ [13] K. Pfeil, P. Wisniewski, and J. J. LaViola Jr. An analysis of user perception regarding body-worn 360° camera placements and heights for telepresence. In ACM Symposium on Applied Perception 2019, pp. 1-10, 2019.
276
+
277
+ [14] K. P. Pfeil, N. Chatlani, J. J. LaViola Jr, and P. Wisniewski. Bridging the socio-technical gaps in body-worn interpersonal live-streaming telepresence through a critical review of the literature. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1):1-39, 2021.
278
+
279
+ [15] K. P. Pfeil, K. A. Kapalo, S. L. Koh, P. Wisniewski, and J. J. LaViola. Exploring human-to-human telepresence and the use of vibro-tactile commands to guide human streamers. In International Conference on Human-Computer Interaction, pp. 183-202. Springer, 2021.
280
+
281
+ [16] J. Procyk, C. Neustaedter, C. Pang, A. Tang, and T. K. Judge. Exploring video streaming in public settings: shared geocaching over distance using mobile video chat. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2163-2172, 2014.
282
+
283
+ [17] I. Rae, G. Venolia, J. C. Tang, and D. Molnar. A framework for understanding and designing telepresence. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work &amp; Social Computing, CSCW '15, p. 1552-1566. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2675133.2675141
284
+
285
+ [18] A. Tang, O. Fakourfar, C. Neustaedter, and S. Bateman. Collaboration in 360 videochat: Challenges and opportunities. Technical report, University of Calgary, 2017.
286
+
287
+ [19] J. C. Tang, C. Wei, and R. Kawal. Social telepresence bakeoff: Skype group video calling, google+ hangouts, and microsoft avatar kinect. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work Companion, CSCW '12, p. 37-40. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2141512.2141531
292
+
293
+ [20] T. Teo, L. Lawrence, G. A. Lee, M. Billinghurst, and M. Adcock. Mixed Reality Remote Collaboration Combining 360 Video and 3D Reconstruction, p. 1-14. Association for Computing Machinery, New York, NY, USA, 2019.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BBrgJZF4pfc/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,349 @@
1
+ § CONTEXT AND MINIMALISM: USER EVALUATIONS ON TWO INTERPERSONAL TELEPRESENCE SYSTEMS
2
+
3
+ Category: Research
4
+
5
+ § ABSTRACT
6
+
7
+ We present a field study with 10 pairs of adults to employ two prototypes of an interpersonal telepresence system: one catered to the Viewer, the individual experiencing a remote environment, and one catered to the Streamer, the individual sharing the remote experience to the Viewer through technological means. Based on previous work, our design choices reflect common values found from both perspectives of the Viewer and Streamer. We then seek to identify the key value tensions and trade-offs in the designs through the employment of similar design choices. We then demonstrate how, through an applied scenario, we learned that users converge on environmental context and minimalism being the prime factors that should influence a general framework for what considerations need to be held when designing a telepresence system catered to one-to-one interactions.
8
+
9
+ Keywords: Telepresence, Live-streaming, Remote experiences
10
+
11
+ Index Terms: Human-centered computing-Visualization-Visualization techniques;
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Physical presence has become a point of contention in multiple communities due to the COVID-19 pandemic. Many people who were previously present with others in the physical world are now either confined to their living space or concerned about the health effects of re-involving themselves in society [1]. Under this confinement, many are limited to virtual forms of communication and interaction, such as video-conferencing applications like Zoom. To mitigate the lack of in-person interactions, remote collaboration and interaction techniques are becoming increasingly important. Due to this need, we are witnessing an emergence of remote interaction technologies, ranging from extended video-conferencing capabilities to mixed reality collaboration experiences.
16
+
17
+ The pursuit of methods to simulate presence in a remote environment led to the ideation of telepresence, a term coined by Marvin Minsky in 1980 to denote the use of technology to simulate presence in a remote physical location despite not being physically located there [11]. This provided the formal definition for researchers to begin conceptualizing and applying their techniques to telepresence-related tasks [9, 14]. Telepresence now spans a multitude of fields and disciplines, such as medical and industrial applications. Within each field, we are witnessing a range of novel implementations that collectively propose effective remote experiences. However, when considering interpersonal telepresence systems, i.e., telepresence systems that hone in on one-to-one remote interactions in which one individual attempts to experience a remote environment alongside an individual in that environment, we are met with concerns regarding how socially present the remote individual feels and how socially comfortable the physically present individual is when hosting the technology required to fulfill the remote experience [4-6, 8, 12-16, 18].
18
+
19
+ With novelty having been a main focal point within the telepresence research community, we are now witnessing a divergence in which aspects of these telepresence systems need to be considered. First, are researchers considering the social effects on participants involved in these telepresence systems and, by extension, interpersonal telepresence systems? Second, researchers must consider whether the technological implementations being employed are what end-users truly desire and value in a telepresence system [14].
20
+
21
+ To answer both these questions, our research intends to investigate the social and technological variables to consider in the development cycle of a future interpersonal telepresence system. To achieve this, we carried out a study with two interpersonal telepresence systems, each designed to be used by a pair of participants. The purpose for designing two interpersonal telepresence systems is to provide two platforms that cater to the two types of users of our systems: the Streamer and the Viewer. We denote the Streamer as the individual physically present in the environment to administer the technology necessary to fulfill the remote experience between them and the Viewer. We denote the Viewer as the individual utilizing technology to view and possibly interact with the remote environment.
22
+
23
+ Both systems we propose were designed with the idea of minimalism in terms of hardware and equipment needed to carry out the remote experience. We denote the two systems as either being Viewer-centric or Streamer-centric. Each design is based on techniques employed in previous literature. Our end goal was to expose our participants to two (2) variations of interpersonal telepresence setups and then prompt them on their experience as well as any improvements they would prefer in an effort to draw conclusions towards an optimal system. We propose the following research questions to be answered through our study:
24
+
25
+ * RQ1: How do Streamer participants approach social interactions while using the interpersonal telepresence systems?
26
+
27
+ * RQ2: In what real-world contexts do Streamers and Viewers truly value utilizing an interpersonal telepresence system?
28
+
29
+ * RQ3: How can we balance social presence of the Viewer and social comfort of the Streamer in a telepresence system?
30
+
31
+ As a result of our study, we found that users converged towards the idea that interpersonal telepresence setups should be context-sensitive. Furthermore, in regard to the interactions and technologies utilized, interpersonal telepresence setups should allow the Streamer and Viewer to maintain a level of interactive independence from one another. We discuss in future sections the finer details of our results as well as the implications for future interpersonal telepresence systems.
32
+
33
+ § 2 RELATED WORK
34
+
35
+ Previous work on interpersonal telepresence systems show that the Viewer's needs are prioritized [14]. With this in mind, we sought to develop and deploy two systems catering to each stakeholder separately. In this section, we review the relevant literature in the areas of telepresence and computing devices.
36
+
37
+ § 2.1 IDENTIFYING THE APPROPRIATE TELEPRESENCE SETUP
38
+
39
+ Telepresence refers to the ability for someone to feel as if they are present in another physical space without actually being there. It can be supported in many forms, from the use of 3D avatars in virtual reality [10] to conventional video-conferencing solutions [6] to full 360-degree video streaming [18]. Our study uses a form of video conferencing because it has been found to socially connect participants more than virtual reality, because it provides a high-fidelity system that is more ubiquitous and familiar to participants, and because its ethical and social risks are better known than those of virtual reality telepresence [2]. As found by Tang et al. and Heshmat et al., 360-degree video-streaming provides more social connection between the Viewer and Streamer due to greater visual immersion, at the expense of verbal communication, which requires additional hardware to support [4, 18]. In contrast to these designs, Teo et al. conducted a collaboration study comparing 360-degree systems to 3D scene reconstruction systems [20]. With these systems employing nonverbal communication techniques (i.e., hand gestures and visual cues), social presence and task completion rate were found to be significantly higher for the 360-degree camera setup.
40
+
41
+ Communication is another prime element of these designs, with video conferencing delivering audio communication. Many forms of video conferencing exist, ranging from a regular video call using proprietary applications [16] to off-the-shelf software such as Skype${}^{1}$ [8, 17, 19].
42
+
43
+ In the past, interpersonal telepresence scenarios centered on a certain task or series of tasks [5, 6, 8, 16, 18]. We are interested in observing the experiences that users undergo while applying such setups in a real-world experience, and in learning what technologies and scenarios users truly value. Considering this, we employed two setups: one using both video conferencing and a 360-degree camera to cater to the Viewer, since it provides more Viewer immersion, and the other using only video conferencing to cater to the Streamer, as it requires fewer items for the Streamer to hold and carry.
44
+
45
+ § 2.2 CHOOSING NECESSARY HARDWARE
46
+
47
+ The key piece of hardware used to provide the video feed in many systems is a camera. However, in various telepresence setups, at least one user of the system is mobile (as in our system), leading to the use of cameras such as 360-degree cameras [7, 8, 18], smartphone cameras [6], or even hand-held cameras [5]. Camera quality is important to consider when using mobile cameras: Kim et al. [8] show that, when presented with various media types, Viewers chose other media types over the video feed because it was of notably poor quality, indicating that good camera quality is vital to a system.
48
+
49
+ To facilitate communication, smartphones [6, 8, 16, 18], desktops [8, 15], and even telepresence robots [4, 5] have been the main media that previous designs have centered on. Smartphones and desktops provide varying functionality that supports communication efficiently (i.e., an internet connection and an integrated camera), with desktops providing larger displays but little mobility and smartphones vice versa, whereas telepresence robots serve more as a medium for physically supporting and moving hardware.
50
+
51
+ With all these designs in mind, the hardware in both of our telepresence setups is similar; both share a desktop for the Viewer and a smartphone for the Streamer, the difference being that the Viewer-centric setup adds a second smartphone and a 360-degree camera to facilitate immersion.
52
+
53
+ § 3 METHODS
54
+
55
+ Previous work in the interpersonal telepresence space has shown that the design of these systems tends to prioritize the Viewer's needs over the Streamer's needs [14]. To address this concern, we pursued a field-study-based approach in which pairs of participants interacted with two interpersonal telepresence systems, each catering to one stakeholder of the pair dynamic. In the following sections, we discuss the remote streaming activities, which aimed to give participants perspective and real-world experience with interpersonal telepresence systems, and the conclusions and values they ultimately converged towards in regard to experience and features.
56
+
57
58
+
59
+ Figure 1: Streamer-Centric Design: Mobile device, adjustable low-profile backpack, and battery bank inside backpack
60
+
61
+ § 3.1 PARTICIPANT DEMOGRAPHICS
62
+
63
+ We distributed information about our study through our local university and online messaging forums in our city. Interested individuals were asked to fill out an online form to ensure eligibility and indicate the date and time of the session they would be available for. Participants were required to be at least 18 years old, have a mobile device that supports calling and headphones, have normal or corrected-to-normal vision, speak English, be able to walk for 40 minutes, and be able to lift and carry 10 pounds. We also asked participants to provide their age and gender. Each pair of participants was required to know each other as friends, family members, or significant others prior to the session. Table 2 details our participant demographics.
64
+
65
+ § 3.2 APPARATUS
66
+
67
+ In the following sections, we describe the apparatus used to carry out our Viewer-centric and Streamer-centric remote streaming activities. In addition to basing our design choices on previous literature, we used off-the-shelf technologies that users would already be familiar with, reducing the potentially steep learning curve of interacting with immersive technologies.
68
+
69
+ § 3.2.1 STREAMER-CENTRIC APPARATUS
70
+
71
+ For the Streamer-centric condition, Streamer participants wore a low-profile backpack and were equipped with a mobile device to facilitate a Zoom${}^{2}$ video-conferencing call for verbal and visual presence of the Viewer. The setup also included a power bank stored in the backpack to charge the mobile device if needed during the remote streaming experience. These design choices were influenced by previous work that focused on utilizing ubiquitous devices, such as a cellular device, to facilitate a social telepresence experience [5, 6, 8, 16]. Figure 1 shows the setup the Streamer used in the Streamer-centric condition.
72
+
73
+ ${}^{1}$ https://www.skype.com/en/
74
+
75
+ ${}^{2}$ https://zoom.us/
76
+
77
78
+
79
+ Figure 2: Viewer-Centric design: 360-degree camera, adjustable backpack, two (2) mobile devices (one in hand and one in backpack), and a battery bank inside backpack
80
+
81
+ § 3.2.2 VIEWER-CENTRIC APPARATUS
82
+
83
+ For the Viewer-centric condition, the Streamer was equipped with a backpack, two (2) mobile devices, an Insta360 One X2 360-degree camera${}^{3}$, and a power bank stored in the backpack for charging purposes. Similar to the Streamer-centric condition, a Zoom video-conferencing call was used to facilitate verbal communication and visual presence. The addition of the 360-degree camera required a private YouTube${}^{4}$ live-stream as the streaming platform. We chose YouTube because it supports a 360-degree video feed and allows the Viewer to pan around and interact with varying viewing angles as they please. The design choices for the Viewer-centric condition were influenced by previous work that investigated the collaborative capabilities between a Streamer and Viewer through a 360-degree camera medium [4, 18]. Figure 2 shows the setup the Streamer utilized to host the stream in the Viewer-centric condition. For both conditions, wireless earbuds facilitated two-way audio communication between the Streamer and the Viewer.
84
+
85
+ § 3.3 STUDY PROCEDURE
86
+
87
+ Prior to their session, we required pairs of participants to disclose a location of interest where they would like to spend time together while using the interpersonal telepresence systems. This location had to be either on our local university campus or within a 10-minute drive of it. We also required participants to choose their role in the Streamer and Viewer dynamic, as the procedures differ between roles.
88
+
89
+ On the day of the session, the Streamer would meet a researcher at the chosen location of interest, while the Viewer would meet another researcher in our lab. Each researcher provided a description of the study, a consent form participants were required to sign, and instructions on how to use the Streamer-centric and Viewer-centric interpersonal telepresence setups. Participants were also informed that the entire session would be audio- and video-recorded and that the recordings would be viewed by and accessible only to the research team.
90
+
91
+ During the remote streaming experience, the Streamer hosted both types of interpersonal telepresence setups. The order was randomized between groups. To facilitate a natural experience between the Streamer and Viewer, the researcher did not interfere or chaperone the Streamer throughout the remote streaming experience. The Streamer hosted the remote experience for 20 minutes per setup, for a total of 40 minutes of remote streaming.
92
+
93
+ For the Streamer-centric interpersonal telepresence setup, the Viewer viewed and communicated with the Streamer through a Zoom video-conference call, which also carried the audio. For the Viewer-centric interpersonal telepresence setup, the Viewer could view and pan around the Streamer's environment through a private YouTube 360-degree video stream, and could additionally see and communicate with the Streamer through a Zoom video-conference call.
94
+
95
+ At the end of the remote streaming activity, the Streamer and Viewer were each given a semi-structured interview, independently of one another, and asked a series of questions about their experiences with both interpersonal telepresence setups for approximately 30 minutes. Table 1 outlines the interview questions participants were asked based on their role in the interpersonal telepresence relationship. The questions were asked in the same order for each pair of participants. Participants were given a $15 Amazon gift card as compensation. All of the activities were audio and video recorded for analysis.
96
+
97
+ § 3.4 DATA ANALYSIS APPROACH
98
+
99
+ All sessions were audio and video recorded and were transcribed by the authors with the assistance of Zoom. We conducted an inductive thematic analysis on both the remote streaming activity and the semi-structured interview to better understand what our users truly valued when reflecting on the proposed setups [3]. We utilized open coding to log participants' explicit and implicit values from their utterances. We are interested in what features users truly want in an interpersonal telepresence system as well as what real-world contexts are best suited for more novel systems.
100
+
101
+ § 4 RESULTS
102
+
103
+ In this section, we present our findings, highlighted in Table 3, and provide further insight into our participants' experiences with the interpersonal telepresence setups.
104
+
105
+ § 4.1 PARTICIPANT PREFERENCES
106
+
107
+ Overall, participants generally expressed that they liked using at least one of the systems, whether as Viewer or Streamer. Participants especially noted that the use of the 360-degree camera in the Viewer-centric setup was unusual yet interesting. Participants also described generally distinct situations in which they would use each setup.
108
+
109
+ For the Viewer-centric setup, participants would most realistically use such a prototype for personal or exploratory use, e.g., if they were talking with friends or family, or if they were in a new place and wanted to stream that new environment to people in remote locations. This is due to the immersive experience that the setup provides.
110
+
111
+ ${}^{3}$ https://www.insta360.com/product/insta360-onex2
112
+
113
+ ${}^{4}$ https://www.youtube.com/
114
+
115
+ Table 1: Interview Questions
116
+
117
+ Streamers:
+
+ * Describe how you felt while live-streaming on the 360-degree system and the video-conferencing system.
+
+ * How well were you able to communicate with your partner?
+
+ * Why did you choose the role of Streamer?
+
+ Viewers:
+
+ * Describe how well you were able to see and hear the remote environment in both systems.
+
+ * Describe how socially present your partner felt in both systems.
+
+ * Why did you choose the role of Viewer?
+
+ Both Roles:
+
+ * How well were you able to communicate with your partner?
+
+ * What features of the video-conferencing system did you like and not like?
+
+ * What features should be changed in the video-conferencing system?
+
+ * What features of the 360-degree system did you like and not like?
+
+ * What features should be added, removed, or changed in the 360-degree system?
+
+ * If you were to choose one of the setups, which setup do you prefer and why?
+
+ * What scenarios in your daily life would you consider using technology like this?
143
+
144
+ Although participants generally mentioned such exploratory uses for this prototype, another use that came up was security, as the 360-degree view can show the entire environment rather than one specific area, providing multiple perspectives at once.
145
+
146
+ "Like for security reasons that's also a good idea to have like one of these cameras set up somewhere." - Streamer 6
147
+
148
+ For the Streamer-centric setup, participants would most realistically use such a prototype for everyday or professional use, e.g., attending a meeting with a colleague, joining an online lecture, or doing some simple shopping. This is due to the simplicity and familiarity of the setup.
149
+
150
+ "Interacting on zoom in daily life is better, as compared to the 360 camera that what I would suggest in daily life, considering the daily life activities, and I would go with Zoom of course." - Viewer 3
151
+
152
+ § 4.2 PARTICIPANT VALUES
153
+
154
+ Participants held a wide range of values for both telepresence prototypes, relating both to their own interests and to the interests of their partner in the pair.
155
+
156
+ Viewers mainly valued interactive and independent viewing. This relates to the Viewer being able to immerse themselves as much as possible in the Streamer's environment in order to simulate the feeling that they are with the Streamer, along with being able to do as they please while using the prototype and not having to rely on the Streamer to do so.
157
+
158
+ "So I would definitely prefer that 360 when me or my partner and any family member is going on a trip as a road trip or as sightseeing or just go for a ride probably and they can actually spin the camera around and what within my choice I could see here and there, without having to ask them to "hey can you flip the camera" yeah that's one convenience of 360 camera that conference meeting doesn't have." - Viewer 6
159
+
160
+ Streamers mainly valued a minimalist telepresence prototype along with being able to interact with the physical environment while using the setup. Because the Streamer has to stay mobile, they wish to hold and wear as little as possible while still being able to both talk to and see the Viewer. Along these lines, multiple improvements were recommended for both systems to make them more minimalist and user friendly.
161
+
162
+ "Oh yeah making it more minimalistic." - Streamer 3
163
+
164
+ Changes recommended for the Streamer-centric setup included attaching a wider-angle lens for more Viewer immersion and adding a mount for the smartphone. Changes to the Viewer-centric setup included consolidating the equipment into a single multi-function central device, reducing the backpack size, and adding a mount for both the video-conferencing smartphone and the 360-degree camera.
165
+
166
+ "Have an accessory that kind of like connects to the body in some way so that the weight is not just on the fingers, it could also be supported by like the rest or the entire arm, so that, so the user can hold it for extended periods of time." - Streamer 10
167
+
168
+ Participants in general also held similar values such as wanting to be able to see each other during the use of either telepresence prototype, making sure that each other is safe, and wanting to directly interact with each other's surroundings.
169
+
170
+ "When I went to see my friend, it was, like me, trying to get both of them to see and hear, but they couldn't hear each other, because of the headphones." - Streamer 1
171
+
172
+ § 4.3 SOCIAL ACTIVITY
+
+ Social activity was present in multiple forms throughout the use of both telepresence prototypes. It consisted of interaction between the Streamer and Viewer, between the Streamer and their environment, and between the Streamer/Viewer and each other's environment.
+
+ Social activity between the Viewer and Streamer was desired, which is exactly what the prototypes were meant to facilitate in the form of telepresence. It was apparent, however, that the Viewer-centric setup provided more social immersion for the Viewer, while the Streamer-centric setup provided more connection and intimacy between the Viewer and Streamer.
+
+ "I think she was very socially present, I'd ask her questions, we'd have conversations, while I asked her like help me out with this choice should I get this or that and she'd answer back." - Streamer 8
+
+ "So for the 360 degree experience it's more of a fulfilled experience because not only do I get to hear the person so see the surrounding as if I was there, so, but for the second experience it's more um, I'd say it's more intimate but it's intimacy is not a very correct word because Facetime has become such a useful tool for everyone, and it has become a common tool for us to use, so I feel that I can readily use that tool, instead of a 360. 360 is more of an experience, and I would prefer that over to second." - Viewer 6
+
+ "Like with conferencing like I was able to see my partner as well, so I felt more connected than the 360 where she was able to see me, but I wasn't. I was able to see like on them live conferencing, I was also able to see what I'm recording." - Streamer 7
+
+ Table 2: Participant Demographics
+
+ | Session Number | Role | Gender | Age |
+ | --- | --- | --- | --- |
+ | One | Streamer | Female | 21 |
+ | One | Viewer | Female | 21 |
+ | Two | Streamer | Male | 24 |
+ | Two | Viewer | Male | 23 |
+ | Three | Streamer | Male | 30 |
+ | Three | Viewer | Female | 27 |
+ | Four | Streamer | Female | 21 |
+ | Four | Viewer | Female | 21 |
+ | Five | Streamer | Male | 28 |
+ | Five | Viewer | Female | 29 |
+ | Six | Streamer | Male | 28 |
+ | Six | Viewer | Female | 24 |
+ | Seven | Streamer | Female | 21 |
+ | Seven | Viewer | Female | 21 |
+ | Eight | Streamer | Female | 21 |
+ | Eight | Viewer | Female | 21 |
+ | Nine | Streamer | Male | 19 |
+ | Nine | Viewer | Male | 19 |
+ | Ten | Streamer | Male | 22 |
+ | Ten | Viewer | Female | 26 |
+
+ In terms of social activity between the Streamer and their environment, we found three social relationships: Streamers either actively sought to interact with bystanders, remained neutral and neither sought nor avoided social interaction, or actively tried to avoid possible social interactions. This stance was influenced by how much they wished to accomplish while using the prototypes and how much they communicated with their partner.
+
+ Finally, in some rare instances, either the Streamer or Viewer wished to interact with the other's environment directly, without the other participant serving as a means to facilitate the interaction. These relationships of social activity are accompanied by the location choices that Streamers and Viewers made. Participants mainly chose for the Streamer to go to a place that was intimate or familiar to them, or to partake in an activity together.
+
+ "That's kind of why I let her be Viewer, because I wanted to walk around and then I saw the arboretum, yeah yeah, and then I was like oh, this is a good chance to like walk around there, I think it was also if I bought myself a cookie and then I walked around like zones." - Streamer 8
+
+ § 5 DISCUSSION
+
+ The results of our study provide insight into the confounding variables that affect the Streamer and Viewer user experience in an interpersonal telepresence system. We also gained insight into the environmental contexts and the technological preferences that participants tended to converge towards for a future interpersonal telepresence system. The following section details our findings.
+
+ § 5.1 STREAMER SOCIAL MANAGEMENT VARIATION (RQ1)
+
+ Through our study, we found that our Streamers' responses to social situations and interactions varied greatly from Streamer to Streamer. While previous work typically alluded to the idea that social pressure or awkwardness may inhibit the pursuit of real-world experiences or interactions, our Streamer participants took one of three approaches to social experiences or bystander collocation: they pursued social experiences, acknowledged bystanders, or avoided bystanders.
+
+ § 5.1.1 CONTEXT AFFECTS STREAMER APPROACHES
+
+ Taking into consideration the three varying approaches our Streamers employed toward bystander or social interactions, we cannot definitively confirm that all Streamers will experience social pressure or discomfort when interacting with an interpersonal telepresence system.
+
+ Our work implies that one of the confounding factors influencing a Streamer's willingness to pursue social interactions is environmental context. For example, in highly populated areas of our local university campus, our Streamers did not converge on a common approach, but instead chose approaches that best fit what they wanted to achieve through the remote experience. This also leads into the idea that our Streamers had very different goals and intents throughout their interpersonal telepresence experience.
+
+ § 5.1.2 STREAMERS ARE NOT UNIFORM
+
+ We have highlighted that context strongly influences how Streamers interact with interpersonal telepresence systems. Continuing the idea that a Streamer's personal goals and intents may influence their insertion into social contexts, we consider that this may lend itself to the personality type and preferences of the Streamers themselves. What we have learned from this study is that we cannot generalize the attitudes and preferences a Streamer may have. Therefore, when designing an interpersonal telepresence system, we need to account for nonuniform Streamers and tailor the system to a given Streamer's preferences.
+
+ § 5.2 NOVELTY AND UNIQUE EXPERIENCES ARE LINEARLY CORRELATED (RQ2)
+
+ Across our pairs of participants, the two stakeholder groups held similar values with respect to the contexts where it is preferable or appropriate to utilize more unique and novel telepresence systems. We converge on two overarching contexts to generalize scenarios in which different telepresence systems may need to be employed: everyday experiences and unique experiences.
+
+ § 5.2.1 SYSTEM FAMILIARITY IS LINKED TO FREQUENCY
+
+ Everyday experiences include events that a typical user encounters frequently. According to our participants, such experiences include casually connecting with friends, family, or significant others, one-on-one meetings in a professional context, and classroom environments. Participants also shared that they would typically prefer video-conferencing applications for these, as they are easier to use and more familiar than more unique and novel systems. In cases where the remote experience focuses on human-to-human interaction rather than the environment, video-conferencing seemed ample for our participants to fulfill remote interactions and experiences with each other.
+
+ § 5.2.2 UNIQUE SETUPS FOR UNIQUE EXPERIENCES
+
+ On the opposite side of the spectrum, participants uniformly highlighted that for more infrequent, unique experiences (i.e., vacations, hiking trips, theme parks), a more interactive, novel system is preferable for immersing an individual in a remote environment. Participants further rationalized this by emphasizing that both the Streamer and Viewer should maintain a certain level of independence from one another. This would ultimately allow the Streamer and Viewer to interact with the remote environment of their own volition, emulating two individuals present in an environment instead of one. Despite this uniform consensus, we feel that this may be due in part to the systems being rather novel. Pfeil et al. highlighted two important qualities regarding interpersonal telepresence systems: 1) more work is needed within this field, particularly in the form of longitudinal field studies, to learn of user values when novelty effects subside, and 2) the concept of interpersonal telepresence systems is relatively new [14].
+
+ Table 3: Codebook Used For Qualitative Analysis
+
+ | Categories (N = 20) | Codes (N = 20) | Exemplar Quote |
+ | --- | --- | --- |
+ | Participants value more novel designs in more unique situations | 360 camera better for exploratory/special occasion use (N = 20) | "The 360 I could see that being used, maybe when you're like a museum exhibit and you don't want to walk through every painting, you could just walk straight through and then you know the Viewer can just slide left or right, whatever." - Streamer 8, P15 |
+ | Participants value more novel designs in more unique situations | Video conference is better for professional/everyday use (N = 20) | "For [video-conferencing] I see it, more for like school meeting because it's like one on one you don't really need to see if there's other people around or whatnot." - Viewer 7, P14 |
+ | Participants valued a more minimalist and interactive interpersonal telepresence design | Independent and Interactive Viewing (N = 9) | "I could focus on enjoying what I'm doing and I have to worry about showing them the stuff. They can look around and do what they want to do and it's like they're actually there." - Viewer 9, P18 |
+ | Participants valued a more minimalist and interactive interpersonal telepresence design | Minimalist Streaming Setup Design (N = 9) | "If I'm doing a livestream then [waist bag], because I just had like better access to the charger or the wires without having to take it off, with a backpack when you asked me to take out the phone and stuff I had to like take it off, but with [the waist bag] it was easy to access." - Streamer 7, P13 |
+ | Participants varied in terms of how much social interaction they desired | Inserted themselves into social interactions (N = 3) | "For example, I cross over by someone that was doing a TikTok dance, and I was able to to grab his Instagram account." - Streamer 2, P3 |
+ | Participants varied in terms of how much social interaction they desired | Did not care about social interactions (N = 4) | "I didn't mind actually I wasn't actually aware of people around me that well, maybe because I was distracted like talking with her or something." - Streamer 8, P15 |
+ | Participants varied in terms of how much social interaction they desired | Tried to minimize social interactions (N = 3) | "I just didn't really want to get in the way of other people like that, they don't know what we're doing." - Viewer 7, P13 |
+
+ § 5.3 BRIDGING THE GAP IN DESIGN BASED ON STAKEHOLDER COMMONALITIES (RQ3)
+
+ From our study, we were able to derive general parameters for designing interpersonal telepresence systems that, in our estimation, would satisfy both the Viewer and the Streamer. We consider two major themes: Interactivity and Minimalism.
+
+ § 5.3.1 STRONGER PRESENCE THROUGH INTERACTIVITY AND IMMERSION
+
+ Our Viewer participants tended to converge on the idea of interacting with the streamed video in a higher capacity than in a typical video conference. When prompted about improvements to the Streamer-centric design (the video-conferencing application), Viewers hinted that an improved system would ultimately evolve into the Viewer-centric system, in which the Viewer can pan around and view the environment for themselves. Furthermore, when prompted about possible improvements to the Viewer-centric system, most Viewers suggested adding more controls or making the experience more immersive, if at all possible.
+
+ § 5.3.2 MINIMIZING WEIGHT FOR REAL-WORLD INTERACTION
+
+ From the Streamer's perspective, participants converged on the idea that all the technology and equipment involved in the setup should be low profile. This addresses two main concerns the Streamers encountered during their experiences: comfort and flexibility. Though supported by previous literature, we found this conclusion intriguing in that physical comfort and flexibility seemed to be a higher priority for our Streamer participants than comfort related to socially awkward experiences.
+
+ Comfort was a recurring element, as Streamers wanted to ensure that both the bags and the equipment would be comfortable throughout their experiences. To achieve this, Streamers would often arrange the given equipment in configurations that best suited them. For example, in our original figures, we showcased the Streamer-centric design with a one-strap backpack slung across the chest; however, some participants opted to wear the bag as a waist bag or over the shoulder, similar to a purse.
+
+ Flexibility was another recurring theme across our paired sessions. Streamers wanted the ability to interact with the environment while providing their Viewer counterpart an enjoyable remote experience; however, holding the equipment at times prevented this. Additionally, we informed Streamers that they would be able to view their partner through the Zoom video conference during the Viewer-centric portion of the sessions. Despite this suggestion, Streamers opted to rely on Zoom audio for communication and carried only the 360-degree camera to facilitate the remote experience for their partner. This strongly suggests that Streamers want to maintain the ability to interact with the physical environment without being impaired by equipment.
+
+ § 5.4 CONVERGING TOWARDS AN IDEAL TELEPRESENCE SYSTEM OUTLOOK
+
+ Considering our Streamer and Viewer participants collectively, the ideal interpersonal telepresence setup comprises a variety of factors. The overarching factor, however, is the context in which the interpersonal telepresence system is employed and whether it falls under a unique experience or an everyday experience.
+
+ Between the two contexts, there is an overlap and consensus on the key values important to users in an interpersonal telepresence system: context and minimalism. The context plays an important role in determining whether experiencing a remote environment is a priority for a user. Furthermore, minimalism extends beyond physical and social comfort; it also includes how familiar the users are with the systems and whether they are willing to take on any learning load to familiarize themselves with more unique systems. For these reasons, the factors that most heavily influence the acceptance of an interpersonal telepresence system are the context in which it is employed and the technology remaining both physically and cognitively minimalist.
+
+ § 6 LIMITATIONS AND FUTURE WORK
+
+ Our results are not applicable to all situations, contexts, and scenarios in which each system could be used. In our study, we asked our participants to choose a location on our local university campus or within a 10-minute drive of the campus. Many participants chose to stay on campus in favor of familiarity. We therefore cannot speak broadly about more diverse scenarios and contexts beyond hypothetical situations. To mitigate this bias towards familiarity, our aim in a future study is to broaden the locations participants are able to select from, which would give the Streamer or Viewer the opportunity to share a location they value or that is new to them.
+
+ Our study offered less variety to the Viewers, as the viewing experience was more or less the same in both conditions, the only difference being whether the environment was viewed through a 360-degree video stream or a video stream in Zoom. Participants desired the ability to further immerse themselves in the environment and even compared the Viewer-centric system to a video game. A future study should make more viewing/interaction options available to the Viewer so that they can use varying interaction techniques with the systems.
+
+ Lastly, our study design provided limited exposure to each stakeholder-centric system. With 20 minutes to interact with each system, we must ask whether our participants experienced a novelty effect. Our work falls short of identifying the effects of long-term usage of each system. A future improvement would allow participants to interact with the systems for an extended period of time. This would allow the systems to be utilized for a variety of purposes and durations, and would yield a more holistic understanding of participants' perceptions.
+
+ § 7 CONCLUSION
+
+ In this paper, we present our work towards identifying a balance of needs between Viewers and Streamers to apply to a future interpersonal telepresence prototype system. Through a field study, we made use of prototypes based on prior literature and learned of the values and contexts Streamers and Viewers converged towards. Previous work has shown that telepresence designers have a tendency to create novel systems that augment the abilities of the Streamer but are rather obtrusive in social contexts. By contrast, we found that our participants strongly favored a prototype that provided a balance of independence between the Streamer and Viewer. This prototype would allow the Streamer to freely interact with the physical environment in a minimalist fashion, while providing the Viewer with a highly interactive and entirely autonomous viewing experience of the remote environment. Through our work, we are moving towards the idea of integrating interpersonal telepresence systems within everyday life. With this goal in mind, our hope is to strengthen interpersonal relationships between individuals and to apply the lessons learned from our experiences towards more fulfilling and enriching interpersonal telepresence experiences.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BTzgpgtNaGq/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,323 @@
+ # Comparison of a VR Stylus with a Controller, Hand Tracking and a Mouse for Object Manipulation and Medical Marking Tasks in Virtual Reality
+
+ ## Abstract
+
+ For medical surgery planning, virtual reality (VR) provides a new kind of user experience, in which 3D images of the operation area can be utilized. Using VR, it is possible to view the 3D models in a more realistic 3D environment, which reduces perception problems and increases spatial understanding. In the present experiment, we compared a mouse, hand tracking, and a combination of a VR stylus and a VR controller as interaction methods in VR. The purpose was to study the viability of the methods for tasks conducted in medical surgery planning in VR. The tasks required interaction with 3D objects and high marking accuracy. The stylus and controller combination was the most preferred interaction method. In the subjective results it was considered the most appropriate, while in the objective results the mouse was the most accurate interaction method.
+
+ Index Terms: Human-centered computing-Human computer interaction (HCI)-Interaction devices-Pointing devices; Human-centered computing-Human computer interaction (HCI)-Empirical studies in HCI; Human-centered computing-Human computer interaction (HCI)-Interaction paradigms-Virtual reality
+
+ ## 1 INTRODUCTION
+
+ Virtual reality makes it possible to create computer-generated environments that replace the real world. For example, the user can interact with virtual object models more flexibly, using various interaction methods, than with real objects in the real environment. VR has become a standard technology in research, but it has not been fully exploited in professional use even though its potential has been demonstrated.
+
+ In the field of medicine, x-ray imaging is routinely used to diagnose diseases and anatomical changes as well as for scientific surveys [31]. In many cases 2D medical images are satisfactory, but they can be complemented with 3D images for more complex operations where a detailed understanding of the 3D structures is needed.
+
+ When planning surgeries, medical doctors, surgeons, and radiologists study 3D images. Viewing 3D images on 2D displays can make it difficult to control object position, orientation, and scaling. Using VR devices, such as head-mounted displays (HMDs), 3D images can be perceived more easily when viewed and interacted with in a 3D environment rather than on a 2D display. For medical professionals to be able to do the same tasks in VR as they do in 2D, the interaction methods need to be studied properly. The interaction method needs to be accurate, reasonable, and suitable for the medical tasks. Because this is medical work, accuracy is crucial to avoid as many mistakes as possible. König et al. [21] studied adaptive pointing for the accuracy problems caused by hand tremor when pointing at distant objects. The interaction method also needs to be natural, so that doctors would use it in their daily work and could still focus on their primary tasks without paying too much attention to the interaction method itself. One typical task for doctors is marking anatomical structures and areas on the surface of the 3D model. The marked points define the operative area, or they can be used for training.
+
+ For 2D content, a mouse is one of the best options for interaction due to its capability to point at small targets with high accuracy and the fact that many users are already very experienced with the device [27]. A mouse cursor can be used for 3D pointing with ray-casting [34], which allows pointing at distant objects as well. The familiarity and accuracy make the mouse a worthy input method in VR, even though it is not a 3D input device. In addition, controllers have been identified as an accurate interaction method [13, 17] and they are typically used in VR environments [22]. Controllers enable direct manipulation, and the reach of distant objects differs from the mouse with ray-casting. Other devices, like styluses, have been studied in pointing tasks previously [27, 40]. Therefore, we aimed to investigate the performance of a stylus together with a controller in selected tasks.
+
+ The cameras and sensors on HMD devices also allow hand tracking without hand-held input devices. Pointing at objects with a finger is a natural way of acting for humans, so hand interaction can be expected to be received well. Hand interaction was selected as one of the conditions based on interviews with medical professionals and their expectations for the supporting technology.
+
+ We decided to use a marking task to assess the three interaction conditions. The conditions were a standard mouse, bare hands, and a handheld controller with a VR stylus. All methods were used in a VR environment to minimise additional variation between methods and to focus the comparison on the interaction techniques. The use of the HMD also allowed the participants to easily study the target from different directions by moving their head. In the medical marking task, the doctor observes the anatomical structures by turning and moving the 3D object while looking for the best location for the mark. The time spent on manipulation is not easily separated from the time spent on the final marking. The doctor decides during the manipulation from which angle and how the marking will be done, which affects the marking time. This made application of Fitts' law [11] impossible in our study, as it requires that a participant cannot influence target locations.
+
+ We had 12 participants who were asked to do simplified medical surgery marking tasks. To study the accuracy of the interaction methods, we created an experiment in which the 3D model contained a predefined target to be marked (pointed at and selected). In a real medical case, the doctor would define the target, but then the accuracy could not be easily measured. This study focused mainly on subjective evaluations of the interaction methods, but also included objective measurements.
+
+ The paper is organized as follows: First, we go through the background of object manipulation and marking, interaction methods in 3D environments, and jaw osteotomy surgery planning (Section 2). Then, we introduce the compared interaction methods and the measurements used (Section 3), and describe the experiment (Section 4), including apparatus, participants, and study task. Finally, the results are presented (Section 5) and discussed (Section 6).
+
+ ## 2 BACKGROUND
+
+ ### 2.1 Object manipulation and marking
+
+ Object manipulation, i.e., rotating and translating an object in 3D space, and object marking, i.e., putting a small mark on the surface of an object, have been used as separate tasks when different VR interaction methods have been studied. Sun et al. [32] used a 3D positioning task that involved object manipulation. When a mouse and a controller were compared for precise 3D positioning, the mouse was found to be the more precise input device. Object marking has been studied without manipulation in [27]. Argelaguet and Andujar [1] studied 3D object selection techniques in VR, and Dang et al. [9] studied 3D pointing techniques. As there are no clear standard techniques for 3D object selection or 3D pointing, Argelaguet and Andujar and Dang et al. attempt to establish practices for studying new techniques in 3D UIs.
+
+ In earlier work using bimanual techniques, Balakrishnan and Kurtenbach [5] presented a study where the dominant and non-dominant hand had their own tasks in a virtual 3D scene. The bimanual technique was found to be faster and preferable. People typically use both hands cooperatively to perform the most skilled tasks [5, 12]: the dominant hand is used for the more accurate functions, while the non-dominant hand sets the context, such as holding a canvas while the dominant hand draws. The result is optimal when bimanual techniques are designed to utilize the strengths of both the dominant and non-dominant hand.
+
+ ### 2.2 Input devices for object manipulation and marking
+
+ #### 2.2.1 Mouse
+
+ A mouse is a common, familiar, and accurate device for pointing at small targets in 2D content with high accuracy [27]. The mouse is also a common device for medical surgery planning [22]. Many studies have used a mouse cursor for 3D pointing with ray-casting [6, 8, 22, 27, 34]. The ray-casting technique is easily understood, and it is a solution for reaching objects at a distance [25].
+
+ Compared to other interaction methods in VR, the discrepancy between the 2D mouse and a 3D environment has been reported as an issue [1], and manipulation in 3D requires a way to switch between dimensions [4]. Balakrishnan et al. presented the Rockin'Mouse for selection in a 3D environment while avoiding hand fatigue. Kim and Choi [20] mentioned that the discrepancy creates low user immersion. In addition, use of a mouse usually forces the user to sit down next to a table instead of standing. The user can rest their arms on the table while interacting with the mouse, which decreases hand fatigue. Johnson et al. [18] stated that fatigue with mouse interaction appears only after 3 hours.
+
+ Bachmann et al. [3] found that the Leap Motion controller has a higher error rate and higher movement time than the mouse. Kim and Choi [20] showed in their study that the 2D mouse has high performance in working time, accuracy, ease of learning, and ease of use in VR. Both Bachmann et al. and Kim and Choi found the mouse to be accurate; on the other hand, Li et al. [22] pointed out that in difficult marking tasks a small displacement of the physical mouse can lead to a large displacement on the 3D model in the 3D environment.
+
+ #### 2.2.2 Hands
+
+ Hand interaction is a common VR interaction method. Voigt-Antons et al. [39] compared free-hand interaction and controller interaction with different visualizations. Huang et al. [17] compared different interaction combinations between free hands and controllers. Both found that hand interaction has lower precision than controller interaction. With alternative solutions like a Leap Motion controller [28, 41] or wearable gloves [42], hand interaction can be made more accurate. Physical hand movements create a natural and realistic experience of interaction [10, 17], and therefore hand interaction remains an area of interest.
+
+ #### 2.2.3 Controllers
+
+ Controllers are the leading control inputs for VR [17]. When controllers are used as the interaction method, marking and selecting are usually done with one of the triggers or buttons on the controller. Handheld controllers are described as stable and accurate devices [13, 17]. However, holding extra devices may become inconvenient if the hands are needed for other tasks between different actions. When interacting with hands or controllers in VR, arm fatigue is one of the main issues [1, 15]. Holding the arms up and carrying the devices further increase arm fatigue.
+
+ #### 2.2.4 VR stylus
+
+ A VR stylus is a pen-like handheld device that is used in a VR environment as a controller. The physical appearance of the Logitech VR Ink stylus [23] is close to a regular pen, except that it has buttons that enable different interactions in VR, e.g., selecting. Batmaz et al. [7] studied the Logitech VR Ink stylus as a selection method in virtual reality. They found that, with a precision grip, there were no statistical differences in marking as the distance of the target changed. Wacker et al. [40] presented, as one of their designs, a VR stylus for mid-air pointing, where selection happened by pressing a button. For object selection, users preferred a 3D pen over a controller in VR [27].
+
+ ### 2.3 Jaw osteotomy surgery planning
+
+ Cone Beam Computed Tomography (CBCT) is a medical imaging technique that produces 3D images that can be used in virtual surgery planning. Compared to earlier techniques used in medical surgery planning, such as cast models, virtual planning with CBCT images has extra costs and time requirements [14]. However, the technique offers several advantages for planning accuracy and reliability [31]. CBCT images can be used as 3D objects in VR for surgery planning with an excellent match to real objects [14]. Ayoub and Pulijala [2] reviewed different studies on virtual and augmented reality applications in oral and maxillofacial surgeries.
+
+ In virtual surgery planning, the procedures for surgery are implemented and planned beforehand. The real surgery is done based on the virtual plan. Common tasks in dental planning are specifying the location of impacted teeth, preventing nerve injuries, or preparing guiding flanges [31]. In VR this can be done by marking critical areas or drawing cutting lines onto the models. Virtual planning can be used in student education as well, where the procedures can be realistically practiced. Reymus et al. [29] found that students understood mouth anatomy better after studying 3D models in a VR environment than from regular 2D images. Objects can be brought closer and made bigger, and they can move in the depth direction in a 3D environment, unlike in a 2D environment [19].
+
+ Tasks like understanding the 3D object and marking critical areas on it need to be done in medical surgery planning. However, working with 3D objects in a 2D environment makes the task more difficult. Hinckley et al. [16] studied issues in developing effective free-space 3D user interfaces. Appropriate interaction and marking methods help users understand 3D objects and perform the required tasks in VR. In this study, we evaluated three methods for VR object manipulation and marking and examined their performance in simplified medical surgery planning tasks.
+
+ ## 3 Method
+
+ ### 3.1 Mouse
+
+ In the first interaction method, a regular mouse was used inside the VR environment (Figure 1). In the VR environment there was a visualized mouse model that the participant moved by manipulating the physical mouse, and with which they controlled the direction of a ray starting from the model. The ray was always visible in the Mouse condition.
+
+ Mouse was used one-handed, whereas the other two methods were two-handed. The mouse performed two functions, manipulation and marking, while in the other methods these functions were separated into different hands. In addition, Mouse used ray-casting, with the ray starting from the mouse, while the other two methods did not; they used direct mid-air object manipulation.
+
+ ![01963e77-a16f-715e-adcc-f9a4af5e6243_2_277_146_1239_498_0.jpg](images/01963e77-a16f-715e-adcc-f9a4af5e6243_2_277_146_1239_498_0.jpg)
+
+ Figure 1: Mouse interaction method outside VR (left). Mouse marking method inside VR and the study task (right).
+
+ The participant could rotate the object in three dimensions by moving the mouse while holding the right button. For 3D translations, the participant used the scroll button. Using the scroll wheel, the user can zoom in and out (translate in Z), and by pressing the scroll button and moving the mouse, the user can translate up-down and sideways (translate in X and Y). Markings were made by pointing at the target with the ray and pressing the left button.
+
+ A video pass-through is not strictly required for using the real-world mouse inside VR, although the mouse was visible in our study: after putting on the headset, the user could see a virtual mouse positioned where the physical mouse was located, so they could find and reach the device. When the user moved the physical mouse sideways, the movement was converted to a horizontal rotation of the beam from the virtual mouse, and when the mouse was moved back and forth, the movement was converted to a vertical rotation of the beam. This way the user can cover a large space, similar to using a mouse with a 2D display. To improve ergonomics, the user could configure the desk and chair for their comfort.
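+
+ As an illustration of this mapping, here is a minimal sketch (not the study software, which was built in Unity): physical mouse deltas are converted into yaw and pitch rotations of the ray anchored at the virtual mouse model. The sensitivity constants and names are assumptions for illustration.
+
+ ```python
+ # Sketch of the described mouse-to-ray mapping; sensitivities are
+ # illustrative assumptions, not values from the paper.
+ YAW_DEG_PER_COUNT = 0.05    # sideways motion -> horizontal ray rotation
+ PITCH_DEG_PER_COUNT = 0.05  # back/forth motion -> vertical ray rotation
+
+ class VirtualMouseRay:
+     def __init__(self):
+         self.yaw_deg = 0.0    # horizontal angle of the beam
+         self.pitch_deg = 0.0  # vertical angle of the beam
+
+     def on_mouse_move(self, dx_counts, dy_counts):
+         """Convert raw mouse deltas into ray rotation, clamping pitch."""
+         self.yaw_deg += dx_counts * YAW_DEG_PER_COUNT
+         self.pitch_deg = max(-89.0, min(89.0,
+             self.pitch_deg + dy_counts * PITCH_DEG_PER_COUNT))
+
+ ray = VirtualMouseRay()
+ ray.on_mouse_move(dx_counts=120, dy_counts=-40)
+ print(ray.yaw_deg, ray.pitch_deg)  # 6.0 -2.0
+ ```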
+
+ ### 3.2 Hands
+
+ In the second interaction method, the participant used bare hands. The left hand was used for object manipulation and the right hand for object marking. The participant could pick up the 3D object with a pinch gesture of the left hand to rotate and move it. Marking was done with a virtual pen. In the VR environment, the virtual pen was attached to the participant's right palm, near the index finger (Figure 2, right). As the palm moved, the pen moved accordingly. When the virtual pen tip was close to the target, the tip changed its color to green to show that the pen was touching the surface of the object. The mark was placed on the surface by bending the index finger and pressing the pen's virtual button. The participant had to keep their palm steady when pressing the button to prevent the pen from moving.
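+
+ The touch feedback can be thought of as a simple proximity test; the following is an assumed sketch of such logic (the paper does not publish its implementation), with the threshold value chosen purely for illustration.
+
+ ```python
+ # Assumed pen-tip feedback logic: turn the tip green when it is within
+ # a small distance of the object's surface. Threshold is illustrative.
+ import math
+
+ TOUCH_THRESHOLD_MM = 2.0
+
+ def tip_color(tip_pos_m, closest_surface_point_m):
+     """Pick the pen-tip color from the tip-to-surface distance."""
+     dist_mm = math.dist(tip_pos_m, closest_surface_point_m) * 1000.0
+     return "green" if dist_mm <= TOUCH_THRESHOLD_MM else "white"
+
+ print(tip_color((0.10, 0.20, 0.30), (0.10, 0.20, 0.3015)))  # green
+ ```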
+
+ ### 3.3 Controller and VR stylus
+
+ The third interaction method was based on having a controller in the participant's left hand for object manipulation and a VR stylus in the right hand for marking (Figure 3). The participant grabbed the 3D object with a grab gesture around the controller to rotate and move the object. The markings were made with the physical VR stylus. The VR stylus was visualized in VR, as was the mouse, so the participant knew where the device was located. The participant pointed at the target with the stylus and pressed its physical button to make the mark. The press action was identical to the virtual pen press in the Hands method. There was passive haptic feedback when touching the physical VR stylus, which did not happen with the virtual pen.
+
+ There have been some supporting results for using a mouse in VR [3, 20, 22, 25], but a 2D mouse is not fully compatible with the 3D environment [20]. We studied the ray method with Mouse to compare it against Hands and Controller+Stylus for 3D object marking. We also compared Hands, without any devices, to methods with a device in one or both hands. The marking gesture was designed to be similar in the Hands and Controller+Stylus methods to allow comparison of the effect of the devices.
+
+ ### 3.4 Measurements and the pilot study
+
+ The participant was asked to make a marking as close to the target location as possible. We used the Euclidean distance to measure the distance between the target and the participant's marking. The task completion times were measured. The participant was able to re-mark the target if they were dissatisfied with the current marking. We counted how many re-markings were made to see whether any of the interaction methods required more re-marking than the others. We thus measured accuracy in two ways: as the distance from the target and as the number of dissatisfied markings.
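+
+ For clarity, the accuracy measure is the standard Euclidean distance between the target position $(x_t, y_t, z_t)$ and the marking position $(x_m, y_m, z_m)$: $d = \sqrt{(x_t - x_m)^2 + (y_t - y_m)^2 + (z_t - z_m)^2}$.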
+
+ A satisfaction questionnaire was filled in after each interaction method trial. There was one question and seven satisfaction statements, evaluated on a Likert scale from 1 (strongly disagree) to 5 (strongly agree). The items were grouped so that the question and the first statement concerned the overall feeling, and the rest of the statements concerned object manipulation and marking separately. The items were:
+
+ - Would you think to use this method daily?
+
+ - Your hands are NOT tired.
+
+ - It was natural to perform the given tasks with this interaction method.
+
+ - It was easy to handle the 3D objects with this interaction method.
+
+ - The interaction method was accurate.
+
+ - The marking method was natural.
+
+ - It was easy to make the marking with this marking method.
+
+ - The marking method was accurate.
+
+ ![01963e77-a16f-715e-adcc-f9a4af5e6243_3_257_145_1287_498_0.jpg](images/01963e77-a16f-715e-adcc-f9a4af5e6243_3_257_145_1287_498_0.jpg)
+
+ Figure 2: Hands interaction method outside VR (left). Hands marking method inside VR and the study task (right).
+
+ ![01963e77-a16f-715e-adcc-f9a4af5e6243_3_369_722_1054_492_0.jpg](images/01963e77-a16f-715e-adcc-f9a4af5e6243_3_369_722_1054_492_0.jpg)
+
+ Figure 3: Controller interaction method outside VR (left). Stylus marking method inside VR and the study task (right).
+
+ The statements were designed to measure fatigue, naturalness, and accuracy, as these have been measured in earlier studies [1, 10, 17] as well. Accuracy was also measured from the data, to see whether the objective and subjective results were consistent. With these statements it was also possible to measure easiness and willingness to use the method daily, which cannot be derived from objective data.
+
+ The questionnaire also contained open-ended questions about positive and negative aspects of the interaction method. At the end, the participant was asked to rank the interaction methods in order from the most liked to the least liked.
+
+ A pilot study was arranged to ensure that the tasks and the study procedure were feasible. Based on the findings of the pilot study, we modified the introduction to be more specific and added a mention of the measured features. We also added the ability to keep rotating the 3D object even after the mouse ray moved off the object. The speed of the mouse ray in the VR environment was increased to better match the movements of the real mouse.
+
+ ### 3.5 Statistical measures
+
+ We used two different statistical tests to analyze possible statistically significant differences between parameter sets. For objective data (completion times, number of markings, and accuracy) we used the paired t-test. For data from the evaluation questionnaires (fatigue, daily use, naturalness, easiness, and subjective accuracy) we first used the Friedman test to see whether any statistically significant differences appeared, and then the Wilcoxon signed-rank test, as it does not assume the numbers to be on a ratio scale or to have a normal distribution.
+
+ The study software recorded times in milliseconds and distances in meters. To clarify the analysis, we converted these to seconds and millimeters.
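+
+ The following sketch illustrates this analysis pipeline with SciPy (an assumption; the paper does not name its statistics software, and the data values here are made up).
+
+ ```python
+ # Illustrative analysis pipeline: paired t-test for objective data,
+ # Friedman omnibus test then pairwise Wilcoxon for Likert data.
+ import numpy as np
+ from scipy.stats import ttest_rel, friedmanchisquare, wilcoxon
+
+ # Objective data: per-participant marking error, recorded in meters
+ # and converted to millimeters for reporting.
+ err_mouse_mm = np.array([0.0031, 0.0035, 0.0028, 0.0033]) * 1000.0
+ err_hands_mm = np.array([0.0062, 0.0055, 0.0071, 0.0049]) * 1000.0
+
+ # Paired t-test with a Bonferroni-corrected limit for 3 comparisons.
+ t, p = ttest_rel(err_mouse_mm, err_hands_mm)
+ print(f"paired t-test: p={p:.3f}, significant={p < 0.05 / 3}")
+
+ # Questionnaire data (Likert 1-5), one list per condition.
+ likert_mouse = [3, 4, 2, 3, 4, 3]
+ likert_hands = [2, 3, 2, 2, 3, 2]
+ likert_stylus = [5, 4, 5, 4, 5, 5]
+ stat, p_omnibus = friedmanchisquare(likert_mouse, likert_hands, likert_stylus)
+ if p_omnibus < 0.05:  # only test pairs after a significant omnibus test
+     w, p_pair = wilcoxon(likert_mouse, likert_stylus)
+     print(f"Wilcoxon Mouse vs Controller+Stylus: p={p_pair:.3f}")
+ ```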
+
+ ## 4 EXPERIMENT
+
+ ### 4.1 Participants
+
+ We recruited 12 participants for the study. The number of participants was decided based on a power analysis for the paired t-test and the Wilcoxon signed-rank test, assuming a large effect size, a power level of 0.8, and an alpha level of 0.05. The post hoc calculated effect sizes (Cohen's d for the paired t-test, or the R value for the Wilcoxon signed-rank test) are reported together with the p-values in the Results (Section 5) for comparison against the assumption of a large effect size. Ten of the participants were university students and two were full-time employees, in fields not related to medicine or dentistry. Ages ranged from 21 to 30 years, with a mean of 25 years. There were 6 female and 6 male participants. Earlier VR experience was asked about on a scale from 0 to 5; the mean was 1.75, and two participants did not have any earlier experience. One participant was left-handed but was accustomed to using the mouse with the right hand; the other participants were right-handed.
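+
+ A sketch of this kind of a priori power calculation (the tooling is an assumption; the paper does not name its power-analysis software):
+
+ ```python
+ # A priori sample size for a paired t-test, assuming a large effect.
+ from statsmodels.stats.power import TTestPower
+
+ n = TTestPower().solve_power(effect_size=0.8, power=0.8, alpha=0.05)
+ print(f"required sample size: {n:.1f}")
+ # Roughly 14-15 participants for d = 0.8; assuming a somewhat larger
+ # effect (d around 0.9-1.0) brings the requirement near 10-12.
+ ```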
+
+ ### 4.2 Apparatus
+
+ #### 4.2.1 Software, hardware, and hand tracking
+
+ The experiment software was built using Unity [35]. With all methods we used a Varjo VR2 Pro headset [37], whose integrated vision-based hand tracking was used for the Hands interaction. Hands were tracked by the Ultraleap Stereo IR 170 sensor mounted on the headset. For Controller+Stylus, we used a Valve Index controller [36] together with the Logitech VR Ink stylus [23]. These were tracked by SteamVR 2.0 base stations [38] around the experiment area.
+
+ #### 4.2.2 Object manipulation and object marking
+
+ The study task combined two phases: an object manipulation phase, where the object was rotated and translated in 3D space, and an object marking phase, where a small mark was placed on the surface of the object. In the manipulation phase, the participant either selected the 3D object with the mouse ray or pinched or grabbed it with a hand gesture. The 3D objects did not have any physics and floated in mid-air. By rotating and translating the object, the participant could view it from different angles. The participant could also move their head to change their point of view.
+
+ Instead of only pointing at the target, the marking had to be confirmed. This allowed us to measure the marking accuracy and whether the user understood the 3D target's location relative to the pointing device. The participant could either release the 3D object in mid-air or hold it in their hand when Hands or Controller+Stylus was used for the marking task. The marking was done either by pointing with the mouse ray and clicking the left button, by touching the target with the virtual pen and marking with a hand gesture, or by touching the target and marking with the VR stylus.
+
+ ### 4.3 Procedure
+
+ First, the participant was introduced to the study, asked to read and sign a consent form, and asked to fill in a background information form. For each condition, the facilitator demonstrated the system functions and controls. Each participant had an opportunity to practice before every condition. The practice task was to move and rotate a cube with several target spheres, and to mark those targets as many times as needed to get to know both the interaction and the marking methods. Once the participant felt confident with the method, they pressed the Done button, and the real study task appeared.
+
+ The participant was asked to find and mark a hidden target on the surface of each 3D object model. The target was visible the whole time, whereas the mark was created by the participant. When the target was found, it was first pointed at and then marked. The aim was to place the participant's mark (a yellow sphere) inside the target sphere (red) (see Figures 1 right, 2 right, and 3 right). Each 3D object had one target on it, and the task was repeated five times per condition. The order of the 3D objects was the same for all participants: lower jaw, heart, skull, tooth, and skull. The order of the interaction methods was counterbalanced between participants using balanced Latin squares, to compensate for possible learning effects. The target locations on the 3D objects were predefined and presented in the same order to all participants.
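+
+ For three conditions, a balanced Latin square can be built from cyclic orders plus their mirrored counterparts, so that every condition precedes every other equally often. A small sketch follows (the paper does not list its exact orders):
+
+ ```python
+ # Balanced Latin square for an odd number of conditions: cyclic rows
+ # plus their reversals balance first-order carryover effects.
+ CONDITIONS = ["Mouse", "Hands", "Controller+Stylus"]
+
+ def balanced_latin_square(items):
+     n = len(items)
+     rows = [[items[(r + i) % n] for i in range(n)] for r in range(n)]
+     if n % 2 == 1:  # odd n: add mirrored rows to balance order pairs
+         rows += [list(reversed(row)) for row in rows]
+     return rows
+
+ for i, order in enumerate(balanced_latin_square(CONDITIONS), start=1):
+     print(i, " -> ".join(order))
+ # 12 participants cycle through these 6 orders twice.
+ ```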
+
+ The task required both object manipulation (rotating and translating) and marking (pointing and selecting). By combining the manipulation and marking tasks, we wanted to simulate a task that medical professionals would perform during virtual surgery planning, where both object manipulation and marking are needed. Marking is relevant when selecting specific locations and areas of a 3D model, and it requires accuracy to place the marks in the relevant locations. This medical marking task does not differ from regular marking tasks in other contexts as such, but the accuracy requirements are higher. By manipulating the 3D model, the professional has the option to look at the pointed area from different angles to verify its specific location in the 3D environment.
+
+ A satisfaction questionnaire was filled in after each interaction method trial, and after all three trials, a final questionnaire was used to rank the conditions.
+
+ ## 5 RESULTS
+
+ In this section, we report the findings of the study. First, we present the objective results from the data collected during the experiment, and then the subjective results from the questionnaires.
+
+ ### 5.1 Objective results
+
+ The task completion times (Figure 4, top left) include both object manipulation and marking. They showed some variation, but the distributions of median values for each interaction method were similar and there were no significant differences. The completion time varied slightly depending on how much prior VR experience the participant had, but there were no statistically significant differences.
+
+ The number of markings made before task completion varied between the interaction methods (Figure 4, top right). The median values for the Mouse, Hands, and Controller+Stylus conditions were 6.5, 12, and 7 markings, respectively. However, there were no statistically significant differences. Some participants made many markings at a fast pace (2-3 markings per second), leading to a high number of total markings.
+
+ There were some clear differences in final marking accuracy between the interaction methods (Figure 4, bottom). The median values for the Mouse, Hands, and Controller+Stylus methods were 3.2, 5.9, and 4.2 millimeters, respectively. The variability between participants was highest with the Hands method. We found a statistically significant difference between the Mouse and Hands methods (p-value 0.004, Cohen's d 1.178$^{1}$) using a paired t-test with a Bonferroni-corrected p-value limit of 0.017 (= 0.05/3). There were no statistically significant differences between the Mouse and Controller+Stylus methods or the Hands and Controller+Stylus methods.
+
+ ### 5.2 Subjective data
+
+ Friedman tests showed statistically significant differences in daily use (p-value 0.002), interaction naturalness (p-value 0.000), interaction easiness (p-value 0.001), interaction accuracy (p-value 0.007), marking easiness (p-value 0.039), and ranking (p-value 0.000). There were no significant differences in marking naturalness or marking accuracy. In the evaluations of tiredness there were no significant differences either (Figure 5, left). Most participants did not feel tired using any of the methods, but the experiment was rather short.
+
+ In pairwise tests of daily use using the Wilcoxon signed-rank test we found significant differences (Figure 5, right): between the Mouse and Controller+Stylus methods (p-value 0.015, R 0.773$^{2}$) and between the Hands and Controller+Stylus methods (p-value 0.003, R 1.000). There was no statistically significant difference between the Hands and Mouse methods.
+
+ We asked the participants to evaluate both object manipulation and marking separately. In the object manipulation evaluation, there were statistically significant differences in naturalness between Controller+Stylus and Mouse (p-value 0.003, R 1.000) and between Controller+Stylus and Hands (p-value 0.009, R 0.879); there was no statistically significant difference between Mouse and Hands. In object manipulation easiness, Controller+Stylus differed statistically significantly from both Mouse and Hands (p-value 0.003, R 1.000 for both comparisons), see Figure 6, while there was no statistically significant difference between Mouse and Hands. In the manipulation accuracy evaluation, we found a statistically significant difference between the Controller+Stylus and Hands methods (p-value 0.003, R 1.000); there were no statistically significant differences between Mouse and Controller+Stylus or Hands and Mouse. In the object marking evaluation (Figure 7), the only significant difference was between the Controller+Stylus and Mouse methods in easiness (p-value 0.009, R 1.000); there were no statistically significant differences between Hands and Controller+Stylus or Hands and Mouse.
+
+ ---
+
+ $^{1}$ Cohen's d $\geq 0.8$ is considered a large effect size.
+
+ $^{2}$ An R value $\geq 0.5$ is considered a large effect size.
+
+ ---
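+
+ For reference, these effect sizes are conventionally computed as follows (assumed formulas; the paper does not show its calculations): Cohen's d for paired samples is the mean difference divided by the standard deviation of the differences, and the Wilcoxon effect size is $R = |Z| / \sqrt{N}$.
+
+ ```python
+ # Conventional paired-samples effect sizes matching the footnotes.
+ import numpy as np
+ from scipy.stats import norm
+
+ def cohens_d_paired(x, y):
+     """Cohen's d for paired samples: mean(diff) / sd(diff)."""
+     diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
+     return diff.mean() / diff.std(ddof=1)
+
+ def wilcoxon_r(p_two_sided, n):
+     """Wilcoxon effect size R = |Z| / sqrt(N). Z is usually taken from
+     the test statistic itself; recovering it from the two-sided p-value,
+     as done here, is an approximation."""
+     z = norm.isf(p_two_sided / 2.0)
+     return abs(z) / np.sqrt(n)
+
+ print(round(wilcoxon_r(0.05, 12), 3))  # ~0.566
+ ```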
+
+ ![01963e77-a16f-715e-adcc-f9a4af5e6243_5_266_175_1229_834_0.jpg](images/01963e77-a16f-715e-adcc-f9a4af5e6243_5_266_175_1229_834_0.jpg)
+
+ Figure 4: The task completion times for the different conditions (top left). The median values for each participant are rather similar between the methods. Two outlier values (by the same participant, for the Mouse and Hands conditions) are removed from the visualization. The number of markings per five targets (top right). There were some differences between the interaction methods (the median value for Hands was higher than for the other methods), but no significant differences. The marking accuracy (bottom). There were some clear differences between the interaction methods in the final marking accuracy.
+
+ ![01963e77-a16f-715e-adcc-f9a4af5e6243_5_285_1240_1203_403_0.jpg](images/01963e77-a16f-715e-adcc-f9a4af5e6243_5_285_1240_1203_403_0.jpg)
+
+ Figure 5: The evaluation of fatigue (left). None of the methods were found to be particularly tiring. The evaluation of possible daily use (right). Controller+Stylus was rated significantly more suitable for daily use than the other methods.
+
+ Multiple participants commented that the controller interaction felt stable and that it was easy to move and rotate the 3D model with the controller. The participants also commented that holding a physical device whose weight could be felt increased the feeling of naturalness. Not all comments agreed: one participant felt the VR stylus was accurate, while another said it felt clumsy.
+
+ When asked, 11 out of 12 participants ranked Controller+Stylus as the most liked method. The distribution of ranking values is shown in Table 1. The ranking values of the Controller+Stylus method were statistically significantly different from Mouse (p-value 0.008, R 0.885) and Hands (p-value 0.003, R 1.000). There was no statistically significant difference between Mouse and Hands.
+
+ ![01963e77-a16f-715e-adcc-f9a4af5e6243_6_234_179_1321_421_0.jpg](images/01963e77-a16f-715e-adcc-f9a4af5e6243_6_234_179_1321_421_0.jpg)
+
+ Figure 6: The evaluation of interaction method naturalness (left), easiness (middle), and accuracy (right). Controller+Stylus was the most liked method on these features.
+
+ ![01963e77-a16f-715e-adcc-f9a4af5e6243_6_231_745_1323_421_0.jpg](images/01963e77-a16f-715e-adcc-f9a4af5e6243_6_231_745_1323_421_0.jpg)
+
+ Figure 7: The evaluation of marking method naturalness (left), easiness (middle), and accuracy (right). The median values on these features are rather similar, and a significant difference was found only in marking easiness.
+
+ Table 1: The number of mentions of different rankings of the interaction methods when asked for the most liked (1st), the second most liked (2nd), and the least liked (3rd) method.
+
+ <table><tr><td rowspan="2">Condition</td><td colspan="3">Ranking</td></tr><tr><td>1st</td><td>2nd</td><td>3rd</td></tr><tr><td>Mouse</td><td>1</td><td>7</td><td>4</td></tr><tr><td>Hands</td><td>0</td><td>4</td><td>8</td></tr><tr><td>Controller+Stylus</td><td>11</td><td>1</td><td>0</td></tr></table>
+
+ ## 6 Discussion
214
+
215
+ In this study, we were looking for the most feasible interaction method in VR for object manipulation and marking in a medical context. Controller+Stylus method was overall the most suitable for a task that requires both object manipulation and marking. Controller+Stylus method was the most liked in all subjective features, while Mouse and Hands conditions were evaluated very similarly. The smallest number of markings were done with Controller+Stylus, but no significant differences were found. There were statistically significant differences between the methods in daily use, interaction naturalness, and easiness. Controller+Stylus was statistically significantly more accurate in object manipulation than Hands (p-value 0.003 ), and easier to use than Mouse (p-value 0.003). Without earlier experience with the VR stylus, the participants had difficulties in finding the correct button when marking with the stylus. The physical stylus device cannot be seen when wearing the VR headset and the button could not be felt clearly. Even though Controller+Stylus combination was evaluated as natural and the most liked method in this study, the hand-held devices may feel inconvenient [17]. In our study, some participants liked the physical feel of devices. However, our result was based on the subjective opinions of participants, and that might change depending on the use case or devices.
216
+
217
+ There are many possible reasons for the low hand tracking accuracy. The inaccuracy of Hands can be seen in the large number of markings and the large spread in task completion times, as the participants were not satisfied with their first markings. Hands was the only method with which just one participant completed the task with the minimum of 5 markings, whereas with the other methods several participants succeeded with 5 markings. One explanatory factor can be the lack of hand tracking fidelity, which has also been noticed in other studies [17, 42]. In addition, inaccuracy in the human motor system contributes to the inaccuracy of hands [15]. The vision-based hand tracking system that uses a camera on the HMD does not recognize hand gestures reliably enough, so the participant must repeat the same gesture or movement multiple times to succeed. This extra work also increases fatigue with Hands. Even though fatigue was low with all interaction methods, this study did not measure the fatigue of long-term activity. These are clear indications that Hands interaction needs further development before it can be used in tasks that need high marking accuracy. Several earlier studies have reported the inaccuracy of hands compared to controllers [15, 17, 42].
218
+
219
+ Passive haptics were available with Mouse and when marking with the VR stylus. With Hands there was only visual feedback. The lack of any haptic feedback might have affected the marking accuracy as well, because the accuracy was much better with the physical stylus. Li et al. [22] found that at low marking difficulty, a mouse with a 2D display was faster than a kinesthetic force-feedback device in VR. At high marking difficulty, their other VR interface, which used a VR controller with vibrotactile feedback, was better than the 2D interface. They found that a mouse on a 2D display has fast pointing capability, but in our study the task completion times did not vary between Mouse and the other methods. Li et al. also noted that manipulating the viewing angle is more flexible when wearing an HMD than with a mouse on a 2D display: in VR interfaces the participant can rotate the 3D object while changing the viewing angle by moving their head. In our study, all methods used an HMD, so changing the viewing angle was equally flexible.
220
+
221
+ Mouse was a statistically significantly more accurate marking method than Hands. Mouse was not affected by some of the issues noticed with Hands or Controller+Stylus. With Mouse, not being able to see the device during use was not felt to be problematic. There were no sensor fidelity issues with Mouse, and it was a familiar device to all participants. Only the ray that replaced the cursor was an unfamiliar feature and caused some problems. We found that the ray worked well with simple 3D models, but there were many difficulties with complex models, where the viewing angle needed to be exactly right to reach the target. If any part of the 3D model blocked the ray, the target could not be marked. When the target was easy to mark, the accuracy using Mouse was high. It can be stated that Mouse was an accurate method in VR, but Controller+Stylus was measured to be better on all other properties.
222
+
223
+ Both the target and the marking were spheres in the 3D environment. During the study, it was noticed that when a participant made their marking in the same location as the target, the marking sphere disappeared inside the target sphere. This caused uncertainty about whether the marking was lost or in the center of the target. This may have affected the results, as participants needed to remark to be able to see their marking, which was then no longer in the center of the target. In future studies the marking sphere should be designed to be larger than the target sphere and transparent, so that the participant can be sure about the location of both spheres.
224
+
225
+ Our focus was on comparing three different interaction and marking methods and their suitability for the medical marking task. To simplify the experimental setup, the experiment was conducted with simplified medical images, which may have led to optimistic results for the viability of the methods. Even then, there were some problems with the Mouse interaction method. To further confirm that the results also hold for more realistic content, a similar study should be conducted in future work with authentic material, for example original CBCT images in VR instead of the simplified ones.
226
+
227
+ Future research may investigate multimodal interaction methods to support even more natural alternatives. Speech is the primary mode of human communication [30]. Suresh et al. [33] used three voice commands to control gestures of a robotic arm in VR. Voice is a well-suited input method in cases where hands and eyes are continuously busy [15]. Pfeuffer et al. [26] studied gaze as an interaction method together with hand gestures but found that both hand and gaze tracking still lack tracking fidelity. More work is still needed, as Nukarinen et al. [24] found that human factor issues made gaze the least preferred input method in an object selection task in VR.
228
+
229
+ ## 7 CONCLUSION
230
+
231
+ 3D medical images can be viewed in VR environments to plan surgeries with expected results. During the planning process one needs to interact with the 3D models and be able to make markings of high accuracy on them. In this study, we evaluated the feasibility of three different VR interaction methods: Mouse, Hands, and a Controller+Stylus combination. Based on the results, we can state that the Valve Index controller and Logitech VR Ink stylus combination was the most feasible for tasks that require both 3D object manipulation and high marking accuracy in VR. This combination did not have issues with complex 3D models, and sensor fidelity was better than with Hands interaction. Statistically significant differences were found between the controller combination and the other methods.
232
+
233
+ Hand-based interaction was the least feasible for this kind of use according to the collected data. The Hands and Mouse methods were rated almost equally feasible by participants. With the current technology, free-hand usage cannot be recommended for accurate marking tasks. Mouse interaction was more accurate than Controller+Stylus, so in detailed tasks Mouse could replace free-hand interaction. However, the discrepancy between the 2D mouse and the 3D environment needs to be solved before Mouse can be considered a fully viable interaction method in VR.
234
+
235
+ ## REFERENCES
236
+
237
+ [1] F. Argelaguet and C. Andujar. A survey of 3d object selection techniques for virtual environments. Computers & Graphics, 37(3):121-136, 2013.
238
+
239
+ [2] A. Ayoub and Y. Pulijala. The application of virtual reality and augmented reality in oral & maxillofacial surgery. BMC Oral Health, 19(1):1-8, 2019.
240
+
241
+ [3] D. Bachmann, F. Weichert, and G. Rinkenauer. Evaluation of the leap motion controller as a new contact-free pointing device. Sensors, 15(1):214-233, 2015.
242
+
243
+ [4] R. Balakrishnan, T. Baudel, G. Kurtenbach, and G. Fitzmaurice. The rockin'mouse: integral 3d manipulation on a plane. In Proceedings of the ACM SIGCHI Conference on Human factors in computing systems, pp. 311-318, 1997.
244
+
245
+ [5] R. Balakrishnan and G. Kurtenbach. Exploring bimanual camera control and object manipulation in 3d graphics interfaces. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems, pp. 56-62, 1999.
246
+
247
+ [6] M. Baloup, T. Pietrzak, and G. Casiez. Raycursor: A 3d pointing facilitation technique based on raycasting. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2019.
248
+
249
+ [7] A. U. Batmaz, A. K. Mutasim, and W. Stuerzlinger. Precision vs. power grip: A comparison of pen grip styles for selection in virtual reality. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 23-28. IEEE, 2020.
250
+
251
+ [8] J. C. Coelho and F. J. Verbeek. Pointing task evaluation of leap motion controller in 3d virtual environment. Creating the Difference, 78:78-85, 2014.
252
+
253
+ [9] N.-T. Dang. A survey and classification of 3d pointing techniques. In 2007 IEEE international conference on research, innovation and vision for the future, pp. 71-80. IEEE, 2007.
254
+
255
+ [10] S. Esmaeili, B. Benda, and E. D. Ragan. Detection of scaled hand interactions in virtual reality: The effects of motion direction and task complexity. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 453-462. IEEE, 2020.
256
+
257
+ [11] P. M. Fitts. The information capacity of the human motor system in controlling the amplitude of movement. Journal of experimental psychology, 47(6):381, 1954.
258
+
259
+ [12] Y. Guiard. Asymmetric division of labor in human skilled bimanual action: The kinematic chain as a model. Journal of motor behavior, 19(4):486-517, 1987.
260
+
261
+ [13] E. Gusai, C. Bassano, F. Solari, and M. Chessa. Interaction in an immersive collaborative virtual reality environment: a comparison between leap motion and htc controllers. In International Conference on Image Analysis and Processing, pp. 290-300. Springer, 2017.
262
+
263
+ [14] H. Hanken, C. Schablowsky, R. Smeets, M. Heiland, S. Sehner, B. Riecke, I. Nourwali, O. Vorwig, A. Gröbe, and A. Al-Dam. Virtual planning of complex head and neck reconstruction results in satisfactory match between real outcomes and virtual models. Clinical oral investigations, 19(3):647-656, 2015.
266
+
267
+ [15] D. Hannema. Interaction in virtual reality. Interaction in Virtual Reality, 2001.
270
+
271
+ [16] K. Hinckley, R. Pausch, J. C. Goble, and N. F. Kassell. A survey of design issues in spatial input. In Proceedings of the 7th annual ACM symposium on User interface software and technology, pp. 213-222, 1994.
272
+
273
+ [17] Y.-J. Huang, K.-Y. Liu, S.-S. Lee, and I.-C. Yeh. Evaluation of a hybrid of hand gesture and controller inputs in virtual reality. International Journal of Human-Computer Interaction, 37(2):169-180, 2021.
274
+
275
+ [18] P. W. Johnson, S. L. Lehman, and D. M. Rempel. Measuring muscle fatigue during computer mouse use. In Proceedings of 18th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 4, pp. 1454-1455. IEEE, 1996.
276
+
277
+ [19] M. Khamis, C. Oechsner, F. Alt, and A. Bulling. Vrpursuits: interaction in virtual reality using smooth pursuit eye movements. In Proceedings of the 2018 International Conference on Advanced Visual Interfaces, pp. 1-8, 2018.
278
+
279
+ [20] H. Kim and Y. Choi. Performance comparison of user interface devices for controlling mining software in virtual reality environments. Applied Sciences, 9(13):2584, 2019.
280
+
281
+ [21] W. A. König, J. Gerken, S. Dierdorf, and H. Reiterer. Adaptive pointing: implicit gain adaptation for absolute pointing devices. In CHI'09 Extended Abstracts on Human Factors in Computing Systems, pp. 4171-4176. 2009.
282
+
283
+ [22] Z. Li, M. Kiiveri, J. Rantala, and R. Raisamo. Evaluation of haptic virtual reality user interfaces for medical marking on 3d models. International Journal of Human-Computer Studies, 147:102561, 2021.
284
+
285
+ [23] Logitech. Vr ink pilot edition, 2021.
286
+
287
+ [24] T. Nukarinen, J. Kangas, J. Rantala, O. Koskinen, and R. Raisamo. Evaluating ray casting and two gaze-based pointing techniques for object selection in virtual reality. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, pp. 1-2, 2018.
288
+
289
+ [25] J. Petford, M. A. Nacenta, and C. Gutwin. Pointing all around you: selection performance of mouse and ray-cast pointing in full-coverage displays. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-14, 2018.
290
+
291
+ [26] K. Pfeuffer, B. Mayer, D. Mardanbegi, and H. Gellersen. Gaze + pinch interaction in virtual reality. In Proceedings of the 5th Symposium on Spatial User Interaction, pp. 99-108, 2017.
292
+
293
+ [27] D.-M. Pham and W. Stuerzlinger. Is the pen mightier than the controller? a comparison of input devices for selection in virtual and augmented reality. In 25th ACM Symposium on Virtual Reality Software and Technology, pp. 1-11, 2019.
294
+
295
+ [28] L. E. Potter, J. Araullo, and L. Carter. The leap motion controller: a view on sign language. In Proceedings of the 25th Australian computer-human interaction conference: augmentation, application, innovation, collaboration, pp. 175-178, 2013.
296
+
297
+ [29] M. Reymus, A. Liebermann, and C. Diegritz. Virtual reality: an effective tool for teaching root canal anatomy to undergraduate dental students-a preliminary study. International Endodontic Journal, 53(11):1581-1587, 2020.
298
+
299
+ [30] K. Samudravijaya. Automatic speech recognition. Tata Institute of Fundamental Research Archives, 2004.
300
+
301
+ [31] A. Shokri, K. Ramezani, F. Vahdatinia, E. Karkazis, and L. Tayebi. 3d imaging in dentistry and oral tissue engineering. Applications of Biomedical Engineering in Dentistry, pp. 43-87, 2020.
302
+
303
+ [32] J. Sun, W. Stuerzlinger, and B. E. Riecke. Comparing input methods and cursors for 3d positioning with head-mounted displays. In Proceedings of the 15th ACM Symposium on Applied Perception, pp. 1-8, 2018.
304
+
305
+ [33] A. Suresh, D. Gaba, S. Bhambri, and D. Laha. Intelligent multi-fingered dexterous hand using virtual reality (vr) and robot operating system (ros). In International Conference on Robot Intelligence Technology and Applications, pp. 459-474. Springer, 2017.
306
+
307
+ [34] R. J. Teather and W. Stuerzlinger. Pointing at 3d target projections with one-eyed and stereo cursors. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 159-168, 2013.
308
+
309
+ [35] Unity. Unity real-time development platform, 2020. https://unity.com/.
310
+
311
+ [36] Valve. The valve index controller, 2021. https://www.valvesoftware.com/en/index/controllers.
312
+
313
+ [37] Varjo. Varjo vr-2 pro, 2020. https://varjo.com/products/vr-2-pro/.
314
+
315
+ [38] Vive. Steamvr base station 2.0, 2021. https://www.vive.com/eu/accessory/base-station2/.
316
+
317
+ [39] J.-N. Voigt-Antons, T. Kojic, D. Ali, and S. Möller. Influence of hand tracking as a way of interaction in virtual reality on user experience. In 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX), pp. 1-4. IEEE, 2020.
318
+
319
+ [40] P. Wacker, O. Nowak, S. Voelker, and J. Borchers. Evaluating menu techniques for handheld ar with a smartphone & mid-air pen. In 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services, pp. 1-10, 2020.
320
+
321
+ [41] F. Weichert, D. Bachmann, B. Rudak, and D. Fisseler. Analysis of the accuracy and robustness of the leap motion controller. Sensors, 13(5):6380-6393, 2013.
322
+
323
+ [42] L. Yang, J. Huang, T. Feng, W. Hong-An, and D. Guo-Zhong. Gesture interaction in virtual reality. Virtual Reality & Intelligent Hardware, 1(1):84-112, 2019.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BTzgpgtNaGq/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,245 @@
1
+ § COMPARISON OF A VR STYLUS WITH A CONTROLLER, HAND TRACKING AND A MOUSE FOR OBJECT MANIPULATION AND MEDICAL MARKING TASKS IN VIRTUAL REALITY
2
+
3
+ § ABSTRACT
4
+
5
+ For medical surgery planning, virtual reality (VR) provides a new kind of user experience in which 3D images of the operation area can be utilized. Using VR, it is possible to view the 3D models in a more realistic 3D environment, which reduces perception problems and increases spatial understanding. In the present experiment, we compared a mouse, hand tracking, and a combination of a VR stylus and a VR controller as interaction methods in VR. The purpose was to study the viability of the methods for tasks conducted in medical surgery planning in VR. The tasks required interaction with 3D objects and high marking accuracy. The stylus and controller combination was the most preferred interaction method. In the subjective results it was considered the most appropriate, while in the objective results the mouse interaction method was the most accurate.
6
+
7
+ Index Terms: Human-centered computing-Human computer interaction (HCI)-Interaction devices-Pointing devices; Human-centered computing-Human computer interaction (HCI)-Empirical studies in HCI; Human-centered computing-Human computer interaction (HCI)-Interaction paradigms-Virtual reality
8
+
9
+ § 1 INTRODUCTION
10
+
11
+ Virtual reality makes it possible to create computer-generated environments that replace the real world. For example, the user can interact with virtual object models more flexibly, using various interaction methods, than with real objects in the real environment. VR has become a standard technology in research, but it has not been fully exploited in professional use even though its potential has been demonstrated.
12
+
13
+ In the field of medicine, x-ray imaging is routinely used to diagnose diseases and anatomical changes as well as for scientific surveys [31]. In many cases 2D medical images are satisfactory, but they can be complemented with 3D images for more complex operations where detailed understanding of the 3D structures is needed.
14
+
15
+ When planning surgeries, medical doctors, surgeons, and radiologists study 3D images. Viewing 3D images on 2D displays can cause issues in controlling object position, orientation, and scaling. Using VR devices, like head-mounted displays (HMD), 3D images can be perceived more easily when viewed and interacted with in a 3D environment than on a 2D display. For medical professionals to be able to do the same tasks in VR as they do in 2D, the interaction methods need to be studied properly. The interaction method needs to be accurate, reasonable, and suitable for the medical tasks. Because the context is medical work, accuracy is crucial to avoid as many mistakes as possible. König et al. [21] studied adaptive pointing for the accuracy problems caused by hand tremor when pointing at distant objects. The interaction method also needs to be natural, so that doctors would use it in their daily work and could still focus on their primary tasks without paying too much attention to the interaction method. One typical task for doctors is marking anatomical structures and areas on the surface of the 3D model. The marked points define the operative area, or they can be used for training.
16
+
17
+ For 2D content, a mouse is one of the best options for interaction due to its capability to point at small targets with high accuracy and the fact that many users are already very experienced with the device [27]. A mouse cursor can be used for 3D pointing with ray-casting [34], which allows pointing at distant objects as well. The familiarity and accuracy make the mouse a worthy input method in VR, even though it is not a 3D input device. In addition, controllers have been identified as an accurate interaction method [13, 17] and they are typically used in VR environments [22]. Controllers enable direct manipulation, and reaching distant objects works differently than with a mouse and ray-casting. Other devices, like styluses, have been studied in pointing tasks previously [27, 40]. Therefore, we aimed to investigate the performance of a stylus together with a controller in the selected tasks.
18
+
19
+ The cameras and sensors on HMD devices also allow hand tracking without hand-held input devices. Pointing at objects with a finger is a natural way of acting for humans, so hand interaction can be expected to be received well. Hand interaction was selected as one of the conditions based on interviews with medical professionals and their expectations for the supporting technology.
20
+
21
+ We decided to use a marking task to assess the three interaction conditions. The conditions were a standard mouse, bare hands, and a handheld controller with a VR stylus. All methods were used in a VR environment to minimise additional variation between methods and to focus the comparison on the interaction techniques. The use of the HMD also allowed the participants to easily study the target from different directions by moving their head. In the medical marking task the doctor observes the anatomical structures by turning and moving the 3D object while looking for the best location for the mark. The time spent on manipulation is not easily separated from the time spent on the final marking: the doctor decides during the manipulation from which angle and how the marking will be done, which affects the marking time. This made the application of Fitts' law [11] impossible in our study, as it requires that a participant cannot influence target locations.
22
+
23
+ We had 12 participants who were asked to do simplified medical surgery marking tasks. To study the accuracy of the interaction methods, we created an experiment where the 3D model contained a predefined target that was marked (pointed at and selected). In a real medical case the doctor would define the target, but then the accuracy could not be easily measured. This study focused mainly on subjective evaluations of the interaction methods, but also included objective measurements.
24
+
25
+ The paper is organized as follows: First, we go through the background of object manipulation and marking, interaction methods in 3D environments, and jaw osteotomy surgery planning (Section 2). Then, we introduce the compared interaction methods and the used measurements (Section 3) and describe the experiment (Section 4), including apparatus, participants, and study task. Finally, the results are presented (Section 5) and discussed (Section 6).
26
+
27
+ § 2 BACKGROUND
28
+
29
+ § 2.1 OBJECT MANIPULATION AND MARKING
30
+
31
+ Object manipulation, i.e. rotating and translating an object in 3D space, and object marking, i.e. putting a small mark on the surface of an object, have been used as separate tasks when different VR interaction methods have been studied. Sun et al. [32] used a 3D positioning task that involved object manipulation: when a mouse and a controller were compared for precise 3D positioning, the mouse was found to be the more precise input device. Object marking has been studied without manipulation in [27]. Argelaguet and Andujar [1] surveyed 3D object selection techniques in VR, and Dang [9] surveyed 3D pointing techniques. As there is no clear standard technique for 3D object selection or 3D pointing, both works attempt to establish practices for studying new techniques in 3D UIs.
32
+
33
+ In earlier work using bimanual techniques, Balakrishnan and Kurtenbach [5] presented a study where the dominant and non-dominant hands had their own tasks in a virtual 3D scene. The bimanual technique was found to be faster and preferable. People typically use both hands cooperatively to perform the most skilled tasks [5, 12], where the dominant hand is used for the more accurate functions and the non-dominant hand sets the context, such as holding a canvas while the dominant hand draws. The result is optimal when bimanual techniques are designed to utilize the strengths of both the dominant and non-dominant hands.
34
+
35
+ § 2.2 INPUT DEVICES FOR OBJECT MANIPULATION AND MARKING
36
+
37
+ § 2.2.1 MOUSE
38
+
39
+ A mouse is a common, familiar, and accurate device for 2D content, able to point at small targets with high accuracy [27]. The mouse is also a common device for medical surgery planning [22]. Many studies have used a mouse cursor for 3D pointing with ray-casting [6, 8, 22, 27, 34]. The ray-casting technique is easily understood, and it is a solution for reaching objects at a distance [25].
40
+
41
+ Compared to other interaction methods in VR, the issue of the discrepancy between the 2D mouse and a 3D environment has been reported [1], and manipulation in 3D requires a way to switch between dimensions [4]. Balakrishnan et al. [4] presented the Rockin'Mouse for selection in a 3D environment while avoiding hand fatigue. Kim and Choi [20] mentioned that the discrepancy leads to low user immersion. In addition, the use of a mouse usually forces the user to sit down at a table instead of standing. The user can rest their arms on the table while interacting with the mouse, which decreases hand fatigue. Johnson et al. [18] stated that fatigue with mouse interaction appears only after 3 hours.
42
+
43
+ Bachmann et al. [3] found that the Leap Motion controller has a higher error rate and higher movement time than the mouse. Kim and Choi [20] showed in their study that the 2D mouse has high performance in working time, accuracy, ease of learning, and ease of use in VR. Both Bachmann et al. and Kim and Choi found the mouse to be accurate, but on the other hand Li et al. [22] pointed out that in difficult marking tasks a small displacement of the physical mouse can lead to a large displacement on the 3D model in the 3D environment.
44
+
45
+ § 2.2.2 HANDS
46
+
47
+ Hand interaction is a common VR interaction method. Voigt-Antons et al. [39] compared free-hand interaction and controller interaction with different visualizations. Huang et al. [17] compared different interaction combinations of free hands and controllers. Both found that hand interaction has lower precision than controller interaction. With alternative solutions like a Leap Motion controller [28, 41] or wearable gloves [42], hand interaction can be done more accurately. Physical hand movements create a natural and realistic experience of interaction [10, 17], and therefore hand interaction is still an area of interest.
48
+
49
+ § 2.2.3 CONTROLLERS
50
+
51
+ Controllers are the leading control inputs for VR [17]. When using controllers as the interaction method, marking and selecting are usually done with one of the triggers or buttons on the controller. Handheld controllers are described as stable and accurate devices [13, 17]. However, holding extra devices in the hands may become inconvenient if the hands are needed for other tasks between different actions. When interacting with hands or controllers in VR, arm fatigue is one of the main issues [1, 15]. Holding up the arms and carrying the devices also increase arm fatigue.
52
+
53
+ § 2.2.4 VR STYLUS
54
+
55
+ A VR stylus is a penlike handheld device that is used in a VR environment as a controller. The physical appearance of the Logitech VR Ink stylus [23] is close to a regular pen, except that it has buttons which enable different interactions, e.g., selecting, in VR. Batmaz et al. [7] studied the Logitech VR Ink stylus as a selection method in virtual reality. They found that with a precision grip there are no statistical differences in marking when the distance of the target changes. Wacker et al. [40] presented, as one of their designs, a VR stylus for mid-air pointing where selection happened by pressing a button. For object selection, users preferred a 3D pen over a controller in VR [27].
56
+
57
+ § 2.3 JAW OSTEOTOMY SURGERY PLANNING
58
+
59
+ Cone Beam Computed Tomography (CBCT) is a medical imaging technique that produces 3D images that can be used in virtual surgery planning. Compared to earlier techniques used in medical surgery planning, like cast models, virtual planning with CBCT images has extra costs and time requirements [14]. However, the technique offers several advantages for planning accuracy and reliability [31]. CBCT images can be used as 3D objects in VR for surgery planning with an excellent match to real objects [14]. Ayoub and Pulijala [2] reviewed different studies about virtual and augmented reality applications in oral and maxillofacial surgeries.
60
+
61
+ In virtual surgery planning, the procedures for surgery are implemented and planned beforehand. The real surgery is done based on the virtual plan. Common tasks in dental planning are specifying the location of impacted teeth, preventing nerve injuries, or preparing guiding flanges [31]. In VR this can be done by marking critical areas or drawing cutting lines onto the models. Virtual planning can be used in student education as well, where the procedures can be practiced realistically. Reymus et al. [29] found that students understood mouth anatomy better after studying 3D models in a VR environment than from a regular 2D image. Objects can be closer and bigger, and they can move in the depth direction, in a 3D environment compared to a 2D environment [19].
62
+
63
+ Tasks like understanding the 3D object and marking critical areas on it need to be done in medical surgery planning. However, working with 3D objects in a 2D environment makes the task more difficult. Hinckley et al. [16] studied issues in developing effective free-space 3D user interfaces. Appropriate interaction and marking methods help to understand 3D objects and perform the required tasks in VR. In this study, we evaluated three methods for VR object manipulation and marking and examined their performance in simplified medical surgery planning tasks.
64
+
65
+ § 3 METHOD
66
+
67
+ § 3.1 MOUSE
68
+
69
+ In the first interaction method, a regular mouse was used inside a VR environment (Figure 1). In the VR environment there was a visualized mouse model that the participant could move by manipulating the physical mouse, and with it control the direction of a ray starting from the model. The ray was always visible in the Mouse interaction.
70
+
71
+ Mouse was used one-handed, while the other two methods were two-handed. Mouse performed two functions, manipulation and marking, whereas in the other methods these functions were separated into different hands. In addition, Mouse used ray-casting (a ray from the mouse), while the other two methods used direct mid-air object manipulation.
72
+
73
+ <graphics>
74
+
75
+ Figure 1: Mouse interaction method outside VR (left). Mouse marking method inside VR and the study task (right).
76
+
77
+ The participant could rotate the object in three dimensions by moving the mouse while holding the right button. For 3D translations the participant used the scroll wheel and button: scrolling the wheel zooms in and out (translation in Z), and pressing the scroll button while moving the mouse translates the object up-down and sideways (translation in X and Y). Markings were made by pointing at the target with the ray and pressing the left button.
78
+
79
+ Making the real-world mouse visible inside VR does not strictly require video pass-through, even though the mouse was visible in our study: after putting on the headset, the user saw a virtual mouse positioned where the physical mouse was located, so the device could be found and reached. When the user moved the physical mouse sideways, the movement was converted to a horizontal rotation of the beam from the virtual mouse, and when the mouse was moved back and forth, the movement was converted to a vertical rotation of the beam. This way the user can cover a large space, similar to using a mouse with a 2D display. To improve ergonomics, the user could configure the desk and chair for their comfort.
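+
+ As an illustration of this mapping, a minimal Python sketch follows; the gain constant and function names are hypothetical, not taken from the study software:
+
+ ```python
+ # Hypothetical sketch of the mouse-to-ray mapping described above: sideways
+ # mouse motion rotates the beam horizontally (yaw), back-and-forth motion
+ # rotates it vertically (pitch).
+ import numpy as np
+
+ GAIN = 0.25  # degrees of ray rotation per unit of mouse movement (assumed)
+
+ def update_ray(yaw_deg, pitch_deg, dx, dy):
+     """Map 2D mouse deltas (dx, dy) to ray yaw/pitch angles in degrees."""
+     yaw_deg += dx * GAIN
+     pitch_deg = np.clip(pitch_deg - dy * GAIN, -89.0, 89.0)  # avoid flipping
+     return yaw_deg, pitch_deg
+
+ def ray_direction(yaw_deg, pitch_deg):
+     """Convert yaw/pitch into a unit direction vector for ray casting."""
+     yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
+     return np.array([np.cos(pitch) * np.sin(yaw),
+                      np.sin(pitch),
+                      np.cos(pitch) * np.cos(yaw)])
+ ```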
80
+
81
+ § 3.2 HANDS
82
+
83
+ As the second interaction method, the participant used bare hands: the left hand for object manipulation and the right hand for object marking. The participant could pick up the 3D object with a pinch gesture of the left hand to rotate and move it. Marking was done with a virtual pen. In the VR environment the virtual pen was attached to the participant's right palm, near the index finger (Figure 2, right); as the palm moved, the pen moved accordingly. When the virtual pen tip was close to the target, the tip changed its color to green to show that the pen was touching the surface of the object. The mark was put on the surface by bending the index finger and pressing the pen's virtual button. The participant had to keep their palm steady when pressing the button to prevent the pen from moving.
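+
+ A minimal sketch of the touch test and mark placement described above (the threshold value and names are assumptions, not the study's actual parameters):
+
+ ```python
+ # Hypothetical sketch: the pen tip turns green when within touch distance of
+ # the object surface, and a mark is placed only on a confirmed button press.
+ import numpy as np
+
+ TOUCH_DISTANCE = 0.005  # 5 mm touch threshold (assumed value)
+
+ def try_mark(tip_pos, nearest_surface_point, button_pressed, marks):
+     """Return the pen-tip color and append a mark when touching + pressing."""
+     touching = np.linalg.norm(tip_pos - nearest_surface_point) <= TOUCH_DISTANCE
+     if touching and button_pressed:
+         marks.append(nearest_surface_point.copy())  # place mark on the surface
+     return "green" if touching else "white"         # visual feedback color
+ ```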
84
+
85
+ § 3.3 CONTROLLER AND VR STYLUS
86
+
87
+ The third interaction method was based on having a controller in the participant's left hand for object manipulation and a VR stylus in the right hand for marking (Figure 3). The participant grabbed the 3D object with a grab gesture around the controller to rotate and move the object. The markings were made with the physical VR stylus. The VR stylus was visualized in VR, as was the mouse, so the participant knew where the device was located. The participant pointed at the target with the stylus and pressed its physical button to make the mark. The press action was identical to the virtual pen press in the Hands method. There was passive haptic feedback when touching the physical VR stylus, which the virtual pen did not provide.
88
+
89
+ There have been some supporting results for using a mouse in VR [3, 20, 22, 25], but a 2D mouse is not fully compatible with a 3D environment [20]. We studied the ray method with Mouse to compare it against Hands and Controller+Stylus for 3D object marking. We also compared Hands, without any devices, to methods with a device in one or both hands. The marking gesture was designed to be similar in the Hands and Controller+Stylus methods to allow comparing the effect of the devices.
90
+
91
+ § 3.4 MEASUREMENTS AND THE PILOT STUDY
92
+
93
+ The participant was asked to make a marking as close to the target location as possible. We used the Euclidean distance to measure the distance between the target and the participant's marking. The task completion times were measured. The participant was able to remark the target if they were dissatisfied with the current marking. We counted how many remarkings were made to see if any of the interaction methods required more remarking than the others. We thus measured accuracy in two ways: as the distance from the target and as the number of markings the participants were dissatisfied with.
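+
+ A minimal sketch of the two accuracy measures, assuming positions were logged in meters and converted to millimeters (see Section 3.5); the vectors are placeholders:
+
+ ```python
+ # Hypothetical sketch: Euclidean distance from the target (in millimeters)
+ # and the number of markings per target (remarkings signal dissatisfaction).
+ import numpy as np
+
+ def marking_error_mm(target_m, marking_m):
+     """Euclidean distance between target and marking, meters -> millimeters."""
+     return np.linalg.norm(np.asarray(target_m) - np.asarray(marking_m)) * 1000.0
+
+ target = np.array([0.100, 0.250, 0.030])
+ markings = [np.array([0.102, 0.250, 0.031]),   # first, discarded attempt
+             np.array([0.100, 0.251, 0.030])]   # final marking
+
+ print(f"{marking_error_mm(target, markings[-1]):.1f} mm "
+       f"after {len(markings)} markings")
+ ```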
94
+
95
+ A satisfaction questionnaire was filled in after each interaction method trial. There was one question and seven satisfaction statements, evaluated on a Likert scale from 1 (strongly disagree) to 5 (strongly agree). They were grouped so that the question and the first statement were about the overall feeling, and the rest of the statements were about object manipulation and marking separately. The question and statements were:
96
+
97
+ * Would you think to use this method daily?
98
+
99
+ * Your hands are NOT tired.
100
+
101
+ * It was natural to perform the given tasks with this interaction method.
102
+
103
+ * It was easy to handle the 3D objects with this interaction method.
104
+
105
+ * The interaction method was accurate.
106
+
107
+ * The marking method was natural.
108
+
109
+ * It was easy to make the marking with this marking method.
110
+
111
+ * The marking method was accurate.
112
+
113
+ <graphics>
114
+
115
+ Figure 2: Hands interaction method outside VR (left). Hands marking method inside VR and the study task (right).
116
+
117
+ <graphics>
118
+
119
+ Figure 3: Controller interaction method outside VR (left). Stylus marking method inside VR and the study task (right).
120
+
121
+ The statements were designed to measure fatigue, naturalness, and accuracy, as these have been measured in earlier studies [1, 10, 17] as well. Accuracy was also measured from the objective data to see whether the objective and subjective results were consistent. With these statements it was also possible to measure easiness and suitability for daily use, which cannot be derived from the objective data.
122
+
123
+ In the questionnaire there were also open-ended questions about positive and negative aspects of the interaction method. In the end the participant was asked to rank the interaction methods in order from the most liked to the least liked.
124
+
125
+ A pilot study was arranged to ensure that the tasks and the study procedure were feasible. Based on the findings of the pilot study, we modified the introduction to be more specific and added a mention of the measured features. We also added the ability to rotate the 3D object even after the mouse ray moved off the object. The speed of the mouse ray in the VR environment was increased to better match the movements of the real mouse.
126
+
127
+ § 3.5 STATISTICAL MEASURES
128
+
129
+ We used two different statistical tests to analyze possible statistically significant differences between parameter sets. For objective data (completion times, number of markings, and accuracy) we used the paired t-test. For data from the evaluation questionnaires (fatigue, daily use, naturalness, easiness, and subjective accuracy) we first used the Friedman test to see whether any statistically significant differences appeared, and then the Wilcoxon signed-rank test, as it does not assume the numbers to be on a ratio scale or normally distributed.
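+
+ A minimal sketch of this pipeline with scipy follows; the arrays are random placeholders for the twelve per-participant measurements, and Cohen's d is included for the objective comparisons:
+
+ ```python
+ # Hypothetical sketch of the analysis: paired t-test for objective data,
+ # Friedman omnibus test for questionnaire data, then pairwise Wilcoxon
+ # signed-rank tests with a Bonferroni-corrected alpha of 0.05 / 3.
+ import numpy as np
+ from scipy.stats import ttest_rel, friedmanchisquare, wilcoxon
+
+ rng = np.random.default_rng(0)             # placeholder data, 12 participants
+ mouse, hands, stylus = rng.normal(size=(3, 12))
+
+ # Objective data (e.g., marking accuracy): paired t-test and Cohen's d.
+ t, p = ttest_rel(mouse, hands)
+ diff = mouse - hands
+ cohens_d = diff.mean() / diff.std(ddof=1)  # d >= 0.8 is a large effect
+
+ # Questionnaire data: Friedman omnibus test, then pairwise follow-ups.
+ chi2, p_omnibus = friedmanchisquare(mouse, hands, stylus)
+ if p_omnibus < 0.05:
+     alpha = 0.05 / 3                       # Bonferroni over three pairs
+     for a, b, name in [(mouse, hands, "M-H"), (mouse, stylus, "M-S"),
+                        (hands, stylus, "H-S")]:
+         _, p_pair = wilcoxon(a, b)
+         print(name, "significant" if p_pair < alpha else "n.s.")
+ ```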
130
+
131
+ The study software recorded times in milliseconds and distances in meters. To clarify the analysis, we converted these to seconds and millimeters.
132
+
133
+ § 4 EXPERIMENT
134
+
135
+ § 4.1 PARTICIPANTS
136
+
137
+ We recruited 12 participants for the study. The number of participants was decided based on a power analysis for the paired t-test and the Wilcoxon signed-rank test, assuming a large effect size, a power level of 0.8, and an alpha level of 0.05. The post hoc calculated effect sizes (Cohen's d for the paired t-test or the R value for the Wilcoxon signed-rank test) are reported together with the p-values in Section 5 for comparison with the assumed large effect size. Ten of the participants were university students and two were full-time employees in fields not related to medicine or dentistry. The ages varied from 21 to 30 years; the mean age was 25 years. There were 6 female and 6 male participants. Earlier VR experience was asked on a scale from 0 to 5; the mean was 1.75, and two participants did not have any earlier experience. One participant was left-handed but was used to using the mouse with the right hand; the other participants were right-handed.
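+
+ A sketch of such an a priori power analysis with statsmodels; note that the required sample size depends directly on the assumed effect size (under these settings, d = 0.8 yields roughly 15 participants, while d around 0.9-1.0 yields roughly 10-12, so the exact assumption matters):
+
+ ```python
+ # Hypothetical sketch of the a priori power analysis for a paired t-test
+ # (two-sided, power 0.8, alpha 0.05), sweeping the assumed effect size.
+ from statsmodels.stats.power import TTestPower
+
+ analysis = TTestPower()
+ for d in (0.8, 0.9, 1.0):
+     n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8,
+                              alternative="two-sided")
+     print(f"d = {d}: n = {n:.1f} participants")
+ ```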
138
+
139
+ § 4.2 APPARATUS
140
+
141
+ § 4.2.1 SOFTWARE, HARDWARE, AND HAND TRACKING
142
+
143
+ The experiment software was built using Unity [35]. With all methods we used the Varjo VR2 Pro headset [37], which has an integrated vision-based hand tracking system that was used for the Hands interaction. Hands were tracked by an Ultraleap Stereo IR 170 sensor mounted on the Varjo VR2 Pro. For Controller+Stylus, we used the Valve Index controller [36] together with the Logitech VR Ink stylus [23]. These were tracked by SteamVR 2.0 base stations [38] around the experiment area.
144
+
145
+ § 4.2.2 OBJECT MANIPULATION AND OBJECT MARKING
146
+
147
+ The study task combined two phases: an object manipulation phase, where the object was rotated and translated in 3D space, and an object marking phase, where a small mark was put on the surface of the object. In the object manipulation phase the participant either selected the 3D object with the mouse ray or pinched or grabbed it with a hand gesture. The 3D objects did not have any physics and floated in mid-air. By rotating and translating the object the participant could view it from different angles; the participant could also move their head to change their point of view.
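+
+ A minimal sketch of grab-style manipulation, assuming tracked hand/controller poses as position vectors plus scipy rotations (the names are illustrative, not the study implementation):
+
+ ```python
+ # Hypothetical sketch of grab-based manipulation: on grab, store the object's
+ # pose relative to the hand; while held, reapply that offset every frame so
+ # the object follows the hand's translation and rotation.
+ import numpy as np
+ from scipy.spatial.transform import Rotation as R
+
+ def on_grab(hand_pos, hand_rot, obj_pos, obj_rot):
+     """Store the object's pose in the hand's local frame at grab time."""
+     rel_rot = hand_rot.inv() * obj_rot
+     rel_pos = hand_rot.inv().apply(obj_pos - hand_pos)
+     return rel_pos, rel_rot
+
+ def while_held(hand_pos, hand_rot, rel_pos, rel_rot):
+     """Each frame: object pose = current hand pose composed with the offset."""
+     return hand_pos + hand_rot.apply(rel_pos), hand_rot * rel_rot
+
+ # Example: grab an object, then rotate the hand by 30 degrees around Y.
+ rel = on_grab(np.zeros(3), R.identity(), np.array([0.2, 0.0, 0.3]), R.identity())
+ pos, rot = while_held(np.zeros(3), R.from_euler("y", 30, degrees=True), *rel)
+ ```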
148
+
149
+ Instead of only pointing at the target, the marking needed to be confirmed. This allowed us to measure the marking accuracy and whether the user understood the 3D target's location relative to the pointing device. The participant could either release the 3D object in mid-air or hold it in their hand when Hands or Controller+Stylus was used in the object marking task. The marking was done either by pointing with the mouse ray and clicking the left button, by touching the target with the virtual pen and marking with a hand gesture, or by touching and marking with the VR stylus.
150
+
151
+ § 4.3 PROCEDURE
152
+
153
+ First, the participant was introduced to the study and asked to read and sign a consent form and fill in a background information form. For all conditions, the facilitator demonstrated the system functions and controls. Each participant had an opportunity to practice before every condition. The practice task was to move and rotate a cube with several target spheres, and to mark those targets as many times as needed to get to know both the interaction and the marking methods. After the participant felt confident with the method, they were asked to press the Done button, and the real study task appeared.
154
+
155
+ The participant was asked to find and mark a hidden target on the surface of each 3D object model. The target was visible the whole time, whereas the marking was created by the participant. When the target was found, it was first pointed at and then marked. The aim was to place the participant's mark (a yellow sphere) inside the target sphere (red) (see Figures 1 right, 2 right, and 3 right). Each 3D object had one target on it, and the task was repeated five times per condition. The order of the 3D objects was the same for all participants: lower jaw, heart, skull, tooth, and skull. The order of the interaction methods was counter-balanced between participants using balanced Latin squares to compensate for possible learning effects. The target locations on the 3D objects were predefined and presented in the same order to all participants.
156
+
157
+ The task required both object manipulation (rotating and translating) and marking (pointing and selecting). By combining the manipulation and marking tasks, we wanted to simulate a task that medical professionals would do during virtual surgery planning. Both object manipulation and marking are needed by medical professionals. Marking is relevant when selecting specific locations and areas of a 3D model, and it requires accuracy to place the marks in the relevant locations. This medical marking task does not differ from regular marking tasks in other contexts as such, but the accuracy requirements are higher. By manipulating the 3D model, the professional can look at the pointed area from different angles to verify its specific location in the 3D environment.
158
+
159
+ A satisfaction questionnaire was filled in after each interaction method trial, and after all three trials a questionnaire was used to rank the conditions.
160
+
161
+ § 5 RESULTS
162
+
163
+ In this section, we report the findings of the study. First, we present the objective results from data collected during the experiment, and then the subjective results from the questionnaires.
164
+
165
+ § 5.1 OBJECTIVE RESULTS
166
+
167
+ The task completion times (Figure 4, top left) include both object manipulation and marking. They showed some variation, but the distributions of median values for each interaction method were similar, and there were no significant differences. The completion time varied slightly depending on how much prior VR experience the participant had, but there were no statistically significant differences.
168
+
169
+ The number of markings done before task completion varied between the interaction methods (Figure 4, top right). The median values for the Mouse, Hands, and Controller+Stylus conditions were 6.5, 12, and 7 markings, respectively. However, there were no statistically significant differences. Some participants made many markings at a fast pace (2-3 markings per second), leading to a high number of total markings.
170
+
171
+ There were some clear differences in final marking accuracy between the interaction methods (Figure 4, bottom). The median values for the Mouse, Hands, and Controller+Stylus methods were 3.2, 5.9, and 4.2 millimeters, respectively. The variability between participants was highest with the Hands method. We found a statistically significant difference between the Mouse and Hands methods (p-value 0.004, Cohen's d 1.178$^1$) using a paired t-test with a Bonferroni-corrected p-value limit of 0.017 (= 0.05/3). There were no statistically significant differences between the Mouse and Controller+Stylus methods or the Hands and Controller+Stylus methods.
172
+
173
+ § 5.2 SUBJECTIVE DATA
174
+
175
+ Friedman tests showed statistically significant differences in daily use (p-value 0.002), interaction naturalness (p-value 0.000), interaction easiness (p-value 0.001), interaction accuracy (p-value 0.007), marking easiness (p-value 0.039), and ranking (p-value 0.000). There were no significant differences in marking naturalness or marking accuracy. In the evaluations of tiredness there were no significant differences (Figure 5, left). Most participants did not feel tired using any of the methods, but the experiment was rather short.
176
+
177
+ In pairwise tests of daily use using the Wilcoxon signed-rank test we found significant differences (Figure 5, right): between the Mouse and Controller+Stylus methods (p-value 0.015, R 0.773$^2$) and between the Hands and Controller+Stylus methods (p-value 0.003, R 1.000). There was no statistically significant difference between the Hands and Mouse methods.
178
+
179
+ We asked the participants to evaluate both object manipulation and marking separately. In the object manipulation evaluation, there were statistically significant differences in naturalness between Controller+Stylus and Mouse (p-value 0.003, R 1.000) and between Controller+Stylus and Hands (p-value 0.009, R 0.879); there was no statistically significant difference between Mouse and Hands. In object manipulation easiness, Controller+Stylus differed statistically significantly from both Mouse and Hands (p-value 0.003, R 1.000 for both pairs), see Figure 6; there was no statistically significant difference between Mouse and Hands. In the manipulation accuracy evaluation we found a statistically significant difference between the Controller+Stylus and Hands methods (p-value 0.003, R 1.000); there were no statistically significant differences between Mouse and Controller+Stylus or Hands and Mouse. In the object marking evaluation (Figure 7), the only significant difference was between the Controller+Stylus and Mouse methods in easiness (p-value 0.009, R 1.000); there were no statistically significant differences between Hands and Controller+Stylus or Hands and Mouse.
180
+
181
+ ${}^{1}$ Cohen’s $\mathrm{d} \geq {0.8}$ is considered a large effect size
182
+
183
+ $^2$ An R value $\geq 0.5$ is considered a large effect size
184
+
185
+ <graphics>
186
+
187
+ Figure 4: The task completion times for different conditions (top left). The median values for each participant are rather similar between the methods. There were two outlier values (by the same participant, for Mouse and Hands conditions) that are removed from the visualization. The number of markings per five targets (top right). There were some differences between the interaction methods (the median value for Hands was higher than for the other methods), but no significant differences. The marking accuracy (bottom). There were some clear differences between the interaction methods in the final marking accuracy.
188
+
189
+ <graphics>
190
+
191
+ Figure 5: The evaluation of fatigue (left). None of the methods were found to be particularly tiring. The evaluation of possible daily use (right). Controller+Stylus was rated significantly more suitable for daily use than the other methods.
192
+
193
+ Multiple participants commented that the controller interaction felt stable and that it was easy to move and rotate the 3D model with the controller. Participants also commented that holding a physical device whose weight could be felt increased the feeling of naturalness. Not all comments agreed: one participant felt the VR stylus was accurate, while another said it felt clumsy.
194
+
195
+ When asked, 11 out of 12 participants ranked Controller+Stylus as the most liked method. The distribution of ranking values is shown in Table 1. The ranking values of the Controller+Stylus method were statistically significantly different from Mouse (p-value 0.008, R 0.885) and Hands (p-value 0.003, R 1.000). There was no statistically significant difference between Mouse and Hands.
196
+
197
+ <graphics>
198
+
199
+ Figure 6: The evaluation of interaction method naturalness (left), easiness (middle), and accuracy (right). Controller+Stylus was rated the highest on these features.
200
+
201
+ <graphics>
202
+
203
+ Figure 7: The evaluation of marking method naturalness (left), easiness (middle), and accuracy (right). Median values for these features are rather similar, and a significant difference was found only in marking easiness.
204
+
205
+ Table 1: The number of mentions of different rankings of the interaction methods when asked for the most liked (1st), the second most liked (2nd), and the least liked (3rd) method.
206
+
207
+ Condition          1st  2nd  3rd
+ Mouse                1    7    4
+ Hands                0    4    8
+ Controller+Stylus   11    1    0
224
+
225
+ § 6 DISCUSSION
226
+
227
+ In this study, we were looking for the most feasible interaction method in VR for object manipulation and marking in a medical context. The Controller+Stylus method was overall the most suitable for a task that requires both object manipulation and marking. It was the most liked in all subjective features, while the Mouse and Hands conditions were evaluated very similarly. The smallest number of markings was made with Controller+Stylus, but no significant differences were found. There were statistically significant differences between the methods in daily use, interaction naturalness, and easiness. Controller+Stylus was statistically significantly more accurate in object manipulation than Hands (p-value 0.003), and easier to use than Mouse (p-value 0.003). Without earlier experience with the VR stylus, the participants had difficulties finding the correct button when marking with the stylus: the physical stylus cannot be seen when wearing the VR headset, and the button could not be felt clearly. Even though the Controller+Stylus combination was evaluated as natural and was the most liked method in this study, hand-held devices may feel inconvenient [17]. In our study, some participants liked the physical feel of the devices. However, this result was based on the subjective opinions of participants and might change depending on the use case or devices.
228
+
229
+ There are many possible reasons for the low hand tracking accuracy. The inaccuracy of Hands can be seen in the large number of markings and the large spread in task completion times, as the participants were not satisfied with their first markings. Hands was the only method with which just one participant completed the task with the minimum of 5 markings, whereas with the other methods several participants succeeded with 5 markings. One explanatory factor can be the lack of hand tracking fidelity, which has also been noticed in other studies [17, 42]. In addition, inaccuracy in the human motor system contributes to the inaccuracy of hands [15]. The vision-based hand tracking system that uses a camera on the HMD does not recognize hand gestures reliably enough, so the participant must repeat the same gesture or movement multiple times to succeed. This extra work also increases fatigue with Hands. Even though fatigue was low with all interaction methods, this study did not measure the fatigue of long-term activity. These are clear indications that Hands interaction needs further development before it can be used in tasks that need high marking accuracy. Several earlier studies have reported the inaccuracy of hands compared to controllers [15, 17, 42].
230
+
231
+ Passive haptics were available with Mouse and when marking with the VR stylus. With Hands there was only visual feedback. The lack of any haptic feedback might have affected the marking accuracy as well, because the accuracy was much better with the physical stylus. Li et al. [22] found that at low marking difficulty, a mouse with a 2D display was faster than a kinesthetic force-feedback device in VR. At high marking difficulty, their other VR interface, which used a VR controller with vibrotactile feedback, was better than the 2D interface. They found that a mouse on a 2D display has fast pointing capability, but in our study the task completion times did not vary between Mouse and the other methods. Li et al. also noted that manipulating the viewing angle is more flexible when wearing an HMD than with a mouse on a 2D display: in VR interfaces the participant can rotate the 3D object while changing the viewing angle by moving their head. In our study, all methods used an HMD, so changing the viewing angle was equally flexible.
232
+
233
+ Mouse was a statistically significantly more accurate marking method than Hands. Mouse was not affected by some of the issues noticed with Hands or Controller+Stylus. With Mouse, not being able to see the device during use was not felt to be problematic. There were no sensor fidelity issues with Mouse, and it was a familiar device to all participants. Only the ray that replaced the cursor was an unfamiliar feature and caused some problems. We found that the ray worked well with simple 3D models, but there were many difficulties with complex models, where the viewing angle needed to be exactly right to reach the target. If any part of the 3D model blocked the ray, the target could not be marked. When the target was easy to mark, the accuracy using Mouse was high. It can be stated that Mouse was an accurate method in VR, but Controller+Stylus was measured to be better on all other properties.
234
+
235
+ Both the target and the marking were spheres in the 3D environment. During the study, it was noticed that when a participant made their marking in the same location as the target, the marking sphere disappeared inside the target sphere. This caused uncertainty about whether the marking was lost or in the center of the target. This may have affected the results, as participants needed to remark to be able to see their marking, which was then no longer in the center of the target. In future studies the marking sphere should be designed to be larger than the target sphere and transparent, so that the participant can be sure about the location of both spheres.
236
+
237
+ Our focus was on comparing three different interaction and marking methods and their suitability for the medical marking task. To simplify the experimental setup, the experiment was conducted with simplified medical images, which may have led to optimistic results for the viability of the methods. Even then, there were some problems with the Mouse interaction method. To further confirm that the results also hold for more realistic content, a similar study should be conducted in future work with authentic material, for example original CBCT images in VR instead of the simplified ones.
238
+
239
+ Future research may investigate multimodal interaction methods to support even more natural alternatives. Speech is the primary mode of human communication [30]. Suresh et al. [33] used three voice commands to control gestures of a robotic arm in VR. Voice is a well-suited input method in cases where hands and eyes are continuously busy [15]. Pfeuffer et al. [26] studied gaze as an interaction method together with hand gestures but found that both hand and gaze tracking still lack tracking fidelity. More work is still needed, as Nukarinen et al. [24] found that human-factor issues made gaze the least preferred input method in an object selection task in VR.
240
+
241
+ § 7 CONCLUSION
242
+
243
+ 3D medical images can be viewed in VR environments to plan surgeries with expected results. During the planning process, one needs to interact with the 3D models and be able to make highly accurate markings on them. In this study, we evaluated the feasibility of three different VR interaction methods in virtual reality: Mouse, Hands, and the Controller+Stylus combination. Based on the results, we can state that the Valve Index controller and Logitech VR Ink stylus combination was the most feasible for tasks that require both 3D object manipulation and high marking accuracy in VR. This combination did not have issues with complex 3D models, and its sensor fidelity was better than with Hands interaction. Statistically significant differences were found between the controller combination and the other methods.
244
+
245
+ Hand-based interaction was the least feasible for this kind of use according to the collected data. Participants rated the Hands and Mouse methods as almost equally feasible. With current technology, free-hand usage cannot be recommended for accurate marking tasks. Mouse interaction was more accurate than Controller+Stylus, and in detailed tasks Mouse could replace free-hand interaction. However, the discrepancy between the 2D mouse and the 3D environment needs to be solved before Mouse can be considered a viable interaction method in VR.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BU9lJWKNTG9/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,421 @@
1
+ A Visual Tool for Interactive Privacy Analysis and Preservation on Order-Dynamic Tabular Data
2
+
3
+ ![01963e7b-a373-7132-b668-9261e4db307d_0_223_393_1352_698_0.jpg](images/01963e7b-a373-7132-b668-9261e4db307d_0_223_393_1352_698_0.jpg)
4
+
5
+ Figure 1: Tabular Privacy Assistant (TPA), a visual tool for the risk analysis and privacy preservation of tabular data with dynamic attribute order. (a) A widget that allows personalized attribute-order setting and dynamic adjustment. (b) Statistics of different attributes for overall distribution analysis. (c) The main view for tabular data presentation (a box plot denotes an abstract of several items) and interactive privacy enhancement (e.g., choosing five items to merge). (d) The privacy risk tree under the current attribute order (red: items breaching K-anonymity). (e) Historical privacy enhancement operations (allowing backtracking and comparison). (f) Data utility dynamics during interactions.
6
+
7
+ ## Abstract
8
+
9
+ The practice of releasing individual data, usually in tabular form, carries an obligation to prevent privacy leakage. By rendering privacy risks, visualization techniques have greatly facilitated user-friendly data sanitization. Yet we point out, for the first time, that the attribute order (i.e., schema) of tabular data inherently determines the risk situation and the output utility, while being ignored in previous efforts. To mitigate this gap, this work proposes the design and pipeline of a visual tool (TPA) for nuanced privacy analysis and preservation on order-dynamic tabular data. By adapting the data cube structure as a flexible backbone, TPA manages to support real-time risk analysis in response to attribute-order adjustment. Novel visual designs, i.e., data abstracts, a risk tree, and integrated privacy enhancement, are developed to explore data correlations and acquire privacy awareness. We demonstrate TPA's effectiveness with two case studies on the prototype and qualitatively discuss the pros and cons with domain experts for future improvement.
10
+
11
+ Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods
12
+
13
+ ## 1 INTRODUCTION
14
+
15
+ We are all providers and beneficiaries of the collection and release of individual data. Generally maintained as multi-attribute tables, the collected data can be used in various learning, statistics, and decision-making tasks (e.g., disease diagnosis, product recommendation). Alongside the well-known benefits, privacy issues in the publishing of data have raised massive concerns recently, as more and more real-world safety violations caused by data leakage and abuse are witnessed [9, 34] and regulations (e.g., GDPR) are promulgated.
16
+
17
+ The privacy risk stems from the fact that individual identity, although usually anonymized, is correlated with and may be re-identified by the other seemingly harmless attributes. As a result, data holders (e.g., organizations, companies) are obligated to properly sanitize data before releasing it. Research communities have responded to this critical requirement with many privacy protection techniques, including anonymity [21], differential privacy [10], and synthetic data mixture [1, 42]. With such a technical basis, visualization has recently been introduced to facilitate illustrative, understandable, and easy-to-use privacy analysis tools on behalf of the users [6, 7, 37-39, 41]. For example, in [39], visual presentations of privacy exposure level and utility preservation degree are provided for detecting and mitigating privacy issues in tabular data.
18
+
19
+ Previous visual methods for privacy analysis build on the setting of a fixed attribute order, i.e., the target table has fixed columns. However, we find that the currently unexplored attribute order (i.e., schema) inherently determines the privacy risk situation and the output utility (detailed analysis in § 3.2). For example, when checking K-anonymity privacy constraints [36] on a sheet, whereas we may find a privacy breach on the 3rd attribute and have 5 values changed during protection under the order 'Age, Work, Disease', we would face a totally different (thornier) privacy context, such as a privacy breach on the 1st attribute with 10 values changed, under the order 'Work, Disease, Age'. As a result, randomly choosing an attribute order, as the existing proposals do, may unfortunately lead to over-protection and unnecessary utility losses.
20
+
21
+ We are thus motivated to design a flexible visual tool (TPA) that supports and explores order adjustment for nuanced (user-specific, reactive) privacy investigation. The most challenging part of a dynamic order is that risk analysis (e.g., equivalent class parsing) must be dynamically re-performed according to the new attribute order. This can be a disaster for existing implementations, as it involves aggregation calculations for all combinations under additive prefixes, especially when the sheet owns vast amounts of data items and many attributes, implying significant interaction latency. As a remedy, we adapt the data cube structure with flexible pre-aggregation to organize the table and use an operation tree to handle order adjustment in real time (§ 4.1). Additionally, we present a data abstract function for statistically analyzing attribute correlation (§ 5.3) and provide fine-grained utility quantification that estimates the differential impact of each privacy-preserving operation (§ 4.2).
22
+
23
+ Combined with various privacy enhancement technologies, TPA guides data holders on the risks in their data and prompts them with the utility losses of preserving operations. The main contributions are as follows:
24
+
25
+ - We identify the impact of tabular attribute order on privacy analysis, utility loss, and processing cost. We propose a new tool to explore this property, adopting the data cube to guarantee real-time interaction.
26
+
27
+ - We leverage multi-dimensional value distance to measure utility change at the back-end. We use abstract extraction for inter-attribute relationship analysis and design an intuitive risk tree that is semantically bonded with data items for interactive privacy analysis and preservation at the front-end.
28
+
29
+ - We implement the prototype of TPA and evaluate its effectiveness with two use cases from the insurance and medical domains, respectively. A qualitative interview points out the pros and cons of TPA from the perspective of domain experts.
30
+
31
+ ## 2 RELATED WORK
32
+
33
+ In this section, we provide the background of privacy preservation and review the related visualization literature.
34
+
35
+ ### 2.1 Privacy Preserving Techniques
36
+
37
+ Data providers sanitize data before making it public. There are three dominant technologies:
38
+
39
+ Anonymity method. The most widely used technique for dealing with linking attacks is k-anonymity [36], one of the most representative methods. K-anonymity calls all records with the same quasi-identifier an equivalence class and requires that each equivalence class have at least $k$ records. K-anonymity prevents attackers from identifying users by quasi-identifiers with a confidence higher than $\frac{1}{k}$ . However, it cannot prevent homogeneity attacks: if the sensitive attributes in an equivalence class are identical, attackers can still infer the sensitive information. Hence, l-diversity was proposed [25]. If a sensitive attribute of an equivalence class has at least $l$ well-represented values, the equivalence class is said to satisfy l-diversity; if all equivalence classes satisfy it, the dataset is considered to meet l-diversity. If the distance between the distribution of the sensitive attribute in the equivalence class and its distribution in the whole dataset does not exceed a threshold $t$ , the class is considered to meet t-closeness [22]. Unlike the first two methods, t-closeness considers the overall distribution of data rather than specific counts, which can balance privacy preservation and data utility. In addition, there are many other variants based on these three methods [20, 35]. However, anonymity methods are parameter-sensitive and apply only to specific constraints.
40
+
41
+ Differential Privacy. Differential privacy [10, 11, 27] is widely used and avoids a disadvantage of anonymity methods (being applicable only against attackers with specific background knowledge). If the absence of a single data item does not significantly affect the output, the function conforms to the differential privacy definition. For example, if a query over 100 items returns essentially the same result as the same query over 99 of them, an attacker has no way to learn anything about the 100th item. The core idea of differential privacy is therefore that, for two datasets differing in only one record, the probability of any output is almost the same.
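+
+ For illustration, the following minimal Python sketch (not part of TPA; the function name and parameters are our own) applies the classic Laplace mechanism to a counting query, whose sensitivity is 1:
+
+ ```python
+ import numpy as np
+
+ def laplace_count(records, predicate, epsilon=1.0):
+     """Laplace mechanism for a counting query: a count changes by at most 1
+     when one record is added or removed (sensitivity = 1), so Laplace noise
+     with scale 1/epsilon makes neighboring outputs nearly equally likely."""
+     true_count = sum(1 for r in records if predicate(r))
+     return true_count + np.random.laplace(0.0, 1.0 / epsilon)
+
+ # Querying 100 items and the same sheet minus one item yields
+ # statistically indistinguishable answers.
+ sheet = [{"age": a} for a in range(100)]
+ print(laplace_count(sheet, lambda r: r["age"] >= 50))
+ print(laplace_count(sheet[:-1], lambda r: r["age"] >= 50))
+ ```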
42
+
43
+ Synthetic Data. The intuitive advantage of synthetic data is that it is 'artificial', so it does not contain real information. Synthesizing data has also been proposed to protect published data from traditional attacks [1, 29, 42]. Accordingly, many studies [2, 4, 33] work on the similarity between real and synthetic records to measure privacy leakage in synthetic datasets. True, these techniques avoid exposing real data, but as Stadler argues [33], such studies seriously overestimate the achieved privacy protection and cannot always prevent attacks. Synthetic data is far from the holy grail of privacy-preserving data publishing.
44
+
45
+ ### 2.2 Privacy Visualization
46
+
47
+ Privacy preserving is a part of data processing, and visualization plays a key role in data analysis and processing. Recent literature shows that visualization is gaining momentum in the domain of privacy preservation. Much work has assimilated and expanded the concepts of privacy and data mining, analyzed how to reduce privacy leaks while maintaining utility, and provided preserving pipelines.
48
+
49
+ Visualization in data analysis. Data analysis [23, 40] mainly studies the relationships between samples from the perspectives of distribution, correlation, and clustering. Many visualization tools, such as Hierarchical Cluster Explorer [32], PermutMatrix [5] and Clustergrammer [14], are used to analyze the relationships between samples.
50
+
51
+ Elmqvist and Fekete propose several aggregation designs that aggregate data and convey information about the underlying data [13]. Aggregation can better reflect the statistical information of the data and hide the visual interference caused by individual differences. Aggregation visualization techniques are used more often on parallel coordinates, analyzing the utility of a cluster using summaries of histogram statistics [18].
52
+
53
+ Tables are the main way to represent binary relationships in data. Tabular visualization [15, 16, 30] is extremely scalable because the cells in a table can be divided into many pixels to show more information about the data. Taggle has become one of the most popular tools [15]; it is an item-centric (cells in the sheet) visualization technique that provides a seamless combination of details through data-driven aggregation.
54
+
55
+ Visualization for privacy analysis. In recent years, a growing number of studies have focused on visualization-specific contributions to privacy preservation. Chou et al. proposed a visualization tool to help avoid privacy risks in visualization, and designed anonymity-based visual methods for social network graphs [6] and event sequence data [7]. GraphProtector [38] guides users in protecting privacy through interactive graph visualizations of the connections between sensitive and non-sensitive nodes, letting them observe structural changes in the graph and the resulting utility.
56
+
57
+ There are also studies [3] that analyze how existing privacy-preserving visualizations affect data and how effective they are. Dasgupta et al. analyze the disclosure risks associated with vulnerabilities in privacy-preserving parallel coordinates and scatter plots [8] and present a case study to demonstrate the effectiveness of their model. Zhang et al. investigate visual analysis under differential privacy [43]. They analyze the effectiveness of task-based visualization techniques and propose a dichotomy model to define and measure the success of tasks.
58
+
59
+ Visualization for privacy preserving. A preserving pipeline is designed to provide users with a complete processing framework from analysis to protection. Xiao et al. proposed a visualization tool named VISEE [41] to help protect privacy when sharing sensor data. VISEE makes a trade-off between utility and privacy by visualizing the degree of mutual information between different pairs of variables. Overlook [37] was developed for differentially private publishing of big data. It allows data analysts and administrators to explore noised data with acceptable delays, while ensuring query accuracy comparable to other synopsis-based systems. Wang et al. [39] propose the UPD-Matrix (utility preservation degree matrix) and PER-Tree (privacy exposure risk tree) and developed a visualization tool for multi-attribute tabular data based on them. It provides a five-step pipeline of user interaction and iterative processing of data.
60
+
61
+ These visualization tools are designed to help users troubleshoot potential risks. However, most of them are based on a single exploration domain or automatic algorithms. We find that different schemas differ greatly in risk analysis and handling (§ 3.2). Our approach allows users to explore tabular data under different attribute orders more flexibly and supports privacy-preserving operations at different granularities.
62
+
63
+ ![01963e7b-a373-7132-b668-9261e4db307d_2_155_1039_713_223_0.jpg](images/01963e7b-a373-7132-b668-9261e4db307d_2_155_1039_713_223_0.jpg)
64
+
65
+ Figure 2: Aggregation for privacy analysis under a schema example. (a) Original sheet. (b) Reordered records by schema. (c) Privacy-enhanced sheet based on merging attribute values.
66
+
67
+ ## 3 Preliminaries, Motivations, and Requirements
68
+
69
+ For ease of exposition, we first denote the relevant entities: 1) individuals whose information is recorded in a sheet; one individual can correspond to several items. 2) Data holders $^{1}$ (e.g., institutions, administrators) that own, maintain, and release the sheet.
70
+
71
+ ### 3.1 Preliminaries on Tabular Privacy
72
+
73
+ The basic privacy risk of sharing tabular data is that individual identity is correlated and may be revealed by the other seemingly harmless attributes. We denote such a unique combination of attributes a quasi-identifier for individual privacy. We first give some key definitions in this privacy context.
74
+
75
+ (Definition 1) Equivalent Class: A subset of items with equal values on all the focused attributes.
76
+
77
+ For example, in Fig. 2, if 'Gender' is the focused attribute, then all items with value 'M' form one equivalent class, while all items with 'F' form another. According to the K-anonymity argument, if the size of an equivalent class is smaller than $k$ , then the items inside form quasi-identifiers that reveal their owners' identities.
78
+
79
+ Given a tabular set, a general privacy analysis process (e.g., [39]) involves calculating the size of equivalent classes under additive attribute prefixes. For example, one measures the size of the equivalent class of the prefix 'Gender=F' and raises a privacy risk if the number is smaller than $k$ ; then one checks the prefix 'Gender=F, Children=1', and so on.
80
+
81
+ (Definition 2) Schema: The order of attributes along which equivalent-class sizes are measured to find privacy breaches.
82
+
83
+ We denote the process of finding equivalent classes according to the schema as aggregation, so that privacy investigation turns into aggregation along the given order of attributes. Fig. 2 shows the aggregation under a specific schema (from the left column to the right ones). The attribute of the left-most column (Gender) is investigated first by measuring the equivalent classes of Male ('M') and Female ('F') separately. Here, the 'M' class would breach K-anonymity if $k > 1$ , which can be mitigated by merging the values 'M' and 'F' to loosen the quasi-identifier $(k = 5)$ . Then the next dimension (Children) is measured by counting items under the prefixes 'M, 1', 'F, 1', and 'F, 2' separately.
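+
+ The following minimal Python sketch (illustrative only; the record layout and function names are our own assumptions, not TPA's implementation) shows this prefix-wise aggregation and K-anonymity check:
+
+ ```python
+ from collections import defaultdict
+
+ def equivalent_classes(records, prefix):
+     """Group record indexes by their values on the attribute prefix."""
+     classes = defaultdict(list)
+     for idx, rec in enumerate(records):
+         classes[tuple(rec[a] for a in prefix)].append(idx)
+     return classes
+
+ def k_anonymity_breaches(records, schema, k):
+     """Check every additive prefix of the schema; classes smaller than
+     k form quasi-identifiers, i.e., privacy breaches."""
+     breaches = []
+     for depth in range(1, len(schema) + 1):
+         prefix = schema[:depth]
+         for key, idxs in equivalent_classes(records, prefix).items():
+             if len(idxs) < k:
+                 breaches.append((prefix, key, idxs))
+     return breaches
+
+ sheet = [{"Gender": "M", "Children": 1}, {"Gender": "F", "Children": 1},
+          {"Gender": "F", "Children": 2}]
+ print(k_anonymity_breaches(sheet, ["Gender", "Children"], k=2))
+ ```
+
+ Reordering the schema list changes which prefixes are checked first and hence where breaches surface, which is exactly the effect discussed next.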
84
+
85
+ ### 3.2 Motivations: Self-defined and Dynamic Schema
86
+
87
+ Previous studies use a fixed schema during privacy analysis because analyzing equivalent classes for a tabular set is computationally complex, especially when there are large amounts of data items with many attributes. As a result, they usually perform the analysis according to the original attribute order of the sheet. However, we find that:
88
+
89
+ Remark: Different schemas will yield different privacy risk situations, facilitate different privacy-preserving granularities, and introduce distinct risk-handling overheads.
90
+
91
+ ![01963e7b-a373-7132-b668-9261e4db307d_2_925_950_706_385_0.jpg](images/01963e7b-a373-7132-b668-9261e4db307d_2_925_950_706_385_0.jpg)
92
+
93
+ Figure 3: An example of performing equivalent class merging for privacy enhancement under different schemas (O1 and O2). Obviously, the yielded sheets are different.
94
+
95
+ Taking the merging operation as an example, different schemas correspond to different aggregations and result in different equivalence classes after merging. In the case of Fig. 3, O1 and O2 are the same sheet with different schemas. When checking the first attribute, O1 merges 'Gender' into $M \mid F$ (operation a), whereas 'Children' in O2 satisfies $k = 2$ -anonymity and is retained. Then, O1 continues to check 'Children' without a merging operation and merges 'Cancer' under the prefix 'Gender-Children'. O2, on the other hand, merges the 'Cancer' and 'Gender' attributes under the prefix 'Children'. Finally, O1 has 10 values altered, while O2 has eight changed during aggregation. That is, the schema has an explicit impact on the privacy situation and enhancement level.
96
+
97
+ In particular, we note that a later attribute in the schema order has a higher chance of breaching privacy, as the equivalent class of a longer prefix (finer granularity) gradually gets smaller and thus more easily falls below the constraint $k$ . As a result, later attributes may be heavily merged for privacy preservation, losing more utility. Considering this, we point out that the schema should be assigned by the data holder according to their privacy/utility preference, e.g., subjectively retaining information of some attributes by putting them at the front. Furthermore, as we will show in § 6, data holders will dynamically adjust the schema to analyze risky attributes at coarser granularity for flexible merging operations. For example, instead of studying many equivalent classes of a later attribute, one can move it to the front to perform merging on fewer classes.
98
+
99
+ ---
100
+
101
+ $^{1}$ We use data holder and user interchangeably.
102
+
103
+ ---
104
+
105
+ Yet, existing visualizations cannot meet the above dynamic-schema intentions, as a change in schema requires a new round of aggregation, which would cause significant latency in online interaction. We are thus motivated to design a new privacy visualization tool that supports schema dynamics.
106
+
107
+ ### 3.3 Requirement Analysis
108
+
109
+ Through meetings with domain experts, we learned that they are familiar with common privacy enhancement technologies, such as k-anonymity and differential privacy. In fact, they have actively applied these techniques to mitigate risks before data is released. On this basis, we discussed the insufficiencies of current privacy practice and identified four main requirements:
110
+
111
+ R1: Ability to control the schema. As indicated above, different attribute orders express different preferences over attributes and yield different granularities when applying privacy-preserving operations. Flexible support for an adjustable schema is widely required.
112
+
113
+ R2: Multidimensional data analysis. Users' prior knowledge is important in risk analysis. Even professional data analysts cannot find relations between attributes by simply glancing at a sheet, and heuristic risk-assessment algorithms lack semantic knowledge, so their results are unreliable. Therefore, domain experts believe that a sketch view for exploring attribute relations is beneficial.
114
+
115
+ R3: Intuitive risk cues. Prior privacy-preserving studies have addressed visual designs for privacy risk. In these realizations, users are reminded that there are risks somewhere, without a direct mapping to the specific records on the sheet. Thus, an integrated process for risk presentation and mitigation is expected.
116
+
117
+ R4: Operation-granularity utility evaluation. The sanitization of data inevitably discards some information details. It blurs the data, skews the statistics, and reduces release utility. In particular, as different schemas and sanitization operations lead to different utilities, users generally want to see the utility outcome of the current settings before proceeding.
118
+
119
+ ## 4 BACK-END ENGINE
120
+
121
+ This section introduces the key techniques of TPA and how they are used in the system.
122
+
123
+ ### 4.1 Data Structure for Order-dynamic Schema
124
+
125
+ Section 3 discusses the necessity and challenges of adjusting the schema dynamically. We propose to adapt the data cube as the basis for data management, which helps address R1 and R2. On top of the data cube, we design the operation tree, letting users change the schema and perform operations with almost no perceptible latency.
126
+
127
+ ![01963e7b-a373-7132-b668-9261e4db307d_3_152_1645_716_274_0.jpg](images/01963e7b-a373-7132-b668-9261e4db307d_3_152_1645_716_274_0.jpg)
128
+
129
+ Figure 4: Data cube of a dataset with attributes $D = \{A, B, C\}$ . (a) Tree-based data cube. (b) Aggregation relationships of the data cube.
130
+
131
+ Data cube. The data cube [17] is well suited to online analytical processing queries. It is a data organization for statistics over data, such as SUM, MIN, MAX, and AVERAGE, and queries have low latency thanks to pre-aggregation. We introduce the data cube into TPA, and pre-calculated aggregations are used to reduce query latency.
132
+
133
+ As shown in Fig. 4, (a) depicts a tree-based data cube. As in a search tree, records are added as nodes based on the value of each dimension. However, the data cube also stores the aggregated results (i.e., the subtrees pointed to by the blue arrows). An aggregation stores the records obtained when a dimension value is ignored. When a query does not care about the values of specific dimensions, the data cube can respond quickly by accessing the aggregation. For example, suppose the user wants to find all records whose attribute $C$ is ${c}_{1}$ , without caring about the other attributes. The result can be obtained by accessing the aggregation $\{all, all, c_1\}$ , which is the green node in the figure.
134
+
135
+ We do not calculate statistics of records but store record indexes in the aggregations. TPA uses Nanocubes [24], an implementation of the data cube that introduces shared links to avoid duplicate aggregations and thus uses less memory. TPA stores categorical and numeric attributes in different ways: for categorical attributes, branches are created directly based on their values; for numeric attributes, a default branch stores all original data while the other branches each cover a specific split range. The data cube is built on the server side, which can quickly access aggregations according to the schema.
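+
+ A minimal Python sketch of such a tree-based cube follows (our own simplification for illustration: indexes are duplicated instead of using Nanocubes' shared links, and all names are hypothetical):
+
+ ```python
+ ALL = "all"  # the pre-aggregated branch that ignores a dimension
+
+ def build_cube(records, dims):
+     """At every level, insert each record index under both its concrete
+     value and the ALL branch, so every value/ALL combination is a node."""
+     root = {}
+     for idx, rec in enumerate(records):
+         nodes = [root]
+         for d in dims:
+             nxt = []
+             for node in nodes:
+                 for key in (rec[d], ALL):
+                     child = node.setdefault(key, {"_idx": []})
+                     child["_idx"].append(idx)
+                     nxt.append(child)
+             nodes = nxt
+     return root
+
+ recs = [{"A": "a1", "B": "b2", "C": "c1"}, {"A": "a2", "B": "b1", "C": "c1"}]
+ cube = build_cube(recs, dims=["A", "B", "C"])
+ print(cube[ALL][ALL]["c1"]["_idx"])  # query {all, all, c1} -> [0, 1]
+ ```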
136
+
137
+ Operation tree. We propose the operation tree for interaction and visualization on the client side. When the user specifies a schema, the operation tree is generated quickly from the data cube. It is similar to the data cube in that each node denotes an aggregation and stores the indexes of all records in that aggregation. The nodes follow the same aggregation order as the schema, and only the dimensions involved in the schema are stored, making it a lightweight tree designed for front-end (client) interaction. All TPA operations, such as merging, noise adding, and fake data adding, are performed on the operation tree. Besides, each node stores privacy-related parameters for visualization (e.g., the number of equivalence classes, whether noise has been added, and so on).
138
+
139
+ When a new schema is requested, TPA quickly creates the operation tree by accessing the aggregations of the data cube. Fig. 5 (a) and (b) illustrate the process of creating an operation tree when a user is interested in two attributes under the schema Cancer $\rightarrow$ Gender. Taking advantage of the data cube, TPA does not need to walk through all records to build the operation tree; it can find the records of each operation-tree node just by looking at the aggregations of the data cube. TPA is thus able to create the operation tree with little overhead, even if the user frequently reorders the dimensions.
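+
+ Continuing the cube sketch above, a hypothetical operation-tree construction could look as follows (again a sketch under our assumptions: distinct values are enumerated from the records for brevity, whereas a real cube exposes them on its branches):
+
+ ```python
+ def cube_lookup(cube, dims, constraint):
+     """Fetch pre-aggregated record indexes; unconstrained dims use ALL."""
+     node = cube
+     for d in dims:
+         node = node.get(constraint.get(d, ALL), {"_idx": []})
+     return node["_idx"]
+
+ def build_operation_tree(cube, dims, records, schema):
+     """One node per equivalence class under each schema prefix; record
+     indexes come from cube aggregations instead of a full table scan."""
+     def expand(constraint, depth):
+         if depth == len(schema):
+             return {}
+         attr = schema[depth]
+         children = {}
+         for v in {r[attr] for r in records}:
+             c = dict(constraint, **{attr: v})
+             idx = cube_lookup(cube, dims, c)
+             if idx:
+                 children[v] = {"_idx": idx, "children": expand(c, depth + 1)}
+         return children
+     return {"children": expand({}, 0)}
+
+ tree = build_operation_tree(cube, ["A", "B", "C"], recs, schema=["C", "A"])
+ ```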
140
+
141
+ When we apply a preserving operation, it is performed directly on the operation tree. Fig. 5 (c) shows how the operation tree is updated after a merging operation. Updating may add or delete nodes and branches and change the record values of nodes. Since these changes only tweak the tree structure, they do not incur much computing overhead on the client side, while changes to the operation tree are synchronized to the data cube on the server side, again with no additional interaction delay.
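+
+ As a sketch of such an update (using the hypothetical node shape from the snippets above), merging two sibling equivalence classes pools their record indexes and replaces the two concrete values with one uncertain value:
+
+ ```python
+ def merge_classes(parent, key_a, key_b):
+     """Merge two equivalence classes that share a parent node: their
+     values become the fuzzy value 'A|B' and their indexes are pooled.
+     The merged node stays in the tree, so it can be merged again.
+     (Children of the merged classes would be re-grouped in the real tree;
+     they are dropped here for brevity.)"""
+     a = parent["children"].pop(key_a)
+     b = parent["children"].pop(key_b)
+     merged = {"_idx": sorted(a["_idx"] + b["_idx"]), "children": {}}
+     parent["children"][f"{key_a}|{key_b}"] = merged
+     return merged
+ ```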
142
+
143
+ ### 4.2 Utility Quantification
144
+
145
+ Utility is a summary term describing the value of a given data release as an analytical resource [12]; it is essentially a measure of the information obtained from the dataset. There is no universally accepted measure of utility, and few studies focus on utility quantification for tabular data. Following the definition of utility, we consider using distance and distribution to measure the utility loss. For any value $f_a(x)$ in the original dataset (where $f_a(x)$ is the value of attribute $a$ ) and the sanitized value $f_a'(x)$ obtained after privacy preserving, we use $L_{\text{distance}}(F_a, F_a')$ and $L_{\text{distribution}}(F_a, F_a')$ to denote the utility loss according to the difference in distance and distribution between them. We propose different algorithms to calculate utility losses for numerical and categorical data, respectively.
146
+
147
+ ![01963e7b-a373-7132-b668-9261e4db307d_4_256_147_1288_447_0.jpg](images/01963e7b-a373-7132-b668-9261e4db307d_4_256_147_1288_447_0.jpg)
148
+
149
+ Figure 5: An illustration of creating and updating the operation tree. (a) Data cube is built at the back-end based on the tabular data above. (b) Operation tree is generated based on the data cube. (c) An example of updating the operation tree.
150
+
151
+ Numerical Distance. Inspired by the Earth Mover's Distance (EMD) [31], we compare two datasets by sorting the records of both and calculating the distance between corresponding records:
152
+
153
+ $$
154
+ L_{\text{distance}}(F_a, F_a') = \sum_{i=1}^{n} \frac{|i - j|}{n}, \tag{1}
155
+ $$
156
+
157
+ where $i$ and $j$ refer to the sorted indexes of $f_a(x)$ and $f_a'(x)$ , and $n$ is the number of records.
158
+
159
+ Categorical Distance. Since categorical data may be fuzzy, the value of $f_a(x)$ is actually a set. For example, $f_{\text{gender}} = \{\text{male}, \text{female}\}$ represents an uncertain value: the gender of this record may be male or female. First, we calculate $I$ of the two fuzzy sets, where $I$ denotes the number of individual values that appear in only one of the sets. Taking $\{a, b, c\}$ and $\{b, c, d\}$ as an example, $a$ and $d$ are individual values, hence $I$ is 2. Then the distance between the sets can be calculated by:
160
+
161
+ $$
162
+ L_{\text{distance}}(F_a, F_a') = \sum_{i=1}^{n} \frac{2I}{|f_a(x)| + |f_a'(x)|}, \tag{2}
163
+ $$
164
+
165
+ where $|f_a(x)|$ refers to the size of the set (i.e., the number of fuzzy values contained).
166
+
167
+ Numerical Distribution. As a nonparametric test, the K-S test [26] is applicable for comparing the distributions of two datasets when the underlying distribution is unknown. We use the K-S test to compare the numerical distributions and use the p-value to represent the utility loss:
168
+
169
+ $$
170
+ L_{\text{distribution}}(F_a, F_a') = 1 - p. \tag{3}
171
+ $$
172
+
173
+ Categorical Distribution. To measure the distribution of fuzzy sets, we first obtain the global distribution of all possible values. For an attribute $a$ , we count the number of occurrences of all values $C = \{c_{a_1}, c_{a_2}, \ldots, c_{a_n}\}$ , where $c_{a_n}$ refers to the number of values equal to $a_n$ . Given a fuzzy set $f_a(x)$ , each possible value $a_n$ is counted as $c_{a_n} = c_{a_n} + \frac{1}{|f_a(x)|}$ . After obtaining the global counts, the distance between the two distributions can be calculated by:
174
+
175
+ $$
176
+ L_{\text{distribution}}(F_a, F_a') = \sum_{i=1}^{n} \frac{|C - C'|}{n}. \tag{4}
177
+ $$
178
+
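+ A minimal Python sketch of these measures follows (function names are ours; Eq. (3) is realized with scipy's two-sample K-S test):
+
+ ```python
+ import numpy as np
+ from scipy import stats
+
+ def numerical_distance(orig, priv):
+     """Eq. (1): mean rank displacement between the sorted datasets."""
+     rank = lambda v: np.argsort(np.argsort(v))  # sorted index per record
+     return np.abs(rank(orig) - rank(priv)).mean()
+
+ def categorical_distance(orig_sets, priv_sets):
+     """Eq. (2): per record, 2I / (|f| + |f'|); I counts the values
+     appearing in only one of the two fuzzy sets."""
+     return sum(2 * len(f ^ fp) / (len(f) + len(fp))
+                for f, fp in zip(orig_sets, priv_sets))
+
+ def numerical_distribution(orig, priv):
+     """Eq. (3): 1 - p of a two-sample Kolmogorov-Smirnov test."""
+     return 1.0 - stats.ks_2samp(orig, priv).pvalue
+
+ print(numerical_distance([30, 41, 55], [31, 44, 40]))
+ print(categorical_distance([{"M"}], [{"M", "F"}]))  # record merged into M|F
+ ```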
179
+ ## 5 FRONT-END VISUALIZATION
180
+
181
+ As shown in Fig. 6, the front-end of TPA works in 5 steps: importing, building data cube, privacy analysis, privacy preserving, and exporting. Among them, (c) and (d) are of the most concern for data holders. Being at the heart of visualization and interaction, these two steps work iteratively by presenting risks and performing enhancement until privacy and utility are both satisfactory.
182
+
183
+ ### 5.1 Importing
184
+
185
+ As the first step in the pipeline, the user uploads the data sheet. TPA attempts to automatically identify each attribute's type (categorical or numeric), and the user can correct possible misjudgments by setting the type manually. Once the attribute type is determined, it cannot be modified in subsequent steps.
186
+
187
+ ### 5.2 Building Data Cube
188
+
189
+ After receiving the sheet uploaded in the first step, TPA builds the data cube for management and creates a session. The session is used to respond to schema requests and to keep track of updates to the operation tree.
190
+
191
+ ### 5.3 Privacy Analysis
192
+
193
+ Fig. 1 (a) shows how the schema is modified. This widget has two boxes (left and right), and the user changes the order by moving attributes between them. The first time this step is reached, all attributes are in the right box; users can select the attributes of interest and move them to the left, and can add or remove attributes at any time. The attributes in the left box can be dragged at will to adjust the aggregation order. Thanks to the data cube, any change to the schema instantly generates the corresponding operation tree. In addition, clicking an attribute marks it as sensitive (used for l-diversity and t-closeness).
194
+
195
+ Abstract. Aggregations sort records into equivalent classes according to the schema, but dozens or even hundreds of lines of records can hardly be summarized at a glance. To help data holders understand and analyze the relations between attributes (R2), we design the visual abstract. As shown in Fig. 1 (b), TPA provides a global abstract showing the distribution and proportion of values. In addition to the global summary, TPA supports drawing an abstract for any selected aggregation: clicking on the left of a record collapses or expands the aggregation, and an abstract is drawn for the collapsed one. There are two types of abstracts, as shown in Fig. 7:
196
+
197
+ - The categorical abstract in (a). Its value distribution is represented by the percentage of each color block. Fuzzy values, such as null values and uncertain values, are split evenly among all possible blocks. The light (upper) part of a color block refers to uncertain values; by observing the proportion of the light part, users can see how many records the merging operation has affected.
198
+
199
+ ![01963e7b-a373-7132-b668-9261e4db307d_5_297_148_1205_350_0.jpg](images/01963e7b-a373-7132-b668-9261e4db307d_5_297_148_1205_350_0.jpg)
200
+
201
+ Figure 6: TPA visualization framework, a 5-step pipeline: import the data sheet, build the data cube, iterate to analyze and deal with privacy risks, and finally export the data sheet.
202
+
203
+ ![01963e7b-a373-7132-b668-9261e4db307d_5_164_629_691_264_0.jpg](images/01963e7b-a373-7132-b668-9261e4db307d_5_164_629_691_264_0.jpg)
204
+
205
+ Figure 7: An abstract design for summarizing focused data items (e.g., an equivalent class). Example abstracts of categorical attributes (a) and numeric attributes (b).
206
+
207
+ - The numeric abstract in (b), based on a box-plot design. The box-plot clearly shows the extreme, quartile, and mean values of the aggregation.
208
+
209
+ With the summaries provided by the abstract, users can quickly grasp the information of the selected aggregation and the relations between the data of different attributes, which is helpful for data analysis.
210
+
211
+ Privacy risk tree. The abstract can guide data analysis and help explore data relations, but users also want to be told directly where the privacy risks are (R3). We therefore devised a more intuitive visual design, the risk tree, which locates privacy risks according to anonymity technologies. Fig. 8 illustrates the risk tree widget. A selector at the top left of the widget allows the user to choose a specific anonymity technology among k-anonymity, l-diversity, and t-closeness, with the constraint parameters set by a slider. The risk tree consists of layers of pie charts, the layers from inside to outside corresponding to the given schema. The division of pieces at each layer represents the distribution of the values of that attribute, and each piece corresponds to a node (aggregation) of the operation tree. TPA calculates whether each piece satisfies the constraint based on the parameters set by the user and maps the privacy risk of each aggregation to a color. When a piece does not meet the constraint, its color is calculated by linear interpolation, so the color conveys the degree of risk of each aggregation. Users can hover to view specific aggregation information and click to jump to its location in the main view.
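+
+ The color mapping could be as simple as the following sketch (our illustrative choice of a white-to-red ramp for the k-anonymity case, not necessarily TPA's exact palette):
+
+ ```python
+ def risk_color(class_size, k):
+     """Linear interpolation from white (satisfies k-anonymity) to red
+     (severe breach) as an equivalence class falls further below k."""
+     if class_size >= k:
+         return (255, 255, 255)
+     t = (k - class_size) / k      # 0 = barely breaching, ~1 = severe
+     g = int(255 * (1 - t))
+     return (255, g, g)            # RGB: redder as t grows
+ ```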
212
+
213
+ Due to the different granularity of each layer, the actual priorities of risks differ. Obviously, an aggregation node in an outer layer has fewer records and thus exposes fine-grained privacy risks easily. Conversely, a high-risk color on an inner aggregation indicates large-scale leakage and deserves more attention. Intuitively, a risk on an inner aggregation means that an attacker can identify items with less information, so it should be dealt with first.
214
+
215
+ ![01963e7b-a373-7132-b668-9261e4db307d_5_941_636_693_373_0.jpg](images/01963e7b-a373-7132-b668-9261e4db307d_5_941_636_693_373_0.jpg)
216
+
217
+ Figure 8: A risk tree design for intuitive perception of privacy risks in aggregations.
218
+
219
+ ### 5.4 Privacy Preservation
220
+
221
+ The privacy risks identified in the previous step can be addressed in this step. TPA provides four operations for privacy enhancement: merging, noise injection, fake data injection, and removing. Operations other than merging require a selection of records. As shown in Fig. 9 (b), the user can select records by ctrl+clicking equivalence classes or records.
222
+
223
+ ![01963e7b-a373-7132-b668-9261e4db307d_5_941_1484_690_280_0.jpg](images/01963e7b-a373-7132-b668-9261e4db307d_5_941_1484_690_280_0.jpg)
224
+
225
+ Figure 9: Preserving operations. (a) Merging operation. (b) Select records. (c) Open operations menu.
226
+
227
+ Merging. Merging is the primary preserving operation; it prevents an attacker from identifying items by making values fuzzy. Two equivalent classes can be merged when all their prior attributes have the same values (i.e., the two nodes in the operation tree share the same parent node). As shown in Fig. 9 (a), the user drags one folded class onto another to merge them. The values of the two classes are updated from concrete values (A, B) to an uncertain value $(\mathbf{A} \mid \mathbf{B})$ . Moreover, the merged aggregations exist as a new class in the operation tree, so the user can continue to merge it with other classes that have the same parent.
228
+
229
+ Adding noise. The noise operation applies to numeric attributes. Clicking 'Add noise' in the menu opens the noise view shown in Fig. 10. The view shows a histogram for each numeric attribute; the number of bars in the histogram determines the granularity of the bins, which can be set by the slider above. The noise operation adds Laplace noise to the data based on differential privacy. One can click the switch in the upper right corner to set the noise parameter and drag the white dots to set the Laplace $\lambda$ of each bin, i.e., how much noise to add. After the parameters are set, the view shows red lines denoting the fluctuation of each bin after adding noise.
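+
+ A sketch of such bin-wise Laplace perturbation follows (a simplification under our assumptions; TPA's exact parameterization may differ):
+
+ ```python
+ import numpy as np
+
+ def add_bin_noise(values, bin_edges, lam_per_bin, rng=None):
+     """Perturb each value with Laplace noise whose scale lambda depends
+     on the histogram bin the value falls into (larger = noisier)."""
+     rng = rng or np.random.default_rng()
+     values = np.asarray(values, dtype=float)
+     bins = np.clip(np.digitize(values, bin_edges) - 1,
+                    0, len(lam_per_bin) - 1)
+     return values + rng.laplace(0.0, np.asarray(lam_per_bin)[bins])
+
+ noisy = add_bin_noise([12, 47, 90], bin_edges=[0, 50, 100],
+                       lam_per_bin=[1.0, 5.0])
+ ```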
230
+
231
+ ![01963e7b-a373-7132-b668-9261e4db307d_6_163_565_682_406_0.jpg](images/01963e7b-a373-7132-b668-9261e4db307d_6_163_565_682_406_0.jpg)
232
+
233
+ Figure 10: A visual design for adding noise.
234
+
235
+ It is unreasonable to add noise to all data indiscriminately. As shown in Fig. 11, TPA provides a matrix view for analyzing the data and filtering the data of interest. It shows the two-dimensional distribution of attributes, where the x-axis of each chart is the attribute above the view and the y-axis is the corresponding numeric attribute. A scatter plot is used for numeric-numeric combinations and a grouped box plot for numeric-categorical combinations. Users can select data by brushing and clicking, at which point the noise operation is applied only to the selected data.
236
+
237
+ ![01963e7b-a373-7132-b668-9261e4db307d_6_218_1365_580_583_0.jpg](images/01963e7b-a373-7132-b668-9261e4db307d_6_218_1365_580_583_0.jpg)
238
+
239
+ Figure 11: The matrix view presents the two-dimensional distribution of data and provides data filters.
240
+
241
+ Adding fake data. The fake-data operation uses CTGAN [42] to generate synthetic records and adds them to the sheet to confuse attackers. After clicking 'Generate fake data' in the menu, TPA uses the selected records as training inputs to generate synthetic records. Synthetic data is not always effective in preventing leakage, but it provides a method that requires no other prior knowledge. Since the synthetic data has a distribution similar to the training inputs, the utility loss can be controlled to some extent.
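+
+ With the open-source ctgan package, this step could look roughly like the sketch below (assuming that package's documented CTGAN interface; the column names and epoch count are made up):
+
+ ```python
+ import pandas as pd
+ from ctgan import CTGAN  # pip install ctgan
+
+ selected = pd.DataFrame({"age": [52, 61, 58, 44],
+                          "smoker": ["yes", "yes", "no", "no"]})
+ model = CTGAN(epochs=100)
+ model.fit(selected, discrete_columns=["smoker"])
+ fake = model.sample(4)   # synthetic rows with a similar distribution
+ released = pd.concat([selected, fake], ignore_index=True)
+ ```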
242
+
243
+ Removing records. Sometimes, users want to remove records directly (e.g., outlier data). The removing operation can be applied to remove the selected records from the sheet.
244
+
245
+ History View. The history view records all applied privacy enhancement operations. As shown in Fig. 12, the view lists the historical states and their details, allowing the user to go back to a historical state. It also shows the number of records affected by each operation, which helps users understand the granularity of each operation. In addition, users can compare utility losses by selecting two historical states.
246
+
247
+ ![01963e7b-a373-7132-b668-9261e4db307d_6_978_634_615_278_0.jpg](images/01963e7b-a373-7132-b668-9261e4db307d_6_978_634_615_278_0.jpg)
248
+
249
+ Figure 12: The history view records the historical states.
250
+
251
+ Utility analysis. Any preserving operation, whether it modifies original values or adds/removes records, results in a loss of utility. Thus, users want to see how the utility changes with each operation (R4). TPA uses the measures introduced in § 4.2 to estimate the utility loss by calculating the distance and distribution differences. To compare the utility changes of operations, we propose the utility comparison view (Fig. 13). Users can select two historical states in the history view to compare. When a historical state is selected, TPA computes the result of applying the operations from the first one through the selected one, and then calculates the difference in utility between the selected state and the original sheet.
252
+
253
+ To compare two different states, we utilize a superimposed matrix to visualize their changes. The rows represent the algorithms to be compared and the columns represent the attributes. Each cell is divided into an outer region and an inner region, with the background color saturation representing the difference in utility between the two states: the higher the saturation, the more the attribute differs from the original data (high utility loss). The view is designed to help users understand changes in utility.
254
+
255
+ ![01963e7b-a373-7132-b668-9261e4db307d_6_944_1589_690_143_0.jpg](images/01963e7b-a373-7132-b668-9261e4db307d_6_944_1589_690_143_0.jpg)
256
+
257
+ Figure 13: Comparison of two historical states, indicating the difference in utility loss.
258
+
259
+ ### 5.5 Exporting
260
+
261
+ The analysis-and-preserving loop stops once the data holder considers the privacy and utility situation satisfactory. The corresponding sheet is then downloaded and released.
262
+
263
+ ## 6 CASE STUDIES
264
+
265
+ We conduct two case studies with the TPA prototype, using data from the insurance and medical domains.
266
+
267
+ ### 6.1 Analyzing the Medical Cost Dataset
268
+
269
+ The medical cost dataset contains insurance-billed personal medical costs obtained from a book [19]. Its sheet records the age, gender, bmi, children (number of children), smoker, region, and charges of 1,339 individuals. The dataset was sanitized before release. In this example, we assume that the data was collected at a single hospital and that the attacker is most likely to identify individuals through linking attacks.
270
+
271
+ The 'children' attribute is numeric. Obviously, the number of children does not vary much, and people care more about whether a patient has a child at all. Therefore, we mark 'children' as categorical in step 1. After completing the basic setup, we proceed with the preserving pipeline.
272
+
273
+ ![01963e7b-a373-7132-b668-9261e4db307d_7_189_591_651_653_0.jpg](images/01963e7b-a373-7132-b668-9261e4db307d_7_189_591_651_653_0.jpg)
274
+
275
+ Figure 14: Process of dealing with privacy risks in the Medical Cost Dataset. (a) Privacy risks are associated with smokers. (b) Adjust the schema to identify high risk aggregations. (c) Use the merging operation to address risks. (d) Affected records.
276
+
277
+ In the case of unfamiliar data, anonymity analysis can be considered first. Setting the k-anonymity constraint of the risk tree to $k = 7$ and observing the visualization in Fig. 14 (a), we find two prominent high-risk aggregations. Jumping to the specific aggregations in the sheet, we find they are all smokers. We could merge 'yes' and 'no' of the attribute 'smoker', but much non-smoking data would be blurred as well. Comparing aggregations of the previous attribute 'children', we find that both of them have more than four children. It is easy to understand that people with more children are a minority that is easier to identify. Thus, we move 'smoker' and 'children' to the front of the order. Under this schema, shown in Fig. 14 (b), the new risk tree indicates the high-risk aggregations: the patients at high risk are those with more than four children. As shown in Fig. 14 (c) and (d), by merging the aggregations of smokers who have '4' and '5' children, the risks are reduced and only four risk-related records are modified.
278
+
279
+ When we move 'charges' and 'smoker' to the front of the order and collapse the aggregation of 'smoker' (Fig. 15), the abstracts in (a) reveal an interesting pattern: smokers have much higher charges than non-smokers. This pattern indicates that an attacker can predict charges simply from whether a patient smokes, with high confidence. To prevent potential background-knowledge attacks, we focus on protecting the privacy of smokers, since they are a small group. We therefore set a filter to find the high charges of both non-smokers and smokers and merge their aggregations in (b). (c) shows that the high-charge records are made fuzzy, which protects the privacy of smokers. Besides, it is reasonable to keep the low-charge data, which mostly belongs to non-smokers (the majority of people).
280
+
281
+ ![01963e7b-a373-7132-b668-9261e4db307d_7_957_145_657_429_0.jpg](images/01963e7b-a373-7132-b668-9261e4db307d_7_957_145_657_429_0.jpg)
282
+
283
+ Figure 15: Identifying potential risks through abstracts. (a) Smokers have extremely high charges. (b) Filtering the records to be protected. (c) and (d) Result of applying preserving operations.
284
+
285
+ ### 6.2 Analyzing Personal Key Indicators of Heart Disease
286
+
287
+ This dataset comes from the CDC (Centers for Disease Control and Prevention) [28], which collects data on the health of U.S. residents. Each record has 300 attributes covering various bodily indicators. According to a CDC report, heart disease is the leading cause of death in the United States. Focusing on indicators related to heart disease, we narrowed the data down to 12 attributes and randomly selected 20,000 records for this example.
288
+
289
+ ![01963e7b-a373-7132-b668-9261e4db307d_7_938_1205_693_549_0.jpg](images/01963e7b-a373-7132-b668-9261e4db307d_7_938_1205_693_549_0.jpg)
290
+
291
+ Figure 16: For a high-dimensional complex dataset, t-closeness is used to explore dimensional correlations and locate high-risk aggregations. (a), (b) and (c) Iterate through the schema to find highly correlated attributes. (d) and (e) Compare the utility loss after using the preserving operation on 'Race' and 'Sex'. (f) Result of applying preserving operations.
292
+
293
+ Patients certainly do not want to expose their disease. In this example, we focus on analyzing and handling privacy risks related to the 'HeartDisease' attribute. From the publisher's perspective, we should first find out which other attributes are related to the disease. We set 'HeartDisease' as a sensitive attribute and move it to the end of the order. As shown in Fig. 16 (a), the t-closeness view of the risk tree points out that the distribution of heart disease among drinkers is clearly different from the global distribution, so drinking can be considered highly correlated with heart disease. We move 'AlcoholDrinking' to the front of the order and inspect the risk tree again. The new view (b) shows that 'Stroke' also has a significant effect on the distribution. Thus, we move 'Stroke' after 'AlcoholDrinking'.
294
+
295
+ We have now moved the highly correlated attributes to the front of the order, and the adjusted schema makes it easier to locate risks than a random schema (c). After switching to the K-anonymity view, we find some aggregations with saliently high risk in the branches of 'Sex' and 'Race'. We jump to the high-risk aggregation and try to handle the two attributes separately with the merging operation. Fig. 16 (e) compares the feedback from the utility view: merging 'Sex' causes less utility loss than merging 'Race'. Therefore, merging the aggregations of 'Sex', shown in Fig. 16 (d), is the better choice for reducing the risks.
296
+
297
+ ![01963e7b-a373-7132-b668-9261e4db307d_8_173_744_679_179_0.jpg](images/01963e7b-a373-7132-b668-9261e4db307d_8_173_744_679_179_0.jpg)
298
+
299
+ Figure 17: The result of abstracts indicates that people with stroke have more physical and mental health problems.
300
+
301
+ To further explore the risks, we collapse the attribute 'Stroke' (Fig. 17). The abstracts show that people who have had a stroke tend to have high values for mental and physical health problems. The proportion of people with both stroke and alcohol drinking is small, and stroke is highly associated with heart disease. Although health scores are less sensitive, that also means they are more likely to be collected by attackers, so they should be blurred for patients with heart disease. As shown in Fig. 18, we filtered and selected stroke and alcohol drinking among patients with heart disease, adding noise to the high values of mental and physical health problems.
302
+
303
+ ![01963e7b-a373-7132-b668-9261e4db307d_8_164_1371_690_666_0.jpg](images/01963e7b-a373-7132-b668-9261e4db307d_8_164_1371_690_666_0.jpg)
304
+
305
+ Figure 18: Adding noise to blur the health scores and protect the privacy of people who drink alcohol, have had a stroke, and have heart disease.
306
+
307
+ For a dataset with 12 dimensions and 20,000 records, a single aggregation pass takes a long time. Thanks to the data cube, even such a high-dimensional dataset can be explored in real time, with the aggregation order adjusted dynamically.
308
+
309
+ ## 7 QUALITATIVE DISCUSSIONS
310
+
311
+ We conducted interviews with four domain experts on the applicability of TPA in real-world scenarios. These users are experienced in data analysis and often work with tabular data. They commented positively on our work and offered suggestions for improvement.
312
+
313
+ ### 7.1 Effectiveness
314
+
315
+ Interviewees agreed that TPA was effective for data analysis, especially the aggregation abstracts that helped them grasp the value distributions of attributes and the correlations between attributes in the dataset (R2). They favored the ability to adjust the schema in real time (R1) and appreciated TPA's capability to handle big datasets efficiently. One user said that, in the past, it was difficult to effectively analyze the risks of high-dimensional datasets. Used in conjunction with the risk tree, dynamic order adjustment was considered to help perceive privacy risks intuitively (R3). In addition, TPA saved them considerable time compared with other visualization tools by providing more preserving operations and allowing them to control operation granularity (R4).
316
+
317
+ ### 7.2 Limitations
318
+
319
+ However, some users pointed out that the interaction design of the prototype was not good enough, even though we had instructed them in advance on how to use TPA. Further, some supposed that the utility view may be of limited use: while it could remind them of the differences between the current state and the original one, they still did not understand what those differences mean. Some users also suggested providing a recommendation function to help carry out privacy enhancement operations. This indicates that, whereas TPA is designed to give users high flexibility, they can get lost in the choices, so providing some recommended actions would be a good way to get started quickly.
320
+
321
+ ### 7.3 Future Work
322
+
323
+ Considering that shared data serves specific analysis tasks, we plan to extract patterns for those tasks (e.g., extreme values of samples, clustering, etc.). By indicating the pattern differences before and after privacy preservation, one can more easily strike a balance between privacy and utility. We will also improve the interface and support more diverse data types, such as time, location, and sequences.
324
+
325
+ ## 8 CONCLUSION
326
+
327
+ We propose a visual tool, TPA, for the privacy protection of tabular data. Our design helps users analyze multidimensional data relationships and identify potential privacy issues. In addition, we provide users with several preserving operations to reduce privacy risks, and a utility view is designed to help control the utility loss of operations. By introducing the data cube, we have implemented a system that supports exploring any aggregation order in real time, allowing users to analyze privacy risks from different perspectives and flexibly control the granularity of preserving operations. We use two real datasets to demonstrate that TPA can handle various kinds of data, including big and high-dimensional datasets.
328
+
329
+ ## REFERENCES
330
+
331
+ [1] N. C. Abay, Y. Zhou, M. Kantarcioglu, B. Thuraisingham, and L. Sweeney. Privacy preserving synthetic data release using deep learning. In M. Berlingerio, F. Bonchi, T. Gärtner, N. Hurley, and G. Ifrim, eds., Machine Learning and Knowledge Discovery in Databases, pp. 510-526. Springer International Publishing, Cham, 2019.
334
+
335
+ [2] J. M. Abowd and L. Vilhuber. How protective are synthetic data? In J. Domingo-Ferrer and Y. Saygın, eds., Privacy in Statistical Databases, pp. 239-246. Springer Berlin Heidelberg, Berlin, Heidelberg, 2008.
338
+
339
+ [3] K. Bhattacharjee, M. Chen, and A. Dasgupta. Privacy-preserving data visualization: reflections on the state of the art and research opportunities. In Computer Graphics Forum, vol. 39, pp. 675-692. Wiley Online Library, 2020.
340
+
341
+ [4] V. Bolón-Canedo, N. Sánchez-Maroño, and A. Alonso-Betanzos. A review of feature selection methods on synthetic data. Knowledge and information systems, 34(3):483-519, 2013.
342
+
343
+ [5] G. Caraux and S. Pinloche. Permutmatrix: a graphical environment to arrange gene expression profiles in optimal linear order. Bioinformatics, 21(7):1280-1281, 2005.
344
+
345
+ [6] J.-K. Chou, C. Bryan, and K.-L. Ma. Privacy preserving visualization for social network data with ontology information. In 2017 IEEE Pacific Visualization Symposium (Pacific Vis), pp. 11-20. IEEE, 2017.
346
+
347
+ [7] J.-K. Chou, Y. Wang, and K.-L. Ma. Privacy preserving visualization: A study on event sequence data. In Computer Graphics Forum, vol. 38, pp. 340-355. Wiley Online Library, 2019.
348
+
349
+ [8] A. Dasgupta, R. Kosara, and M. Chen. Guess me if you can: A visual uncertainty model for transparent evaluation of disclosure risks in privacy-preserving data visualization. In 2019 IEEE Symposium on Visualization for Cyber Security (VizSec), pp. 1-10. IEEE, 2019.
350
+
351
+ [9] Y.-A. de Montjoye, C. A. Hidalgo, M. Verleysen, and V. D. Blondel. Unique in the crowd: The privacy bounds of human mobility. Scientific Reports, 3(1):1376, Mar 2013. doi: 10.1038/srep01376
352
+
353
+ [10] C. Dwork. Differential privacy: A survey of results. In M. Agrawal, D. Du, Z. Duan, and A. Li, eds., Theory and Applications of Models of Computation, pp. 1-19. Springer Berlin Heidelberg, Berlin, Heidelberg, 2008.
354
+
355
+ [11] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Theory of cryptography conference, pp. 265-284. Springer, 2006.
356
+
357
+ [12] M. Elliot, A. Hundepool, E. S. Nordholt, J.-L. Tambay, and T. Wende. Glossary on statistical disclosure control. In Monograph on Official Statistics, pp. 381-392. Eurostat, 2006.
358
+
359
+ [13] N. Elmqvist and J.-D. Fekete. Hierarchical aggregation for information visualization: Overview, techniques, and design guidelines. IEEE Transactions on Visualization and Computer Graphics, 16(3):439-454, 2009.
360
+
361
+ [14] N. F. Fernandez, G. W. Gundersen, A. Rahman, M. L. Grimes, K. Rikova, P. Hornbeck, and A. Ma'ayan. Clustergrammer, a web-based heatmap visualization and analysis tool for high-dimensional biological data. Scientific data, 4(1):1-12, 2017.
362
+
363
+ [15] K. Furmanova, S. Gratzl, H. Stitz, T. Zichner, M. Jaresova, A. Lex, and M. Streit. Taggle: Combining overview and details in tabular data visualizations. Information Visualization, 19(2):114-136, 2020.
364
+
365
+ [16] K. Furmanova, M. Jaresova, B. Kawan, H. Stitz, M. Ennemoser, S. Gratzl, A. Lex, and M. Streit. Taggle: Scaling table visualization through aggregation. In Poster@ IEEE Conference on Information Visualization (InfoVis' 17), p. 139, 2017.
366
+
367
+ [17] J. Gray, S. Chaudhuri, A. Bosworth, A. Layman, D. Reichart, M. Venkatrao, F. Pellow, and H. Pirahesh. Data cube: A relational aggregation operator generalizing group-by, cross-tab, and sub-totals. Data mining and knowledge discovery, 1(1):29-53, 1997.
368
+
369
+ [18] J. Heinrich, C. Vehlow, F. Battke, G. Jäger, D. Weiskopf, and K. Nieselt. ihat: interactive hierarchical aggregation table for genetic association data. BMC bioinformatics, 13(8):1-12, 2012.
370
+
371
+ [19] B. Lantz. Machine Learning with R. Packt, 2013.
372
+
373
+ [20] K. LeFevre, D. J. DeWitt, and R. Ramakrishnan. Mondrian multidimensional k-anonymity. In 22nd International conference on data engineering (ICDE'06), pp. 25-25. IEEE, 2006.
374
+
375
+ [21] B. Li, E. Erdin, M. H. Gunes, G. Bebis, and T. Shipley. An overview of anonymity technology usage. Computer Communications, 36(12):1269-1283, 2013. doi: 10.1016/j.comcom.2013.04.009
376
+
377
+ [22] N. Li, T. Li, and S. Venkatasubramanian. t-closeness: Privacy beyond k-anonymity and l-diversity. In 2007 IEEE 23rd international conference on data engineering, pp. 106-115. IEEE, 2007.
378
+
379
+ [23] T. Li and N. Li. On the tradeoff between privacy and utility in data publishing. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 517-526, 2009.
382
+
383
+ [24] L. Lins, J. T. Klosowski, and C. Scheidegger. Nanocubes for real-time exploration of spatiotemporal datasets. IEEE Transactions on Visualization and Computer Graphics, 19(12):2456-2465, 2013.
384
+
385
+ [25] A. Machanavajjhala, D. Kifer, J. Gehrke, and M. Venkitasubramaniam. l-diversity: Privacy beyond k-anonymity. ACM Transactions on Knowledge Discovery from Data (TKDD), 1(1):3-es, 2007.
386
+
387
+ [26] F. J. Massey Jr. The kolmogorov-smirnov test for goodness of fit. Journal of the American statistical Association, 46(253):68-78, 1951.
388
+
389
+ [27] F. McSherry and K. Talwar. Mechanism design via differential privacy. In 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS'07), pp. 94-103. IEEE, 2007.
390
+
391
+ [28] K. Pytlak. Personal key indicators of heart disease. https://www.kaggle.com/datasets/kamilpytlak/personal-key-indicators-of-heart-disease/metadata, 2022.
392
+
393
+ [29] F. Rajabiyazdi, C. Perin, L. Oehlberg, and S. Carpendale. Exploring the design of patient-generated data visualizations. In Proceedings of Graphics Interface 2020, GI 2020, pp. 362-373. Canadian Human-Computer Communications Society / Société canadienne du dialogue humain-machine, 2020. doi: 10.20380/GI2020.36
394
+
395
+ [30] R. Rao and S. K. Card. The table lens: merging graphical and symbolic representations in an interactive focus + context visualization for tabular information. In Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 318-322, 1994.
396
+
397
+ [31] Y. Rubner, C. Tomasi, and L. J. Guibas. The earth mover's distance as a metric for image retrieval. International journal of computer vision, 40(2):99-121, 2000.
398
+
399
+ [32] J. Seo and B. Shneiderman. Interactively exploring hierarchical clustering results [gene identification]. Computer, 35(7):80-86, 2002.
400
+
401
+ [33] T. Stadler, B. Oprisanu, and C. Troncoso. Synthetic data-anonymisation groundhog day. arXiv preprint arXiv:2011.07018, 2021.
402
+
403
+ [34] L. Sweeney. Simple demographics often identify people uniquely. 2000.
404
+
405
+ [35] L. Sweeney. Achieving k-anonymity privacy protection using generalization and suppression. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 10(05):571-588, 2002.
406
+
407
+ [36] L. Sweeney. k-anonymity: A model for protecting privacy. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 10(05):557-570, 2002.
408
+
409
+ [37] P. Thaker, M. Budiu, P. Gopalan, U. Wieder, and M. Zaharia. Overlook: Differentially private exploratory visualization for big data. arXiv preprint arXiv:2006.12018, 2020.
410
+
411
+ [38] X. Wang, W. Chen, J.-K. Chou, C. Bryan, H. Guan, W. Chen, R. Pan, and K.-L. Ma. Graphprotector: A visual interface for employing and assessing multiple privacy preserving graph algorithms. IEEE transactions on visualization and computer graphics, 25(1):193-203, 2018.
412
+
413
+ [39] X. Wang, J.-K. Chou, W. Chen, H. Guan, W. Chen, T. Lao, and K.-L. Ma. A utility-aware visual approach for anonymizing multi-attribute tabular data. IEEE transactions on visualization and computer graphics, 24(1):351-360, 2017.
414
+
415
+ [40] F. T. Wu. Defining privacy and utility in data sets. U. Colo. L. Rev., 84:1117, 2013.
416
+
417
+ [41] F. Xiao, M. Lu, Y. Zhao, S. Menasria, D. Meng, S. Xie, J. Li, and C. Li. An information-aware visualization for privacy-preserving accelerometer data sharing. Human-centric Computing and Information Sciences, 8(1):1-28, 2018.
418
+
419
+ [42] L. Xu, M. Skoularidou, A. Cuesta-Infante, and K. Veeramachaneni. Modeling tabular data using conditional gan. Advances in Neural Information Processing Systems, 32, 2019.
420
+
421
+ [43] D. Zhang, A. Sarvghad, and G. Miklau. Investigating visual analysis of differentially private data. IEEE transactions on visualization and computer graphics, 27(2):1786-1796, 2020.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BU9lJWKNTG9/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,323 @@
1
+ A Visual Tool for Interactive Privacy Analysis and Preservation on Order-Dynamic Tabular Data
2
+
3
4
+
5
+ Figure 1: Tabular Privacy Assistant (TPA), a visual tool for the risk analysis and privacy preservation of tabular data with dynamic attribute order. (a) A widget that allows personalized attribute order setting and dynamic adjustment. (b) Statistics of different attributes for overall distribution analysis. (c) The main view for tabular data presentation (a box plot denotes an abstract of several items) and interactive privacy enhancement (e.g., choosing five items to merge). (d) Privacy risk tree under the current attribute order (red: items breaching k-anonymity). (e) Historical privacy enhancement operations (allowing backtracking and comparison). (f) Data utility dynamics during interactions.
6
+
7
+ § ABSTRACT
8
+
9
+ The practice of releasing individual data, usually in tabular form, carries an obligation to prevent privacy leakage. By rendering privacy risks visible, visualization techniques have greatly promoted user-friendly data sanitization. Yet we point out, for the first time, that the attribute order (i.e., schema) of tabular data inherently determines the risk situation and the output utility, although it has been ignored in previous efforts. To mitigate this gap, this work proposes the design and pipeline of a visual tool (TPA) for nuanced privacy analysis and preservation on order-dynamic tabular data. By adopting the data cube structure as a flexible backbone, TPA manages to support real-time risk analysis in response to attribute order adjustment. Novel visual designs, i.e., the data abstract, the risk tree, and integrated privacy enhancement, are developed to explore data correlations and acquire privacy awareness. We demonstrate TPA's effectiveness with two case studies on the prototype and qualitatively discuss the pros and cons with domain experts for future improvement.
10
+
11
+ Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ We are all providers and beneficiaries of the collection and release of individual data. Generally maintained as multi-attribute tables, the collected data can be used in various learning, statistical, and decision-making tasks (e.g., disease diagnosis, product recommendation). Alongside the well-known benefits, privacy issues in the publication of data have raised massive concerns recently, as more and more real-world safety violations caused by data leakage and abuse are witnessed [9,34] and regulations (e.g., GDPR) are promulgated.
16
+
17
+ The privacy risk stems from the fact that individual identity, although usually anonymized, is correlated with and may be re-identified by the other seemingly harmless attributes. As a result, data holders (e.g., organizations, companies) are obligated to properly sanitize data before releasing it. Research communities have responded to this critical requirement with many privacy protection techniques, including anonymity [21], differential privacy [10], and synthetic data mixture [1,42]. On such a technical basis, visualization has recently been introduced to facilitate illustrative, understandable, and easy-to-use privacy analysis tools on behalf of the users [6,7,37-39,41]. For example, in [39], visual presentations of the privacy exposure level and the utility preservation degree are provided for detecting and mitigating privacy issues in tabular data.
18
+
19
+ Previous visual methods for privacy analysis build on the setting of a fixed attribute order, i.e., the target table has fixed columns. However, we find that the currently unexplored attribute order (i.e., schema) inherently determines the privacy risk situation and the output utility (detailed analysis in §3.2). For example, when checking k-anonymity privacy constraints [36] on a sheet, whereas we may find a privacy breach on the 3rd attribute and have 5 values changed during protection under the order 'Age, Work, Disease', we would face a totally different (thornier) privacy context, such as a privacy breach on the 1st attribute with 10 values changed, under the order 'Work, Disease, Age'. As a result, randomly choosing an attribute order, as the existing proposals do, may unfortunately lead to over-protection and unnecessary utility losses.
20
+
21
+ We are thus motivated to design a flexible visual tool (TPA) that can support and explore order adjustment for nuanced (user-specific, reactive) privacy investigation. The most challenging part of dynamic ordering is that risk analysis (e.g., equivalent class parsing) must be re-performed dynamically according to the new attribute order. This can be a disaster for existing implementations, as it involves aggregation calculations for all combinations under additive prefixes, especially when the sheet holds vast numbers of data items and many attributes, implying significant interaction latency. As a remedy, we adapt the data cube structure with flexible pre-aggregation to organize the table and use an operation tree to handle order adjustment in real time (§4.1). Additionally, we present a data abstract function for statistically analyzing attribute correlation (§5.3) and provide fine-grained utility quantification that estimates the differential impact of each privacy-preserving operation (§4.2).
22
+
23
+ Combined with various privacy enhancement technologies, TPA guides data holders through the risks in their data and prompts the utility losses of preserving operations. The main contributions are as follows:
24
+
25
+ * We identify the impact of tabular attribute order on privacy analysis, utility loss, and processing costs. We propose a new tool to explore this property, adopting the data cube to guarantee real-time interaction.
26
+
27
+ * We leverage multi-dimensional value distance to measure utility change at the back-end. At the front-end, we use abstract extraction for inter-attribute relationship analysis and design an intuitive risk tree that is semantically bound to data items for interactive privacy analysis and preservation.
28
+
29
+ * We implement the prototype of TPA and evaluate its effectiveness with two use cases from the insurance and medical domain, respectively. A qualitative interview points out the pros and cons of TPA from the perspective of domain experts.
30
+
31
+ § 2 RELATED WORK
32
+
33
+ In this section, we provide the background of privacy preserving techniques and review the related visualization literature.
34
+
35
+ § 2.1 PRIVACY PRESERVING TECHNIQUES
36
+
37
+ Data providers sanitize data before making it public. There are three dominant technologies:
38
+
39
+ Anonymity method. The most widely used and representative technique for dealing with linking attacks is k-anonymity [36]. K-anonymity calls all records with the same quasi-identifier an equivalence class and requires that each equivalence class contain at least $k$ records. It prevents attackers from identifying users by quasi-identifiers with a confidence level higher than $\frac{1}{k}$. However, it cannot prevent homogeneity attacks: if the sensitive attribute values in an equivalence class are identical, attackers can still learn the sensitive information. Hence, l-diversity was proposed [25]. If a sensitive attribute of an equivalence class has at least $l$ well-represented values, the equivalence class is said to satisfy l-diversity; similarly, if all equivalence classes meet l-diversity, the dataset is considered to meet l-diversity. If the distance between the distribution of the sensitive attribute in an equivalence class and its distribution in the whole dataset does not exceed the threshold $t$, the class is considered to meet t-closeness [22]. Unlike the first two methods, t-closeness considers the overall distribution of data rather than specific counts, which can balance privacy preserving and data utility. In addition, there are many other variants based on these three methods [20,35]. However, anonymity methods are parameter-sensitive and only apply under specific constraints.
40
+
41
+ Differential Privacy. Differential privacy [10,11,27] is widely used and avoids the key disadvantage of anonymity methods, namely that they are only applicable to attackers with specific background knowledge. If the absence of a single data item does not significantly affect the output result, the function conforms to the differential privacy definition. For example, if a function that queries 100 items yields almost the same result as the same query over 99 of them, an attacker has no way to learn information about the 100th item. The core idea of differential privacy is therefore that for two datasets differing in only one record, the probability of any result is almost the same.
42
+
43
+ Synthetic Data. The intuitive advantage of synthetic data is that it is 'artificial data', so it does not contain real information. Synthesizing data has also been presented to protect published data from traditional attacks [1,29,42]. Accordingly, many studies [2,4,33] measure privacy leakage in synthetic datasets through the similarity between real and synthetic records. Admittedly, these techniques avoid exposing real data, but as Stadler et al. argue [33], such studies seriously overestimate the protection provided; they cannot always prevent attacks. Synthetic data is far from the holy grail of privacy-preserving data publishing.
44
+
45
+ § 2.2 PRIVACY VISUALIZATION
46
+
47
+ Privacy preserving is a part of data processing, and visualization plays a key role in data analysis and processing. Recent literature shows that visualization is gaining momentum in the domain of privacy preserving. Much work has assimilated and expanded the concepts of privacy and data mining, analyzed how to reduce privacy leaks and maintain utility, and provided preserving pipelines.
48
+
49
+ Visualization in data analysis. Data analysis [23,40] mainly examines the relationships between samples from the perspectives of distribution, correlation, and clustering. Many visualization tools, such as Hierarchical Cluster Explorer [32], PermutMatrix [5] and Clustergrammer [14], are used to analyze the relationships between samples.
50
+
51
+ Elmqvist and Fekete propose several aggregation designs that aggregate data while conveying information about the underlying records [13]. Aggregation better reflects the statistical information of the data and hides the visual interference caused by individual differences. Aggregated visualization techniques have been applied to parallel coordinates, for instance analyzing the utility of a cluster using summaries of histogram statistics [18].
52
+
53
+ Tables are the main way to represent binary relationships in data. Tabular visualization [15,16,30] is extremely scalable because cells in a table can be divided into many pixels to show more information about the data. Taggle [15] has become one of the most popular tools; it is an item-centric (cells in the sheet) visualization technique that provides a seamless combination of overview and detail through data-driven aggregation.
54
+
55
+ Visualization for privacy analysis. In recent years, a growing number of studies have focused on visualization-specific contributions to privacy preserving. Chou et al. proposed visualization tools to help avoid privacy risks, designing anonymity-based visual methods for social network graphs [6] and event sequences [7]. GraphProtector [38] guides users in protecting privacy through interactive visualizations of the connections between sensitive and non-sensitive nodes, letting them observe structural (utility) changes in the graph.
56
+
57
+ There are also studies [3] that analyze how existing privacy-preserving visualizations affect data and how effective they are. Dasgupta et al. analyze the disclosure risks associated with vulnerabilities in privacy-preserving parallel coordinates and scatter plots [8], and present a case study to demonstrate the effectiveness of their model. Zhang et al. investigate visual analysis of differentially private data [43]. They analyze the effectiveness of task-based visualization techniques and propose a dichotomy model to define and measure task success.
58
+
59
+ Visualization for privacy preserving. The preserving pipeline is designed to provide users with a complete processing framework from analysis to protection. Xiao et al. proposed a visualization tool named VISEE [41] to help protect privacy when sharing sensor data. VISEE makes a trade-off between utility and privacy by visualizing the degree of mutual information between different pairs of variables. Overlook [37] was developed for differentially private exploration of big data. It allows data analysts and administrators to explore noised data with acceptable delays, while ensuring query accuracy comparable to other synopsis-based systems. Wang et al. [39] propose the UPD-Matrix (utility preservation degree matrix) and PER-Tree (privacy exposure risk tree) and developed a visualization tool for multi-attribute tabular data based on them. It provides a five-step pipeline of user interaction and iterative processing of data.
60
+
61
+ These visualization tools are designed to help users troubleshoot potential risks. However, most of them are based on a single exploration domain or automatic algorithms. We find that different schemas make a great difference in risk analysis and handling (§3.2). Our approach allows users to explore tabular data under different attribute orders more flexibly and supports privacy preserving operations at different granularities.
62
+
63
64
+
65
+ Figure 2: Aggregation for privacy analysis under a schema example. (a) Original sheet. (b) Reordered records by schema. (c) Privacy-enhanced sheet based on merging attribute values.
66
+
67
+ § 3 PRELIMINARIES, MOTIVATIONS, AND REQUIREMENTS
68
+
69
+ For ease of exposition, we first denote the relevant entities: 1) individuals whose information is recorded in a sheet, where one individual can correspond to several items; 2) data holders¹ (e.g., institutions, administrators) that own, maintain, and release the sheet.
70
+
71
+ § 3.1 PRELIMINARIES ON TABULAR PRIVACY
72
+
73
+ The basic privacy risk of sharing tabular data is that individual identity is correlated with, and may be revealed by, other seemingly harmless attributes. We call such a unique combination of attributes a quasi-identifier for individual privacy. We first give some key definitions in this privacy context.
74
+
75
+ (Definition 1) Equivalent Class: A subset of items with equal values on all the focused attributes.
76
+
77
+ For example, in Fig. 2, if 'Gender' is the focused attribute, then all the items with value 'M' form an equivalent class, while all items with 'F' form another class. According to the k-anonymity argument, if the size of an equivalent class is smaller than $k$, then the items inside form quasi-identifiers that reveal their owners' identities.
78
+
79
+ Given a tabular set, a general privacy analysis process (e.g., [39]) involves calculating the size of equivalent classes under additive attribute prefixes. For example, one measures the size of the equivalent class of prefix 'Gender=F' and raises a privacy risk if the number is smaller than $k$; then one checks prefix 'Gender=F, Children=1', and so on.
80
+
81
+ (Definition 2) Schema: The order of attributes assigned for measuring the size of equivalent class to find a privacy breach.
82
+
83
+ We denote the process of finding equivalent classes according to the schema as aggregation, so that privacy investigation turns into aggregation in the given order of attributes. Fig. 2 shows the aggregation under a specific schema (from the left column to the right ones). The attribute of the left-most column (Gender) is investigated first by measuring the equivalent classes of Male ('M') and Female ('F') separately. Therein, the 'M' class would breach k-anonymity if $k > 1$, which can be mitigated by merging the values 'M' and 'F' together to loosen the quasi-identifier ($k = 5$). Then the next dimension (Children) is measured by counting items under the prefixes 'M, 1', 'F, 1', and 'F, 2' separately.
84
+
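+ To make the aggregation process concrete, the following minimal Python sketch (ours, not TPA's implementation; the toy table and attribute names are illustrative) scans the equivalence classes under each additive prefix of a schema and flags k-anonymity breaches:

```python
from collections import Counter

def k_anonymity_breaches(rows, schema, k):
    """Scan equivalence classes under each additive prefix of `schema`
    and report prefixes whose class size falls below k."""
    breaches = []
    for depth in range(1, len(schema) + 1):
        prefix = schema[:depth]
        sizes = Counter(tuple(r[a] for a in prefix) for r in rows)
        for values, size in sizes.items():
            if size < k:
                breaches.append((prefix, values, size))
    return breaches

table = [
    {"Gender": "M", "Children": 1, "Cancer": "no"},
    {"Gender": "F", "Children": 1, "Cancer": "no"},
    {"Gender": "F", "Children": 2, "Cancer": "yes"},
]
# A different schema (attribute order) surfaces different breaches first.
print(k_anonymity_breaches(table, ["Gender", "Children", "Cancer"], k=2))
print(k_anonymity_breaches(table, ["Children", "Gender", "Cancer"], k=2))
```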
85
+ § 3.2 MOTIVATIONS: SELF-DEFINED AND DYNAMIC SCHEMA
86
+
87
+ Previous studies use a fixed schema during privacy analysis because analyzing equivalent classes for a tabular set is computationally complex, especially when there are large numbers of data items with many attributes. As a result, they usually perform the analysis according to the original attribute order of the sheet. However, we find that:
88
+
89
+ Remark: Different schema will yield different privacy risk situations, facilitate distinguished privacy-preserving granularity, and introduce distinct risk handling overhead.
90
+
91
92
+
93
+ Figure 3: An example of performing equivalent class merging for privacy enhancement under different schemas (O1 and O2). Obviously, the yielded sheets are different.
94
+
95
+ Taking the merging operation as an example, different schemas correspond to different aggregations and result in different equivalence classes after merging. In the case of Fig. 3, O1 and O2 are the same sheet with different schemas. When checking the first attribute, O1 will merge 'Gender' into $M|F$ (operation a), whereas 'Children' in O2 satisfies $k=2$-anonymity and will be retained. Then, O1 continues to check 'Children' without a merging operation and merges 'Cancer' under the prefix 'Gender-Children'. O2, on the other hand, merges the 'Cancer' and 'Gender' attributes under the prefix 'Children'. Finally, O1 has ten values altered, while O2 has eight changed during aggregation. That is, the schema has an explicit impact on the privacy situation and enhancement level.
96
+
97
+ In particular, we note that a latter attribute in the schema order has a higher chance of breaching privacy, as the equivalent class of a longer prefix (finer granularity) gradually gets smaller and thus more easily falls below the constraint $k$. As a result, latter attributes may be heavily merged for privacy preserving, losing more utility. Considering this, we point out that the schema should be assigned by the data holder according to their privacy/utility preference, e.g., subjectively retaining the information of some attributes by putting them at the front. Furthermore, as we will show in §6, data holders may dynamically adjust the schema to analyze risky attributes at a coarser granularity for flexible merging operations. For example, instead of studying many equivalent classes of a latter attribute, one can move it to the front and perform merging on fewer classes.
98
+
99
+ ¹We use data holder and user interchangeably.
100
+
101
+ Yet, existing visualization tools cannot meet the above dynamic-schema intentions, as a change in schema requires a new round of aggregation, which causes significant latency in online interaction. We are thus motivated to design a new privacy visualization tool that supports schema dynamics.
102
+
103
+ § 3.3 REQUIREMENT ANALYSIS
104
+
105
+ Through meetings with domain experts, we learned that they are familiar with common privacy enhancement technologies, such as k-anonymity and differential privacy. In fact, they have actively applied these techniques to mitigate risk before data release. On this basis, we discussed the insufficiencies of current privacy practice and identified four main requirements:
106
+
107
+ R1: Ability to control schema. As indicated above, different attribute orders express different preferences over attributes and offer different granularities when applying privacy preserving operations. Flexible support for an adjustable schema is widely required.
108
+
109
+ R2: Multidimensional data analysis. Users' prior knowledge is important in risk analysis. Even professional data analysts cannot find relations between attributes by simply glancing at a sheet, and heuristic risk-assessment algorithms lack semantic knowledge, so their results are unreliable. Therefore, domain experts believe that a sketch view for exploring attribute relations is beneficial.
110
+
111
+ R3: Intuitive risk cue. Prior privacy preserving studies have contributed visual designs for privacy risk. In these realizations, users are reminded that there are risks somewhere, without a direct mapping to the specific records on the sheet. Thus, an integrated process for risk presentation and mitigation is expected.
112
+
113
+ R4: Operation-granularity utility evaluation. The sanitization of data inevitably discards some information details. It blurs the data, skews the statistics, and reduces the released utility. In particular, as different schemas and sanitization operations lead to different utilities, users generally want to see the utility outcome of the current settings before further involvement.
114
+
115
+ § 4 BACK-END ENGINE
116
+
117
+ This section introduces the key techniques of TPA and how they are used in the system.
118
+
119
+ § 4.1 DATA STRUCTURE FOR ORDER-DYNAMIC SCHEMA
120
+
121
+ Section 3 discusses the necessity and challenges of adjusting the schema dynamically. We propose to adapt the data cube as the basis for data management, which helps address R1 and R2. On top of the data cube, we design the operation tree, enabling users to change the schema and perform operations with almost no perceptible latency.
122
+
123
124
+
125
+ Figure 4: Data cube of a dataset with attributes $D = \{A, B, C\}$. (a) Tree-based data cube. (b) Aggregation relationships of the data cube.
126
+
127
+ Data cube. The data cube [17] is well suited to handling online analytical processing queries. It is a data processing form for statistics over data, such as SUM, MIN, MAX and AVERAGE. Queries have low latency thanks to pre-aggregation. We introduce the data cube into TPA and use pre-calculated aggregations to reduce query latency.
128
+
129
+ Fig. 4 (a) shows a tree-based data cube. Similar to a search tree, each record is added as a node based on the value of each dimension. However, the data cube also stores the aggregated results (i.e., the subtrees pointed to by the blue arrows). An aggregation stores the records obtained when a dimension value is ignored. When a query does not care about the values of specific dimensions, the data cube can respond quickly by accessing the aggregation. For example, suppose the user wants to find all records whose attribute $C$ is $c_1$, regardless of the other attributes. The result can be obtained by accessing the aggregation $\{all, all, c_1\}$, which is the green node in the figure.
130
+
131
+ We do not calculate statistics over records; instead, we store record indexes in the aggregations. TPA uses Nanocubes [24], an implementation of the data cube that introduces shared links to avoid duplicate aggregations and thus uses less memory. TPA stores categorical and numeric attributes in different ways: for a categorical attribute, branches are created directly from its values; for a numeric attribute, a default branch stores all the original data while the other branches each cover a specific split range. The data cube is built on the server side, where aggregations can be quickly accessed according to the schema.
132
+
133
+ Operation tree. We propose the operation tree for interaction and visualization on the client side. When the user specifies a schema, the operation tree is generated quickly from the data cube. The operation tree is similar to the data cube in that each node denotes an aggregation and stores the indexes of all records in that aggregation. The nodes in the operation tree follow the same aggregation order as the schema, and only the dimensions involved in the schema are stored, making it a lightweight tree designed for front-end (client) interaction. All TPA operations, such as merging, noise adding, and fake data adding, are performed on the operation tree. Besides, each node stores privacy-related parameters for visualization (e.g., the number of equivalence classes, whether noise has been added, and so on).
134
+
135
+ When a new schema is requested, TPA quickly creates the operation tree by accessing the aggregations of the data cube. Fig. 5 (a) and (b) illustrate the process of creating an operation tree when a user is interested in two attributes and gives a schema of Cancer $\rightarrow$ Gender. Taking advantage of the data cube, TPA does not need to walk through all records to build the operation tree; it can find the records of each node just by looking up aggregations in the data cube. TPA is thus able to create the operation tree with little overhead, even if the user frequently reorders the dimensions.
136
+
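+ As a rough illustration of how pre-aggregation supports order-dynamic schemas, the sketch below (a toy stand-in we wrote for this text, not Nanocubes or TPA's code) keys cube cells by value-or-'all' tuples, with None playing the role of 'all', and instantiates an operation tree for an arbitrary schema purely from cube lookups:

```python
from itertools import product

def build_cube(rows, dims):
    """Pre-aggregate record indexes for every cube cell; each record lands in
    2^|dims| cells, choosing its value or the 'all' wildcard per dimension."""
    cube = {}
    for idx, row in enumerate(rows):
        for key in product(*[(row[d], None) for d in dims]):
            cube.setdefault(key, []).append(idx)
    return cube

def query(cube, dims, **fixed):
    """Fetch a pre-aggregated cell; unspecified dimensions mean 'all'."""
    return cube.get(tuple(fixed.get(d) for d in dims), [])

def operation_tree(cube, dims, domains, schema, fixed=None):
    """Build the tree for a user-chosen schema by reading cube cells instead of
    rescanning records; reordering the schema only changes which cells are read."""
    fixed = dict(fixed or {})
    node = {"records": query(cube, dims, **fixed), "children": {}}
    if schema and node["records"]:
        attr, rest = schema[0], schema[1:]
        for value in domains[attr]:
            child = operation_tree(cube, dims, domains, rest,
                                   {**fixed, attr: value})
            if child["records"]:
                node["children"][value] = child
    return node

rows = [{"Gender": "M", "Children": 1, "Cancer": "no"},
        {"Gender": "F", "Children": 1, "Cancer": "no"},
        {"Gender": "F", "Children": 2, "Cancer": "yes"}]
dims = ["Gender", "Children", "Cancer"]
domains = {d: sorted({r[d] for r in rows}, key=str) for d in dims}
cube = build_cube(rows, dims)
print(query(cube, dims, Cancer="no"))     # the {all, all, no} cell -> [0, 1]
tree = operation_tree(cube, dims, domains, ["Cancer", "Gender"])
print(tree["children"]["no"]["records"])  # Cancer=no branch -> [0, 1]
```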
137
+ When we apply a preserving operation, it is performed directly on the operation tree. Fig. 5 (c) shows how the operation tree is updated after a merging operation. Updating may add or delete nodes and branches and change the record values of nodes. Since these changes only tweak the tree structure, they do not incur much computing overhead on the client side, while changes to the operation tree are synchronized to the data cube on the server side, again with no additional delay in interaction.
138
+
139
+ § 4.2 UTILITY QUANTIFICATION
140
+
141
+ Utility is a summary term describing the value of a given data release as an analytical resource [12], which is essentially a measure of the information obtained from the dataset. There is no universally accepted measure of utility, and few studies focus on utility quantification for tabular data. Following the definition of utility, we consider using distance and distribution to measure the utility loss. For any original value $f_a(x)$ (the value of attribute $a$ for record $x$) and processed value $f_a'(x)$ obtained after privacy preserving, we use $L_{\mathrm{distance}}(F_a, F_a')$ and $L_{\mathrm{distribution}}(F_a, F_a')$ to denote the utility loss according to the difference in distance and distribution between them. We propose different algorithms to calculate utility losses for numerical data and categorical data, respectively.
142
+
143
144
+
145
+ Figure 5: An illustration of creating and updating the operation tree. (a) Data cube is built at the back-end based on the tabular data above. (b) Operation tree is generated based on the data cube. (c) An example of updating the operation tree.
146
+
147
+ Numerical Distance. Inspired by the Earth Mover's Distance (EMD) [31], we compare two datasets by sorting the records of both and calculating the displacement of corresponding records:
148
+
149
+ $$
150
+ L_{\mathrm{distance}}(F_a, F_a') = \sum_{i=1}^{n} \frac{|i - j|}{n}, \tag{1}
151
+ $$
152
+
153
+ where $i$ and $j$ refer to the sorted indexes of $f_a(x)$ and $f_a'(x)$, and $n$ is the number of records.
154
+
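+ A minimal sketch of Eq. (1) (ours; 0-based ranks are used, which does not affect the rank differences):

```python
import numpy as np

def numerical_distance_loss(orig, priv):
    """Eq. (1): mean displacement of each record's sorted rank between the
    original attribute values and the privacy-processed ones."""
    orig, priv = np.asarray(orig), np.asarray(priv)
    rank_orig = np.argsort(np.argsort(orig))  # i: rank of each record in F_a
    rank_priv = np.argsort(np.argsort(priv))  # j: rank of each record in F_a'
    return np.abs(rank_orig - rank_priv).sum() / len(orig)

# Swapping the order of two records displaces both ranks by one: 2/3 here.
print(numerical_distance_loss([30, 40, 50], [30, 55, 50]))
```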
155
+ Categorical Distance. Since categorical data may be fuzzy, the value of $f_a(x)$ is actually a set. For example, $f_{\mathrm{gender}} = \{male, female\}$ represents an uncertain value: the gender of this record may be male or female. First, we calculate $I$ for the two fuzzy sets, where $I$ denotes the number of individual values that appear in only one of the sets. Taking $\{a,b,c\}$ and $\{b,c,d\}$ as an example, $a$ and $d$ are individual values, hence $I$ is 2. The distance between the sets can then be calculated by:
156
+
157
+ $$
158
+ L_{\mathrm{distance}}(F_a, F_a') = \sum_{i=1}^{n} \frac{2I}{|f_a(x)| + |f_a'(x)|}, \tag{2}
159
+ $$
160
+
161
+ where $|f_a(x)|$ refers to the size of the set (i.e., the number of fuzzy values it contains).
162
+
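+ A minimal sketch of Eq. (2) (ours), modelling a crisp value as a singleton set:

```python
def categorical_distance_loss(orig_sets, priv_sets):
    """Eq. (2): per-record distance between fuzzy value sets, where I is the
    number of values appearing in only one of the two sets."""
    loss = 0.0
    for f, f_prime in zip(orig_sets, priv_sets):
        I = len(f ^ f_prime)  # symmetric difference, e.g. {a, d} above
        loss += 2 * I / (len(f) + len(f_prime))
    return loss

# A record merged from 'M' into 'M|F' contributes 2*1/(1+2) = 2/3.
print(categorical_distance_loss([{"M"}], [{"M", "F"}]))
```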
163
+ Numerical Distribution. As a nonparametric test, the K-S test [26] is suitable for comparing the distributions of two datasets when the underlying distribution is unknown. We use the K-S test to compare numerical distributions and use the p-value to represent the utility loss:
164
+
165
+ $$
166
+ L_{\mathrm{distribution}}(F_a, F_a') = 1 - p. \tag{3}
167
+ $$
168
+
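+ A minimal sketch of Eq. (3) (ours) using SciPy's two-sample K-S test:

```python
from scipy.stats import ks_2samp

def numerical_distribution_loss(orig, priv):
    """Eq. (3): 1 - p from a two-sample Kolmogorov-Smirnov test; the loss
    grows as the processed distribution drifts away from the original."""
    result = ks_2samp(orig, priv)
    return 1.0 - result.pvalue

print(numerical_distribution_loss([1, 2, 3, 4, 5], [1, 2, 3, 4, 50]))
```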
169
+ Categorical Distribution. To measure the distribution of fuzzy sets, we first obtain the global distribution of all possible values. For an attribute $a$, we count the occurrences of all values $C = \{c_{a_1}, c_{a_2}, \ldots, c_{a_n}\}$, where $c_{a_n}$ refers to the number of occurrences of value $a_n$. Given a fuzzy set $f_a(x)$, each possible value $a_n$ is counted by $c_{a_n} = c_{a_n} + \frac{1}{|f_a(x)|}$. After obtaining the global counts, the loss can be calculated by:
170
+
171
+ $$
172
+ L_{\mathrm{distribution}}(F_a, F_a') = \sum_{i=1}^{n} \frac{|C - C'|}{n}. \tag{4}
173
+ $$
174
+
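+ A minimal sketch of Eq. (4) (ours), reading $n$ as the number of distinct values over which the counts are compared:

```python
from collections import defaultdict

def soft_counts(sets):
    """Global value counts where a fuzzy set of size s contributes 1/s of a
    count to each of its possible values."""
    counts = defaultdict(float)
    for s in sets:
        for v in s:
            counts[v] += 1.0 / len(s)
    return counts

def categorical_distribution_loss(orig_sets, priv_sets):
    """Eq. (4): mean absolute difference between per-value soft counts C and C'."""
    c, c_prime = soft_counts(orig_sets), soft_counts(priv_sets)
    values = set(c) | set(c_prime)
    return sum(abs(c[v] - c_prime[v]) for v in values) / len(values)

# Merging one 'M' record into 'M|F' shifts half a count from M to F: loss 0.5.
print(categorical_distribution_loss([{"M"}, {"F"}], [{"M", "F"}, {"F"}]))
```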
175
+ § 5 FRONT-END VISUALIZATION
176
+
177
+ As shown in Fig. 6, the front-end of TPA works in five steps: importing, building the data cube, privacy analysis, privacy preservation, and exporting. Among them, (c) and (d) are of most concern to data holders. Being at the heart of visualization and interaction, these two steps work iteratively, presenting risks and performing enhancement until privacy and utility are both satisfactory.
178
+
179
+ § 5.1 IMPORTING
180
+
181
+ As the first step in the pipeline, the user uploads the data sheet here. TPA attempts to automatically identify the attribute types (categorical or numeric), and the user can correct possible misjudgments by setting the types manually. Once an attribute type is determined, it cannot be modified in subsequent steps.
182
+
183
+ § 5.2 BUILDING DATA CUBE
184
+
185
+ After receiving the sheet uploaded in the first step, TPA builds the data cube for data management and creates a session. The session is used to respond to schema requests and to keep track of updates to the operation tree.
186
+
187
+ § 5.3 PRIVACY ANALYSIS
188
+
189
+ Fig. 1 (a) shows how the schema is modified. This widget has two boxes (left and right), and the user changes the order by moving attributes between them. The first time this step is reached, all attributes are in the right box; users can select the attributes of interest and move them to the left, and can add or remove attributes of interest at any time. The attributes in the left box can be dragged at will to adjust the aggregation order. Thanks to the data cube, any change to the schema instantly generates the corresponding operation tree. In addition, clicking an attribute marks it as sensitive (used for l-diversity and t-closeness).
190
+
191
+ Abstract. Aggregations sort records into equivalent classes according to the schema, but dozens or even hundreds of lines of records are hard to summarize. To help data holders understand and analyze the relations between attributes (R2), we design the visual abstract. As shown in Fig. 1 (b), TPA provides a global abstract, which shows the distribution and proportion of values. In addition to the global summary, TPA supports drawing an abstract for any selected aggregation: clicking on the left of a record collapses or expands the aggregation, and an abstract is drawn for the collapsed one. There are two types of abstracts, as shown in Fig. 7:
192
+
193
+ * The categorical abstract in (a). Its value distribution is represented by the percentage of each color block. Fuzzy values, such as null values and uncertain values, are split evenly among all possible blocks. The light (upper) part of a color block refers to the uncertain values; by observing the proportion of the light part, users can see how many records have undergone the merging operation.
194
+
195
196
+
197
+ Figure 6: TPA visualization framework, a 5-step pipeline: import the data sheet, build the data cube, iterate to analyze and deal with privacy risks, and finally export the data sheet.
198
+
199
200
+
201
+ Figure 7: An abstract design for focused data items (e.g., equivalent class) summarizing. An example abstract of the categorical attributes (a) and numeric attributes (b).
202
+
203
+ * The numeric abstract in (b), based on a box-plot design. The box plot clearly shows the extreme, quartile, and mean values of the aggregation.
204
+
205
+ With these summaries, users can quickly grasp the information of the selected aggregation and the relations between the data of different attributes, which is helpful for data analysis.
206
+
207
+ Privacy risk tree. The abstract can guide data analysis and help explore data relations, but users also want to be told directly where the privacy risks are (R3). We therefore propose a more intuitive visual design, the risk tree, which locates privacy risks according to anonymity technologies. Fig. 8 illustrates the risk tree widget. A selector at the top left of the widget allows the user to select a specific anonymity technology among k-anonymity, l-diversity and t-closeness; the constraint parameters are set by the slider. The risk tree consists of layers of pie charts, with the layers from inside to outside corresponding to the given schema. The division of pieces at each layer represents the distribution of that attribute's values, and each piece corresponds to a node (aggregation) of the operation tree. Whether each piece satisfies the constraint is computed from the parameters set by the user, and the privacy risk of each aggregation is mapped to a color. When a piece does not meet the constraint, its color is calculated by linear interpolation, so the color conveys the degree of risk of each aggregation. Users can hover to view specific aggregation information, and click to jump to its location in the main view.
208
+
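+ A minimal sketch of the linear risk interpolation for a single pie piece (our reading of the mapping; the concrete color scale is a design choice of the tool):

```python
def risk_level(class_size, k):
    """Map a k-anonymity deficit to a risk value in [0, 1] by linear
    interpolation: classes of size >= k are safe (0.0); a singleton is 1.0."""
    if class_size >= k:
        return 0.0
    return (k - class_size) / max(k - 1, 1)

print([risk_level(s, k=7) for s in (1, 4, 7)])  # [1.0, 0.5, 0.0]
```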
209
+ Due to the different granularity of each layer, the actual priorities of risks differ. The aggregation nodes of the outer layers have fewer records and thus easily expose fine-grained privacy risks. Conversely, a high-risk color on an inner aggregation indicates large-scale leakage and deserves more attention. Put simply, a risk on an inner aggregation means that an attacker can identify items using less information, so it should be dealt with first.
210
+
211
212
+
213
+ Figure 8: A risk tree design for intuitive perception of privacy risks in aggregations.
214
+
215
+ § 5.4 PRIVACY PRESERVATION
216
+
217
+ The privacy risks identified in the previous step could be addressed in the this step. TPA provides four operations for privacy enhancement: merging, noise injection, fake data injection, and removing. Operations other than merging require a selection of records. As shown in Fig. 9 (b), user can select records by ctrl+clicking equivalent classes or records.
218
+
219
220
+
221
+ Figure 9: Preserving operations. (a) Merging operation. (b) Select records. (c) Open operations menu.
222
+
223
+ Merging. Merging is the primary preserving operation, which prevents an attacker from identifying items by making values fuzzy. Two equivalent classes can be merged when all their prior attributes have the same values (i.e., the two nodes in the operation tree have the same parent node). As shown in Fig. 9 (a), dragging one folded class onto another merges them. The values of these two classes are updated from concrete values (A, B) to an uncertain value (A|B). Besides, the two merged aggregations exist as a new class in the operation tree, so the user can continue to merge it with other classes that have the same parent.
224
+
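+ The following sketch (reusing the plain-dict node layout from our §4.1 sketch; not TPA's client code) merges two sibling classes into one fuzzy class and recursively pools their subtrees:

```python
def merge_nodes(a, b):
    """Union two operation-tree nodes ({'records', 'children'} dicts),
    recursively pooling branches that share a value."""
    merged = {"records": a["records"] + b["records"],
              "children": dict(a["children"])}
    for value, node in b["children"].items():
        if value in merged["children"]:
            merged["children"][value] = merge_nodes(merged["children"][value], node)
        else:
            merged["children"][value] = node
    return merged

def merge_siblings(parent, v1, v2):
    """Drag class v1 onto sibling v2: both become one fuzzy class 'v1|v2'
    that can itself be merged again with other siblings."""
    a, b = parent["children"].pop(v1), parent["children"].pop(v2)
    parent["children"][f"{v1}|{v2}"] = merge_nodes(a, b)

parent = {"records": [0, 1], "children": {
    "A": {"records": [0], "children": {}},
    "B": {"records": [1], "children": {}}}}
merge_siblings(parent, "A", "B")
print(parent["children"]["A|B"]["records"])  # [0, 1]
```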
225
+ Adding noise. The noise operation applies to numeric attributes. Clicking 'Add noise' in the menu opens a new view for the noise operation, shown in Fig. 10. The view shows the histogram of each numeric attribute, and the number of bars in the histogram determines the granularity of the bins, which can be set by tweaking the slider above. The noise operation adds Laplace noise to the data based on differential privacy. One can click the switch in the upper right corner to set the noise parameter, and drag the white dot to set the Laplace $\lambda$ of each bin, i.e., how much noise to add. After the parameters are set, the view shows red lines denoting the fluctuation of each bin after adding noise.
226
+
227
228
+
229
+ Figure 10: A visual design for adding noise.
230
+
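+ A minimal sketch of per-bin Laplace noise (our illustration of the mechanism; function and parameter names are ours):

```python
import numpy as np

def add_binned_laplace_noise(values, bin_edges, lambdas, seed=0):
    """Perturb numeric values with Laplace noise whose scale lambda is set
    per histogram bin, mirroring the per-bin sliders in the noise view."""
    rng = np.random.default_rng(seed)  # seeded only to make the demo stable
    values = np.asarray(values, dtype=float)
    bins = np.clip(np.digitize(values, bin_edges) - 1, 0, len(lambdas) - 1)
    scales = np.asarray(lambdas, dtype=float)[bins]
    return values + rng.laplace(0.0, scales)

charges = [120.0, 980.0, 15400.0, 22100.0]
# Stronger noise (larger lambda) on the sparse high-charge bin.
print(add_binned_laplace_noise(charges, [0, 1000, 30000], [50.0, 2000.0]))
```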
231
+ It is unreasonable to add noise to all data indiscriminately. As shown in Fig. 11, TPA provides a matrix view for analyzing and filtering the data of interest. It shows the two-dimensional distribution of attributes, where the x-axis of each chart is the attribute above the view and the y-axis is the corresponding numeric attribute. A scatter plot is used for numeric-numeric combinations and a grouped box plot for numeric-categorical combinations. Users can select data by brushing and clicking, at which point the noise operation is applied only to the selected data.
232
+
233
234
+
235
+ Figure 11: The matrix view presents the two-dimensional distribution of data and provides data filters.
236
+
237
+ Adding fake data. The fake operation uses CTGAN [42] to generate synthetic records and adds them to the sheet to confuse attackers. After clicking 'Generate fake data' in the menu, TPA uses the selected records as training inputs to generate synthetic records. Synthetic data is not always effective in preventing leakage, but it provides a method that requires no other prior knowledge. Since the synthetic data have a distribution similar to the training inputs, the utility loss can be controlled to some extent.
238
+
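+ A minimal sketch of this step with the open-source ctgan package (assuming `pip install ctgan`; the toy data, column names, and training settings are ours, and TPA's actual integration may differ):

```python
import pandas as pd
from ctgan import CTGAN

# Toy stand-in for the records selected in the main view.
selected = pd.DataFrame({
    "age": list(range(30, 70)),
    "smoker": ["yes", "no"] * 20,
})

model = CTGAN(epochs=10, batch_size=40)  # small settings for a quick toy run
model.fit(selected, discrete_columns=["smoker"])
fake_rows = model.sample(5)  # synthetic records to inject into the sheet
print(fake_rows)
```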
239
+ Removing records. Sometimes, users want to remove records directly (e.g., outlier data). The removing operation can be applied to remove the selected records from the sheet.
240
+
241
+ History View. The history view records all privacy enhancement operations applied. As shown in Fig. 12, the view lists historical states and their details, allowing the user to go back to a historical state. It also shows the number of records affected by each operation, which helps users understand its granularity. In addition, users can compare utility losses by selecting two historical states.
242
+
243
244
+
245
+ Figure 12: The history view records the historical states.
246
+
247
+ Utility analysis. Any preserving operation, whether it modifies original values or adds/removes records, results in a loss of utility. Thus, users want to see how utility changes with each operation (R4). TPA uses the measures introduced in Section 4.2 to estimate the utility loss by calculating the distance and distribution differences. To compare the utility changes of operations, we propose the utility comparison view (Fig. 13). Users can select two historical states in the history view to compare. When a historical state is selected, TPA computes the result of applying the operations from the first one up to the selected one, and then calculates the difference in utility between the selected state and the original sheet.
248
+
249
+ To compare two different states, we utilize a superimposed matrix to visualize their changes. The rows represent the algorithms to be compared and the columns represent attributes. Each cell is divided into an outer region and an inner region, with the background color saturation representing the difference in utility between the two states. The higher the saturation, the more this attribute differs from the original data (high utility loss). The view is designed to help users understand changes in utility.
250
+
251
252
+
253
+ Figure 13: Comparison of two historical states, indicating the difference in utility loss.
254
+
255
+ § 5.5 EXPORTING
256
+
257
+ The analysis and preservation loop stops once the data holder considers the privacy and utility situation satisfactory. The corresponding sheet is then downloaded and released.
258
+
259
+ § 6 CASE STUDIES
260
+
261
+ We conduct two case studies with the prototype of TPA, using data from the insurance and medical domains.
262
+
263
+ § 6.1 ANALYZING THE MEDICAL COST DATASET
264
+
265
+ The medical cost dataset is an insurance-billed personal medical cost obtained from a book [19]. It has a sheet which shows the age, gender, bmi, children (number of children), smoker, region and charges of 1,339 personal information. This dataset has been sanitized before releasing. In this example, we assume that the dataset collects data from the same hospital and the attacker is most likely to identify individuals through linking attacks.
266
+
267
+ The records of children are numerical. Obviously, the number of children doesn't vary that much and people focus more on whether the patient has a child. Therefore, we mark children as categorical in step 1. After completing the basic setup, we continue to conduct the preserving pipeline.
268
+
269
270
+
271
+ Figure 14: Process of dealing with privacy risks in the Medical Cost Dataset. (a) Privacy risks are associated with smokers. (b) Adjust the schema to identify high risk aggregations. (c) Use the merging operation to address risks. (d) Affected records.
272
+
273
+ In the case of unfamiliar data, anonymity analysis can be considered first. Setting the k-anonymity constraint of the risk tree to $k = 7$ and observing the visualization in Fig. 14 (a), we find two prominent high-risk aggregations. Jumping to the specific aggregations in the sheet, we find they are all smokers. We could merge the 'yes' and 'no' values of the attribute 'smoker', but much non-smoking data would also be blurred. Comparing aggregations of the previous attribute 'children', we find that both of them have more than four children. It is easy to understand that people with more children are a minority and thus easier to identify. Thus, we move 'smoker' and 'children' to the front of the order. Under this schema, in Fig. 14 (b), the new risk tree indicates the high-risk aggregations: the patients at high risk are those who have more than four children. As shown in Fig. 14 (c) and (d), by merging the aggregations of smokers who have '4' and '5' children, the risks are reduced, and only four risk-related records are modified.
274
+
275
+ When we move 'charges' and 'smoker' to the front of the order and collapse the aggregations of 'smoker' (Fig. 15), the abstracts in (a) show an interesting pattern: smokers have much higher charges than non-smokers. This pattern indicates that an attacker can predict patients' charges from whether they smoke, with a high degree of confidence. To prevent potential background knowledge attacks, we focus on protecting the privacy of smokers, since smokers are a small group. Therefore, we set a filter to find the high-charge records of both non-smokers and smokers, and merge their aggregations in (b). (c) shows that we make the high-charge records fuzzy, which protects the privacy of smokers. Besides, it is reasonable to keep the low-charge data, which mostly belong to non-smokers (the majority of people).
276
+
277
278
+
279
+ Figure 15: Identify potential risks through abstracts. (a) Smokers have extremely high charges. (b) Filter the records to be protected. (c) and (d) Result of applying preserving operations.
280
+
281
+ § 6.2 ANALYZING PERSONAL KEY INDICATORS OF HEART DISEASE
282
+
283
+ This dataset comes from the CDC (Centers for Disease Control and Prevention) [28], which collects data on the health of U.S. residents. Each record has 300 attributes, including various indicators of the body. According to a CDC report, heart disease is the leading cause of death in the United States. Considering indicators related to heart disease, we narrowed it down to 12 attributes and randomly selected 20,000 records for this example.
284
+
285
286
+
287
+ Figure 16: For a high-dimensional complex dataset, t-closeness is used to explore dimensional correlations and locate high-risk aggregations. (a), (b) and (c) Iterate through schemas to find highly correlated attributes. (d) and (e) Compare the utility loss after applying the preserving operation to 'Race' and 'Sex'. (f) Result of applying preserving operations.
288
+
289
+ Patients certainly do not want to expose their diseases. In this example, we focus on analyzing and handling privacy risks related to the 'HeartDisease' attribute. From the publisher's perspective, we should first find out which other attributes are related to the disease. We set 'HeartDisease' as a sensitive attribute and move it to the end of the order. As shown in Fig. 16 (a), the t-closeness view of the risk tree points out that the distribution of heart disease among drinkers clearly differs from the global distribution. Drinking can thus be considered highly correlated with heart disease. We move 'AlocholDrinking' to the front of the order and look at the risk tree again. The new view (b) shows that 'Stroke' also has a significant effect on the distribution. Thus, we move 'Stroke' after 'AlocholDrinking'.
290
+
291
+ Having moved the highly correlated attributes to the front of the order, the adjusted schema makes it easier to locate risks than a random schema (c). After switching to the k-anonymity view, we find some saliently high-risk aggregations in the branches of 'Sex' and 'Race'. We jump to the high-risk aggregation and try to handle the two attributes separately with the merging operation. Fig. 16 (e) shows the comparison fed back by the utility view: merging 'Sex' incurs less utility loss than merging 'Race'. Therefore, merging the aggregations of 'Sex', as shown in Fig. 16 (d), is the better choice to reduce risks.
292
+
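+ The comparison in (e) can be reproduced with any information-loss measure; the sketch below uses the number of collapsed equivalence classes as one simple proxy (the metric actually shown in the utility view may differ, and the file, filter, and column names are assumptions):
+
+ ```python
+ import pandas as pd
+
+ def collapsed_classes(df, schema, attr):
+     """Equivalence classes lost when `attr` is generalized to '*':
+     a simple proxy for the utility loss of a merging operation."""
+     before = df.groupby(schema).ngroups
+     after = df.assign(**{attr: "*"}).groupby(schema).ngroups
+     return before - after
+
+ df = pd.read_csv("heart_2020.csv")  # hypothetical file name
+ risky = df[df["Stroke"] == "Yes"]   # stand-in for the selected high-risk aggregation
+ schema = ["AlcoholDrinking", "Stroke", "Sex", "Race"]
+ for attr in ("Sex", "Race"):
+     print(attr, collapsed_classes(risky, schema, attr))
+ ```
+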
293
+ <graphics>
294
+
295
+ Figure 17: The abstracts indicate that people who have had a stroke have more physical and mental health problems.
296
+
297
+ To further explore the risks, we collapse the attribute 'Stroke' (Fig. 17). The abstracts show that people who have had a stroke tend to have high values for the mental and physical health problem scores. The proportion of people with both stroke and alcohol drinking is small, and stroke is highly associated with heart disease. Although the health scores are less sensitive, they are also more likely to be collected by attackers, and they should therefore be blurred for patients with heart disease. As shown in Fig. 18, we filter and select the stroke and alcohol-drinking records among patients with heart disease, and add noise to the high values of the mental and physical health scores.
298
+
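+ One common way to realize such blurring is additive Laplace noise clipped back to the valid score range, as in this sketch (the noise model, scale, and column names are our illustrative choices rather than TPA's exact operation):
+
+ ```python
+ import numpy as np
+ import pandas as pd
+
+ rng = np.random.default_rng(0)
+
+ def add_noise(df, mask, cols, scale=2.0):
+     """Blur selected numeric columns for the rows matching `mask`,
+     clipping back to each column's observed range."""
+     out = df.copy()
+     for c in cols:
+         noisy = out.loc[mask, c] + rng.laplace(0.0, scale, mask.sum())
+         out.loc[mask, c] = noisy.clip(df[c].min(), df[c].max())
+     return out
+
+ df = pd.read_csv("heart_2020.csv")  # hypothetical file name
+ mask = (df["HeartDisease"] == "Yes") & (df["Stroke"] == "Yes") \
+      & (df["AlcoholDrinking"] == "Yes")
+ df = add_noise(df, mask, ["MentalHealth", "PhysicalHealth"])
+ ```
+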
299
+ <graphics>
300
+
301
+ Figure 18: Add noise to blur the health scores and protect the privacy of people who drink alcohol, have had a stroke, and have heart disease.
302
+
303
+ For a dataset with 12 dimensions and 20,000 records, computing the aggregations even once takes a long time. By taking advantage of the data cube, even such a high-dimensional dataset can still be explored interactively in real time, with the aggregation order adjusted dynamically.
304
+
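+ The sketch below illustrates the idea behind this speed-up (a simplification of a real data cube, with assumed column names): each cuboid, i.e., the counts for one attribute subset, is computed once and cached, so changing the aggregation order only re-indexes cached counts.
+
+ ```python
+ from functools import lru_cache
+ import pandas as pd
+
+ df = pd.read_csv("heart_2020.csv")  # hypothetical file name
+
+ @lru_cache(maxsize=None)
+ def cuboid(attrs):
+     """Counts for one attribute subset, computed once and shared by
+     every aggregation order over the same attributes."""
+     return df.groupby(sorted(attrs)).size()
+
+ def aggregate(order):
+     """Adjusting the order is a cheap re-indexing of a cached cuboid,
+     which keeps the interaction real-time."""
+     s = cuboid(frozenset(order))
+     return s if len(order) == 1 else s.reorder_levels(list(order)).sort_index()
+
+ print(aggregate(("Stroke", "Sex")).head())
+ ```
+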
305
+ § 7 QUALITATIVE DISCUSSIONS
306
+
307
+ We conducted interviews with four domain experts on the applicability of TPA in real-world scenarios. These users are experienced in data analysis and often work with tabular data. They commented positively on our work and offered suggestions for improvement.
308
+
309
+ § 7.1 EFFECTIVENESS
310
+
311
+ Interviewees agreed that TPA was effective for data analysis, especially the aggregation abstracts, which helped them grasp the value distribution of attributes and the correlations between attributes in the dataset (R2). They favored the ability to adjust the schema in real time (R1) and also appreciated TPA's capability to handle large datasets efficiently. One user said that, in the past, it had been difficult to effectively analyze the risks of high-dimensional datasets. Used in conjunction with the risk tree, dynamically adjusting the order was considered helpful for perceiving privacy risks intuitively (R3). In addition, TPA saved them considerable time compared with other visualization tools by providing more preserving operations and allowing them to control the granularity of those operations (R4).
312
+
313
+ § 7.2 LIMITATIONS
314
+
315
+ However, some users pointed out that the interaction design of the prototype was not polished enough, even though we had instructed them on how to use TPA beforehand. Further, some felt that the utility view may be of limited use: while it could remind them of the differences between the current state and the original one, they still did not understand what those differences mean. Some users also suggested providing a recommendation function to help carry out privacy enhancement operations. This indicates that, although TPA is designed to give users high flexibility, they can get lost in the choices, so providing some recommended actions would be a good way to get started quickly.
316
+
317
+ § 7.3 FUTURE WORK
318
+
319
+ Considering that data will be shared for specific analysis tasks, we plan to extract patterns for those tasks (e.g., extreme values of samples, clustering, etc.). By indicating the pattern differences before and after privacy preserving, one can more easily strike a balance between privacy and utility. We will also improve the interface and provide support for more diverse data types, such as time, location, and sequence data.
320
+
321
+ § 8 CONCLUSION
322
+
323
+ We propose a visual tool, TPA, for privacy protection of tabular data. Our design helps users analyze multidimensional data relationships and identify potential privacy issues. In addition, we provide users with several preserving operations to reduce privacy risks, and a utility view is designed to help control the utility loss of those operations. By introducing the data cube, we implemented a system that supports exploring any aggregation order in real time, allowing users to analyze privacy risks from different perspectives and flexibly control the granularity of preserving operations. We use two real datasets to demonstrate that TPA can handle a variety of data, including large and high-dimensional datasets.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BhGl3eYVaf9/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,317 @@
1
+ # Acceleration Skinning: Kinematics-Driven Cartoon Effects for Articulated Characters
2
+
3
+ ![01963e6a-d8ab-70d9-837f-01808b2c67d8_0_202_229_1391_569_0.jpg](images/01963e6a-d8ab-70d9-837f-01808b2c67d8_0_202_229_1391_569_0.jpg)
4
+
5
+ Figure 1: Real-time cartoon-like deformation computed automatically with Acceleration Skinning, extending the range of deformation available from velocity alone. Left: The joint use of angular velocity $\omega$ and acceleration $\alpha$ makes it possible to represent a smooth variation between an acceleration phase ($\omega$ and $\alpha$ have the same orientation), associated with a drag effect, and a deceleration phase ($\omega$ and $\alpha$ have opposite directions), associated with a followthrough transition. Right: A centrifugal effect coupled with a lift deformer represents the lift of the rotating skirts of the dancing toy characters.
6
+
7
+ ## Abstract
8
+
9
+ Cartoon effects described in animation principles are key to adding fluidity and style to animated characters. This paper extends the existing framework of Velocity Skinning to use skeletal acceleration, in addition to velocity, for cartoon-style effects on rigged characters. This Acceleration Skinning is able to produce a variety of cartoon effects from highly efficient closed-form deformers while remaining compatible with standard production pipelines for rigged characters. The paper introduces the framework and demonstrates its applications through three new deformers. Specifically, a followthrough effect is obtained from the combination of skeletal acceleration and velocity, and centrifugal stretch and centrifugal lift effects are introduced using rotational acceleration to model radial stretching and lifting. The paper also explores how applying effect-specific time filtering when combining deformations allows for more stylization and artist control over the results.
10
+
11
+ Index Terms: Computer Animation, Procedural Deformation.
12
+
13
+ ## 1 INTRODUCTION
14
+
15
+ The fundamental principles of animation laid out by Walt Disney Studios remain an important driving factor for animators today [26]. These principles, created by the pioneers of traditional hand-drawn animation, have been guiding artists on how to create effects such as squash and stretch and followthrough to add vitality to animation. Many techniques have been proposed to adapt such cartoon effects to 3D animation production [14]; however, their application still remains a non-trivial task. Currently, professional animators generally create such effects either through rigged animation or using custom, manually controlled deformers [17]. While these approaches allow fine artistic control, they become tedious when adapting the work to multiple characters and effects. As an alternative, physically-based simulation can produce the effects in a fully automatic fashion, but it often requires expert tuning and costly computation that does not adapt easily to the usual production pipeline, including the iterative animation workflow.
16
+
17
+ Recently, a technique called Velocity Skinning [23] (VS) introduced a method able to automatically represent a subset of cartoon effects, including squash and stretch and drag/floppy effect, that can be applied on top of a character deformed by skinning solely using a standard rig. As its name suggests, it relies on the use of velocity at the joint level to drive the deformations, providing real-time effects that automatically adapt to the animation and hierarchy of the rigged character. The approach provides simple and direct artist control over the deformation that works with animator workflow, and does not require costly physical simulation.
18
+
19
+ In this paper, we propose a natural extension we call Acceleration Skinning (AS), which builds on this previous work. Namely, we explore and showcase additional effects for skinning animation based on the use of acceleration in the pipeline, in conjunction with the VS approach. To this end, we propose a more general framework that combines both, as described in Sec. 3. This generalization allows a wider set of deformers while preserving the original advantages of the VS approach, such as its high efficiency and art directability. Foremost, we showcase the additional use of acceleration, in combination with velocity, to model overshooting deformation, a common effect often called followthrough in animation guides (Sec. 4). Further, as acceleration provides a natural separation of coordinate frames, e.g., associated with centrifugal effects when rotational motions are involved, we also propose additional parameterized deformation effects. Specifically, we introduce centrifugal stretch, for the elongation of an object that rotates, and centrifugal lift, for the elevation of a rotating object, such as a twirling skirt (Sec. 5). Finally, we extend the general (VS/AS) domain further by exploring the use of time filters to highlight and tune effects based on artist control (Sec. 6).
20
+
21
+ In total, the contributions of this paper are to extend the capabilities of VS and skinning overall, and add a set of interesting, AS-specific effects with little overhead in implementation and cost while still supporting a convenient workflow for artist-driven cartoon effects.
22
+
23
+ ## 2 RELATED WORK
24
+
25
+ Much of the production animation created today for characters depends on skinning, such as Linear Blend Skinning (LBS) [16], where the surface geometry (skin) of the character is moved in relation to an articulated skeleton. The dependence between the joints of the skeleton and the vertices of the skin is defined by scalar skinning weights, which, in combination with the skeleton and joints, constitute the rig for character skinning. LBS is a straightforward implementation of skinning where the deformation is computed as a linear combination of the joint transforms. A large body of work in computer animation research has improved upon LBS, but it remains a common tool in practice. As such, our general formulation for Acceleration Skinning is derived for the basic LBS formulation, but the acceleration-skinning deformers proposed here may also be applied to other methods, such as the Dual Quaternion approach [8], another common skinning technique.
26
+
27
+ Traditional skinning defines (static) skin poses of the character that can be derived by interpolating keyframes from an animation. However, this approach only interpolates in-between poses and does not model dynamic effects, such as drag at high speeds or followthrough due to inertia. To this end, physically-based simulation has been used to represent so-called secondary actions on top of keyframe motion. We distinguish three main approaches to computing secondary motion. First, projection-based dynamics [18] and its extensions [3] have been proposed to model muscle and soft-tissue jiggling effects [10, 19, 25] and can be tuned to exaggerate specific cartoon-like effects [2, 4]. Second, layering techniques have been proposed where distinct effects are handled as simple physical models, using jiggling implicit surfaces [22], spring-driven bones [11], or custom volume effects such as squash and stretch [1]. Third, reduced deformable models have been introduced [6, 28, 30] where precomputed shape analysis allows for model reduction and therefore sped-up computation. Work is ongoing, and new techniques for producing simulation effects are still being proposed, such as efficient simulated cartoon deformations applied on top of skinning animation [31], which defines the space of secondary effects as the orthogonal subspace of the rig. In total, these physically-based approaches are able to model rich and detailed exaggerated dynamic deformations, possibly with collision handling, but they share two main drawbacks. First, physically-based simulations are limited with respect to efficiency: even fast approaches that can handle models with thousands of vertices cannot easily scale to a large number of meshes at once (as may be required in video game applications) or to very detailed meshes of several million vertices, while geometric methods such as skinning remain orders of magnitude faster. Second, simulation requires numerical time integration computed from the initial conditions. This integration requires the animation to be baked before being visualized, which hampers the animator's efficiency by limiting the possibility of navigating and editing details at arbitrary instants along the animation timeline. As such, standard production pipelines [17, 21] often favor procedural deformers that are independent of past state when possible.
28
+
29
+ Aside from simulation, geometric and kinematics-based deformations have also been proposed in the literature. For instance, sketch-based [9, 15] and example-based [5, 24] techniques aim to precisely define shape deformations for expressive effects, but require manual input for each shape. In contrast, procedural approaches allow more automatic adaptation while remaining computationally efficient, as they take advantage of the position and kinematics of the skeletal structure. For example, Noble and Tang [20] introduce a local bending behavior parameterized by the direction and velocity between consecutive bones. Kwon and Lee [12] propose a squash and stretch deformer parameterized by root-joint velocity, together with a drag effect on the end-effector joints of a skinned character. Procedural bone stretching was also proposed by Kwon and Min [13] for local squash and stretch effects. Recently, Rohmer et al. [23] introduced Velocity Skinning (VS), which can be seen as a generic framework for defining local deformers such as squash and stretch and drag based on joint velocity. Like the work proposed here, the VS approach fits well into standard animation pipelines by reusing the existing rig (i.e., the original skeleton and skinning weights) for automatic parameterization of the deformation. VS also provides a closed-form procedural deformation that can be computed extremely efficiently for each vertex in parallel, similar to raw skinning. Our approach follows VS but extends it to acceleration terms. This allows additional deformation behaviors that cannot be achieved using velocity alone, such as the proposed followthrough effect, which requires deformation just as the velocity approaches zero. We also note that some cartoon effects have been generated using time filtering [27] or oscillating splines [7] applied to vertex trajectories. Our approach parallels these works in applying low-pass time filters in order to control the relative timing and magnitude of effects with respect to the instantaneous joint values of the skeleton.
30
+
31
+ ## 3 ACCELERATION SKINNING
32
+
33
+ In LBS, the deformed position $p$ of an arbitrary vertex at an initial position ${p}^{0}$ can be expressed as the linear blending
34
+
35
+ $$
36
+ p = \mathop{\sum }\limits_{j}{b}_{j}{p}_{j} \tag{1}
37
+ $$
38
+
39
+ where $j$ is the index of a joint and ${b}_{j} \in \left\lbrack {0,1}\right\rbrack$ are the skinning weights, which should satisfy $\mathop{\sum }\limits_{j}{b}_{j} = 1$. ${p}_{j}$ is the rigid skinning deformation of the position ${p}^{0}$ relative to the joint $j$. As shown in Rohmer et al. [23], differentiating this expression in time and reorganizing the terms along the skeleton hierarchy leads to the following "skinning-like" relation in velocity
40
+
41
+ $$
42
+ v = \mathop{\sum }\limits_{j}{\widetilde{b}}_{j}{v}_{/j} \tag{2}
43
+ $$
44
+
45
+ where $v$ is the net velocity of the vertex at position $p$, and ${v}_{/j}$ is the contribution of the rigid transformation of joint $j$ alone to the net velocity. ${\widetilde{b}}_{j}$ are the kinematic weights, obtained from a combination of the skinning weights
46
+
47
+ $$
48
+ {\widetilde{b}}_{j} = \mathop{\sum }\limits_{{k \in \operatorname{Desc}\left( j\right) }}{b}_{k}, \tag{3}
49
+ $$
50
+
51
+ with Desc( $j$ ) being the descendants of the joint $j$ in the skeleton hierarchy.
52
+
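+ As a minimal sketch of Eq. (3), assuming joints are stored in topological order (every parent before its children), that `parent[j]` gives the parent index (-1 for the root), and that Desc($j$) includes $j$ itself:
+
+ ```python
+ import numpy as np
+
+ def kinematic_weights(b, parent):
+     """Eq. (3): b_tilde[v, j] = sum of b[v, k] over k in Desc(j),
+     for a (num_vertices, num_joints) skinning-weight matrix b."""
+     b_tilde = b.astype(float).copy()
+     # Visit children before parents so each joint accumulates the
+     # already-summed weights of its whole subtree.
+     for j in reversed(range(len(parent))):
+         if parent[j] >= 0:
+             b_tilde[:, parent[j]] += b_tilde[:, j]
+     return b_tilde
+
+ parent = np.array([-1, 0, 1])        # tiny three-joint chain
+ b = np.array([[0.2, 0.5, 0.3]])      # one vertex
+ print(kinematic_weights(b, parent))  # [[1.0, 0.8, 0.3]]; the root
+                                      # always sums to 1 since sum(b) = 1
+ ```
+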
53
+ We note that this derivation is valid for any arbitrary degree of differentiation in time. Therefore, the relation in Eq. (2) can also be differentiated, which leads to an acceleration-like skinning
54
+
55
+ $$
56
+ a = \mathop{\sum }\limits_{j}{\widetilde{b}}_{j}{a}_{/j}. \tag{4}
57
+ $$
58
+
59
+ This time, $a$ is the net acceleration of the vertex at position $p$, and ${a}_{/j}$ is the contribution of joint $j$ to the net acceleration.
60
+
61
+ Both the velocity ${v}_{/j}$ and acceleration ${a}_{/j}$ contributions are related to rigid motions, and we can therefore break down their expressions in terms of the linear/angular velocity and acceleration of their respective joint as
62
+
63
+ $$
64
+ {v}_{/j} = {v}_{Lj} + {\omega }_{j} \times {r}_{j} \tag{5}
65
+ $$
66
+
67
+ $$
68
+ {a}_{/j} = {a}_{Lj} + {\alpha }_{j} \times {r}_{j} + {\omega }_{j} \times {\omega }_{j} \times {r}_{j}.
69
+ $$
70
+
71
+ ${v}_{Lj}$ and ${a}_{Lj}$ are respectively the linear velocity and acceleration of joint $j$. ${\omega }_{j}$ is the angular velocity, and ${\alpha }_{j} = {\dot{\omega }}_{j}$ is the angular acceleration vector. ${r}_{j} = p - {\operatorname{proj}}_{j}\left( p\right)$ is the vector between $p$ and its orthogonal projection ${\operatorname{proj}}_{j}\left( p\right)$ onto the rotation axis passing through the joint $j$ and oriented along ${\omega }_{j}$. Note that while the velocity depends only on two components (linear and angular), the acceleration depends on three: the linear component $\left( {a}_{Lj}\right)$, the angular component $\left( {{\alpha }_{j} \times {r}_{j}}\right)$, and the centripetal component $\left( {{\omega }_{j} \times {\omega }_{j} \times {r}_{j}}\right)$. These components are illustrated in Figure 2 for velocity and acceleration.
72
+
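+ These per-joint contributions translate directly into code; the NumPy sketch below (function and variable names are ours) evaluates Eq. (5) and shows that a uniform rotation, with zero angular acceleration, still yields a nonzero, purely centripetal acceleration:
+
+ ```python
+ import numpy as np
+
+ def joint_contributions(p, proj_p, v_L, a_L, omega, alpha):
+     """Eq. (5): velocity and acceleration contributions of one joint.
+     `proj_p` is the projection of vertex p onto the rotation axis."""
+     r = p - proj_p
+     v = v_L + np.cross(omega, r)
+     a = a_L + np.cross(alpha, r) + np.cross(omega, np.cross(omega, r))
+     return v, a
+
+ # Uniform rotation about z (alpha = 0): the centripetal term keeps
+ # the acceleration nonzero and directed toward the axis.
+ v, a = joint_contributions(np.array([1.0, 0.0, 0.0]), np.zeros(3),
+                            np.zeros(3), np.zeros(3),
+                            np.array([0.0, 0.0, 2.0]), np.zeros(3))
+ print(v, a)  # [0. 2. 0.] [-4. 0. 0.]
+ ```
+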
73
+ ![01963e6a-d8ab-70d9-837f-01808b2c67d8_2_148_146_729_409_0.jpg](images/01963e6a-d8ab-70d9-837f-01808b2c67d8_2_148_146_729_409_0.jpg)
74
+
75
+ Figure 2: Angular velocity (left) and acceleration (right) components associated with the circular motion of joint $j$. The acceleration can be split into an angular component (orthogonal to ${\alpha }_{j}$ and ${r}_{j}$) and a centripetal one (oriented toward the rotation axis directed by ${\omega }_{j}$).
76
+
77
+ In line with Rohmer et al. [23], the general idea of Acceleration Skinning is to define "deformers" ($\psi$s) as procedural functions representing a specific type of parameterized deformation at the individual joint level. Subsequently, the deformers are combined over the skeleton hierarchy, as in Eq. (4), in order to distribute the total deformation globally over the skinned surface as
78
+
79
+ $$
80
+ d\left( p\right) = \mathop{\sum }\limits_{j}{\widetilde{b}}_{j}\psi \left( {{v}_{Lj},{a}_{Lj},{\omega }_{j},{\alpha }_{j},{r}_{j}}\right) . \tag{6}
81
+ $$
82
+
83
+ $d\left( p\right)$ is the net deformation applied to $p$ such that the final deformed position is ${p}_{\text{final }} = p + d\left( p\right)$. Further, the deformer $\psi$ can itself be decomposed into a linear sum of individual sub-effects ${\psi }^{\text{effect }}$ such that
84
+
85
+
86
+
87
+ $$
88
+ \psi = \mathop{\sum }\limits_{\text{effects }}{\psi }^{\text{effect }}. \tag{7}
89
+ $$
90
+
91
+ In this paper, we adopt the deformers from the Velocity Skinning work (e.g., drag) and propose three new deformers constructed from the components of the acceleration terms derived in Eq. (5). Namely, we propose formulations for new deformers that we call, respectively, ${\psi }^{ft}$ for followthrough, ${\psi }^{cs}$ for centrifugal stretch, and ${\psi }^{cl}$ for centrifugal lift. These are described in detail in the following sections. As each deformer ${\psi }^{\text{effect }}$ is defined generically for any joint $j$, we omit the explicit dependence on the index $j$ in its velocity and acceleration parameters for notational clarity.
92
+
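+ A reference (non-optimized) evaluation of Eqs. (6) and (7) could look as follows; the data layout (per-joint state objects and the deformer call signature) is an illustrative assumption, and a production implementation would run this per vertex in a shader:
+
+ ```python
+ import numpy as np
+
+ def deform(vertices, joint_states, b_tilde, deformers):
+     """Eqs. (6)-(7): displace every vertex by the weighted sum of the
+     per-joint deformers; `deformers` holds functions psi(state, p)
+     returning a 3-vector displacement."""
+     d = np.zeros_like(vertices)
+     for j, state in enumerate(joint_states):
+         w = b_tilde[:, j]                  # kinematic weights for joint j
+         for i, p in enumerate(vertices):
+             if w[i] == 0.0:
+                 continue                   # skip uninfluenced vertices
+             d[i] += w[i] * sum(psi(state, p) for psi in deformers)
+     return vertices + d
+ ```
+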
93
+ ## 4 FOLLOWTHROUGH DEFORMER
94
+
95
+ Followthrough is a fundamental principle of animation and has been used extensively throughout animation history as a way to convey the inertia of the animated shape (see Figure 3). Drag and followthrough are two effects that often go hand in hand. We describe next our approach to representing the followthrough effect, which relies on the combination of velocity and acceleration information.
96
+
97
+ Recall that the velocity-based floppy deformer ${\psi }^{\text{floppy }}$ was proposed in VS to model a drag effect. This deformer was split into two parts, one driven by the linear velocity and one by the angular velocity. The linear part, ${\psi }^{\text{floppy, lin }}$, represents a translation, while the angular part, ${\psi }^{\text{floppy, ang }}$, represents bending.
98
+
99
+ $$
100
+ {\psi }^{\text{floppy, lin }}\left( {v}_{L}\right) = - {K}_{\text{lin }}{v}_{L} \tag{8}
101
+ $$
102
+
103
+ $$
104
+ {\psi }^{\text{floppy, ang }}\left( {\omega , r}\right) = \left( {R\left( {-{K}_{\text{ang }}\parallel \omega \times r\parallel ,\omega }\right) - {Id}}\right) r,
105
+ $$
106
+
107
+ where $R\left( {\theta , u}\right)$ is the rotation matrix parameterized by an angle $\theta$ and an axis $u$, and ${Id}$ is the identity matrix. ${K}_{\text{lin }}$ and ${K}_{\text{ang }}$ are user-defined coefficients that scale the magnitude of each deformation.
108
+
109
+ ![01963e6a-d8ab-70d9-837f-01808b2c67d8_2_963_161_648_457_0.jpg](images/01963e6a-d8ab-70d9-837f-01808b2c67d8_2_963_161_648_457_0.jpg)
110
+
111
+ Figure 3: Top: the concept of drag and followthrough explained in The Animator's Survival Kit [29]. Bottom: Acceleration Skinning animation results utilizing the proposed acceleration-based drag and followthrough deformers.
112
+
113
+ ![01963e6a-d8ab-70d9-837f-01808b2c67d8_2_924_814_726_543_0.jpg](images/01963e6a-d8ab-70d9-837f-01808b2c67d8_2_924_814_726_543_0.jpg)
114
+
115
+ Figure 4: Three phases during a circular motion illustrated by their angular velocity $\omega$ and acceleration $\alpha$ curves along time. Left (red): acceleration phase where $\alpha$ and $\omega$ are aligned and oriented in the same direction. Middle: constant velocity magnitude with near-zero angular acceleration. Right (yellow): deceleration phase where $\alpha$ and $\omega$ are opposed.
116
+
117
+ A straightforward extension of this model to followthrough consists of substituting the linear and angular acceleration terms for their velocity counterparts in these floppy deformers. However, this substitution leads to undesirable artifacts. Consider a motion (either linear or angular) with three phases: an acceleration phase at its start, a constant-velocity phase, and a deceleration phase at the end, before the motion stops. These three phases are illustrated in Figure 4 (and Figure 1, left) in the case of an angular motion, comparing the velocity-based deformer to the acceleration-based one. In such a case, the naive acceleration-based floppy deformer first applies a drag-like deformation, then applies no deformation at all, and ends during the deceleration phase with the opposite of the drag effect, i.e., an overshoot of the deformation beyond the final pose.
118
+
119
+ ![01963e6a-d8ab-70d9-837f-01808b2c67d8_3_165_162_698_358_0.jpg](images/01963e6a-d8ab-70d9-837f-01808b2c67d8_3_165_162_698_358_0.jpg)
120
+
121
+ Figure 5: Flower model with circular motion. Top: source LBS deformation. Bottom: after application of the followthrough deformer ${\psi }^{ft}$. Note the deformation acting in the fourth and fifth images.
122
+
123
+ This last phase corresponds to the expected behavior for followthrough. However, the deformer also exhibits a drag-like behavior in the first phase, which is not desired. To avoid this unexpected behavior, we modify the proposed deformer by filtering based on the relative direction between the current velocity and acceleration parameters. More precisely, we introduce a smooth indicator function $\mathcal{D}$ characterized by the condition that the velocity component is aligned with, but opposite to, the acceleration component. Specifically, we propose the following indicator function
124
+
125
+ $$
126
+ \mathcal{D} : \left( {a, b}\right) \mapsto \left\{ \begin{matrix} 0 & \text{ if }a \cdot b \geq 0 \\ \left| {a \cdot b}\right| /D & \text{ if }0 \geq a \cdot b \geq - D \\ 1 & \text{ otherwise } \end{matrix}\right. \tag{9}
127
+ $$
128
+
129
+ where the parameter $D \in \left\lbrack {0,1}\right\rbrack$ controls how quickly the transition to the deceleration phase occurs. We then define the followthrough (ft) deformer, ${\psi }^{ft}$, as
130
+
131
+ $$
132
+ {\psi }^{{ft},\operatorname{lin}}\left( {{v}_{L},{a}_{L}}\right) \; = - \mathcal{D}\left( {{v}_{L},{a}_{L}}\right) {K}_{\operatorname{lin}}{a}_{L}
133
+ $$
134
+
135
+ $$
136
+ {\psi }^{{ft},{ang}}\left( {\omega ,\alpha , r}\right) = \mathcal{D}\left( {\omega ,\alpha }\right) \left( {R\left( {-{K}_{ang}\parallel \alpha \times r\parallel ,\alpha }\right) - {Id}}\right) r. \tag{10}
137
+ $$
138
+
139
+
140
+
141
+ Results obtained using this deformer are illustrated on a circular motion in Figure 5.
142
+
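+ For concreteness, a direct transcription of Eq. (9) and of the angular part of Eq. (10) in NumPy might read as follows (the default values of $D$ and ${K}_{\text{ang }}$ are arbitrary illustrative choices):
+
+ ```python
+ import numpy as np
+
+ def smooth_indicator(a, b, D=0.5):
+     """Eq. (9): 0 while a and b point the same way (acceleration
+     phase), ramping up to 1 once they oppose (deceleration phase)."""
+     dot = float(np.dot(a, b))
+     if dot >= 0.0:
+         return 0.0
+     return min(-dot / D, 1.0)
+
+ def rotation(angle, axis):
+     """R(angle, axis) via Rodrigues' formula."""
+     axis = axis / (np.linalg.norm(axis) + 1e-12)
+     K = np.array([[0.0, -axis[2], axis[1]],
+                   [axis[2], 0.0, -axis[0]],
+                   [-axis[1], axis[0], 0.0]])
+     return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
+
+ def followthrough_ang(omega, alpha, r, K_ang=0.1):
+     """Angular part of Eq. (10): bends r only while decelerating,
+     producing the overshoot beyond the final pose."""
+     theta = -K_ang * np.linalg.norm(np.cross(alpha, r))
+     bend = (rotation(theta, alpha) - np.eye(3)) @ r
+     return smooth_indicator(omega, alpha) * bend
+ ```
+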
143
+ In addition, we can further take advantage of the smooth indicator to separate the contribution of drag from that of followthrough. Doing so, we propose an additional pure acceleration-drag (ad) deformer ${\psi }^{ad}$ as
144
+
145
+ $$
146
+ {\psi }^{{ad},{lin}}\left( {{v}_{L},{a}_{L}}\right) \; = - \left( {1 - \mathcal{D}\left( {{v}_{L},{a}_{L}}\right) }\right) {K}_{lin}{a}_{L}
147
+ $$
148
+
149
+ $$
150
+ {\psi }^{{ad},{ang}}\left( {\omega ,\alpha , r}\right) = \left( {1 - \mathcal{D}\left( {\omega ,\alpha }\right) }\right) \left( {R\left( {-{K}_{ang}\parallel \alpha \times r\parallel ,\alpha }\right) - {Id}}\right) r \tag{11}
151
+ $$
152
+
153
+
154
+
155
+ which applies a drag effect during the acceleration phase of the motion (see Figure 6, top). Note that both deformers ${\psi }^{ft}$ and ${\psi }^{ad}$ can be used together on an animation and parameterized separately by adapting their respective coefficients ${K}_{\text{lin }}$ and ${K}_{\text{ang }}$ for additional user control (see the example in Figure 6, bottom). This notion of combining drag with followthrough remains consistent with traditional animation concepts [29] (Figure 3, top).
156
+
157
+ ## 5 CENTRIFUGAL-BASED DEFORMERS
158
+
159
+ During the rotation of a physical system, material points tend to move radially away from the axis; this tendency is countered by centripetal acceleration. We exploit this relationship to propose two new deformers. Namely, as noted in Eq. (5), the net acceleration of a vertex following a rotating motion contains a centripetal component acting radially around the rotation axis, expressed as $\omega \times \omega \times r$. In contrast to the angular component, this component exists even when the magnitude of the velocity remains constant, as in the case of a circular motion with constant angular velocity. We employ this acceleration term in two distinct effects: first, a generic centrifugal stretch deformer ${\psi }^{cs}$, and second, a centrifugal "lift" deformer ${\psi }^{cl}$. By their nature, both are applicable to rotating motion only.
160
+
161
+ ![01963e6a-d8ab-70d9-837f-01808b2c67d8_3_929_152_707_359_0.jpg](images/01963e6a-d8ab-70d9-837f-01808b2c67d8_3_929_152_707_359_0.jpg)
162
+
163
+ Figure 6: Top: application of the acceleration drag ${\psi }^{ad}$ . Bottom: Mix of acceleration drag ${\psi }^{ad}$ with followthrough ${\psi }^{ft}$ .
164
+
165
+ ### 5.1 Centrifugal stretch
166
+
167
+ When observing how chefs make pizza, we see that after kneading and pressing the dough, they toss it in the air with a spin of their wrists. This spin is associated with centripetal acceleration, which in turn leads to a centrifugal effect experienced by any position placed in this rotating frame, stretching the dough further out. Inspired by this notion, we propose a centrifugal stretch deformer that reproduces this effect with
168
+
169
+ $$
170
+ {\psi }^{cs}\left( {\omega , r}\right) = - {K}_{cs}\omega \times \omega \times r. \tag{12}
171
+ $$
172
+
173
+ Note that, as the centrifugal effect acts outward from the axis of rotation, as visualized in Figure 2, we need to negate the centripetal acceleration, which is directed inward. ${\psi }^{cs}$ creates an effect that translates vertices outward, away from the axis of rotation. As the magnitude of the acceleration increases linearly with distance, vertices farther from the axis of rotation experience larger deformation than those closer to it. A simple example appears in Figure 7.
174
+
175
+ ![01963e6a-d8ab-70d9-837f-01808b2c67d8_3_931_1437_710_239_0.jpg](images/01963e6a-d8ab-70d9-837f-01808b2c67d8_3_931_1437_710_239_0.jpg)
176
+
177
+ Figure 7: Centrifugal stretch effect applied onto an animation of a flower twisting its head around its central point.
178
+
179
+ ### 5.2 Centrifugal lift
180
+
181
+ Consider the motion of twirling cloth, such as a dancer's skirt: as the dancer twirls, the skirt gently lifts. The same centrifugal effect that caused the pizza dough to expand also applies to this shape. However, in the case of cloth, the garment surface is nearly inextensible, and internal stress prevents elongation in its local tangent plane. As a result, the rotating dress may first unwrap from its wrinkled configuration into its extended shape before its only remaining degree of freedom, being "lifted" along the local normal direction, comes into play.
182
+
183
+ While a general solution accounting for centrifugal lift is difficult without a dynamic model, we can offer a simple approximation of this lifting effect for cone-like shapes. Cones rotating about their center have consistent normals, which allows us to filter the previously defined deformer to act only along the normal direction to create the effect. Our basic formulation of pure centrifugal lift ${\psi }^{cl}$ is thus
184
+
185
+ $$
186
+ {\psi }^{cl}\left( {\omega , r}\right) = {K}_{cl}\left( {{\psi }^{cs}\left( {\omega , r}\right) \cdot n}\right) n, \tag{13}
187
+ $$
188
+
189
+ where $n$ is the local normal of an ideal cone at the current vertex position $p$, and ${K}_{cl}$ is a user-defined parameter allowing tuned exaggeration of the lift.
190
+
191
+ ![01963e6a-d8ab-70d9-837f-01808b2c67d8_4_235_553_563_519_0.jpg](images/01963e6a-d8ab-70d9-837f-01808b2c67d8_4_235_553_563_519_0.jpg)
192
+
193
+ Figure 8: Centrifugal lift ${\psi }^{cl}$ applied on meshes approximated by a conical surface. The deformation is constrained to act only along the normal associated with the conical approximation.
194
+
195
+ Further, any model that can reasonably be approximated by a cone can benefit from this deformer by using the cone as a proxy geometry and mapping the geometry onto it. For example, the stylized dress in Figure 8 is mapped to the nearest points on the conical approximation to yield a set of normals that are used to deform the skirt with ${\psi }^{cl}$. This effect can further be seen as a more generic deformation filter, applicable to any other deformer $\psi$ to constrain it to act in the normal direction, and can be defined as the functional
196
+
197
+ $$
198
+ {\psi }_{\text{filter }}^{cl} : \psi \mapsto {K}_{cl}\left( {\psi \cdot n}\right) n, \tag{14}
199
+ $$
200
+
201
+ where $\psi$ can be an arbitrary combination of the previously defined deformers.
202
+
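+ The two centrifugal deformers of Eqs. (12)-(14) are only a few lines each; in the sketch below the cone proxy is an illustrative parameterization (apex, axis, and half angle are assumptions, not the paper's exact mapping):
+
+ ```python
+ import numpy as np
+
+ def centrifugal_stretch(omega, r, K_cs=0.05):
+     """Eq. (12): the negated centripetal term pushes vertices radially
+     outward, with farther points deforming more."""
+     return -K_cs * np.cross(omega, np.cross(omega, r))
+
+ def cone_normal(p, apex, axis, half_angle):
+     """Outward normal of an ideal proxy cone at point p; `axis` is a
+     unit vector pointing from the cone opening toward the apex."""
+     q = p - apex
+     radial = q - np.dot(q, axis) * axis
+     radial = radial / (np.linalg.norm(radial) + 1e-12)
+     return np.cos(half_angle) * radial + np.sin(half_angle) * axis
+
+ def centrifugal_lift(omega, r, n, K_cl=1.0):
+     """Eqs. (13)-(14): keep only the normal component of the stretch,
+     so a nearly inextensible skirt lifts instead of elongating."""
+     return K_cl * np.dot(centrifugal_stretch(omega, r), n) * n
+ ```
+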
203
+ ## 6 TIME FILTERING
204
+
205
+ As a final extension to the AS approach, one that can also be applied in the VS framework but did not appear in the previous work, we explore the use of time filtering to offer finer-grained control over the mixing of various deformers. Time filtering is a useful tool in general. In all our examples, the input motion is provided by smooth curves interpolated from existing key poses, so velocity and acceleration can be described analytically from the keyframes. More generally, we may also allow non-smooth inputs, for example coming from direct interaction where the user articulates the skeleton using a mouse; in that case, we use numerical differentiation of the joint frames, smoothed by a low-pass filter to limit spurious high-frequency noise. To this end, we consider a simple auto-regressive first-order filter
206
+
207
+ $$
208
+ y\left( t\right) = {\gamma x}\left( t\right) + \left( {1 - \gamma }\right) y\left( {t - {\Delta t}}\right) , \tag{15}
209
+ $$
210
+
211
+ where $x$ represents the input value (linear/angular velocity or acceleration), $y$ its filtered value, ${\Delta t}$ the frame duration, and $\gamma \in \left\lbrack {0,1}\right\rbrack$ a user-defined parameter that controls the cutoff frequency. Qualitatively, a small value of $\gamma$ leads to a very smooth but delayed signal, while values near one yield a snappier response that retains higher frequencies.
212
+
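+ Eq. (15) is a standard one-pole low-pass filter; the sketch below keeps one instance per effect so that each deformer can be fed a differently delayed signal (the gamma values are illustrative):
+
+ ```python
+ class LowPass:
+     """Eq. (15): first-order auto-regressive low-pass filter."""
+
+     def __init__(self, gamma):
+         self.gamma = gamma  # cutoff control in [0, 1]
+         self.y = None       # filter state
+
+     def __call__(self, x):
+         # y(t) = gamma * x(t) + (1 - gamma) * y(t - dt)
+         self.y = x if self.y is None else \
+             self.gamma * x + (1.0 - self.gamma) * self.y
+         return self.y
+
+ # One instance per effect: a snappy signal for followthrough and a
+ # smoother, delayed one for centrifugal stretch.
+ omega_for_ft = LowPass(gamma=0.8)
+ omega_for_cs = LowPass(gamma=0.2)
+ ```
+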
213
+ ![01963e6a-d8ab-70d9-837f-01808b2c67d8_4_978_159_663_377_0.jpg](images/01963e6a-d8ab-70d9-837f-01808b2c67d8_4_978_159_663_377_0.jpg)
214
+
215
+ Figure 9: Comparison of untuned (left) and tuned (right) time filters. The untuned filter has frames distributed nearly equally throughout the clip. The tuned filter has more frames, with more easing at the start and end of the clip. It also has a smoother arc motion.
216
+
217
+ ![01963e6a-d8ab-70d9-837f-01808b2c67d8_4_997_710_572_357_0.jpg](images/01963e6a-d8ab-70d9-837f-01808b2c67d8_4_997_710_572_357_0.jpg)
218
+
219
+ Figure 10: Comparison between the use of a small time filter (top) and a large time filter (bottom) applied to the acceleration-based drag and followthrough deformers. Frames (2), (3), and (5) emphasize the differences between the two filters, showing that small filters increase deformer reactivity.
220
+
221
+ Since different effects can take as input a signal filtered with different $\gamma$ values, a controlled delay can be applied to the time at which a specific effect takes place relative to another. Empirically, we find a long and smooth centrifugal stretch deformation useful, while followthrough works best as a faster and more temporary effect. This authoring of effect timing is supported trivially through the described time filtering, by considering a small $\gamma$-value for the kinematic signals used in ${\psi }^{ft}$, and a larger $\gamma$-value for the $\omega$ used in ${\psi }^{cs}\left( {\omega , r}\right)$. As shown in Figure 9, adjusting these values allows artistic control over the resulting animation timing of each individual effect, while preserving a simple combination of the effects as a linear summation, $\psi = {\psi }^{ft} + {\psi }^{cs}$ in this case.
222
+
223
+ ## 7 RESULTS
224
+
225
+ We illustrate and compare AS results obtained on more complex animations, which can also be seen in the accompanying video. All illustrations are computed in real time (approx. 60 fps) on a common laptop using a non-optimized CPU implementation. The cow and flower examples have rigs of about 10 joints and up to a thousand triangles. Although we do not propose a GPU implementation in this paper, the computational cost of AS is similar to that of VS and shares the same properties: it is fully compatible with highly efficient computation in a single-pass vertex shader, which has been shown to scale up to millions of vertices [23]. However, the non-sparsity of the weights $\widetilde{b}$ limits the extreme optimizations that can be applied to raw LBS.
226
+
227
+ ![01963e6a-d8ab-70d9-837f-01808b2c67d8_5_407_158_967_452_0.jpg](images/01963e6a-d8ab-70d9-837f-01808b2c67d8_5_407_158_967_452_0.jpg)
228
+
229
+ Figure 11: Flower animation with LBS (top), acceleration skinning with acceleration drag and followthrough deformation (bottom). The highlighted frames showcase the flower face bending due to acceleration skinning deformation.
230
+
231
+ Figure 11 compares an artist-made motion of a flower pot "jumping" from left to right. The top row shows the raw animation set by the artist using LBS, while the bottom row illustrates the result obtained after adding the AS deformers. The bending of the flower during the acceleration (drag effect) and deceleration (followthrough effect) phases is highlighted with red rectangles. Figure 12 shows a comparison between VS and AS on an animated cow, illustrating the conceptual example shown in Figure 3. The main difference between AS and VS can be noticed at the end of the motion, during the deceleration phase, where the followthrough deformer acts in the AS animation.
232
+
233
+ While both VS and AS propose a notion of "stretch" deformer, the two differ sharply in their effect and are applicable in separate scenarios (see Figure 13). VS stretch relies on a scaling transformation to produce a squash and stretch effect. It requires an axis built from an ad-hoc frame using the skeletal structure and a notion of relative centroid associated with a limb and its descendants. VS stretch is relevant for illustrating cartoon effects elongated along the line of action, and it allows both stretching and squeezing in that direction. In contrast, AS centrifugal stretch relies on local translation only, which does not require the precomputation of such a centroid, and its direction is fully defined by the joint transformation. However, the lack of a centroid means that squeezing centered around the moving limb cannot be represented directly.
234
+
235
+ ## 8 CONCLUSION
236
+
237
+ We introduce Acceleration Skinning as a real-time skinning technique that employs the key components of articulated skeletal acceleration for cartoon effects. The general idea is to utilize specific acceleration terms to expand the gamut of deformation effects available through the skinning pipeline. The approach is meant to be employed in conjunction with the recent work on Velocity Skinning [23]. In this paper, we showcase its efficacy by creating three new deformation effects, including automatic followthrough, a common design tool called out by classic 2D animators. Beyond these, AS is a general approach with an expandable collection of effects.
238
+
239
+ This method is not intended as a replacement for physical simulation of deformation, but rather as an artist-driven tool to create stylized secondary motion. To support this, it is easy to control and works "out of the box" on standard skeletal rigs. As the effects are tied to the skinning rig and not to a specific animation, AS (and VS) are useful for real-time animation effects, even for unseen motion inputs. AS extends the capabilities of LBS and VS to empower animators through a simple tool that bootstraps existing pipelines. Further, in this paper we explore the power of effect-specific time filtering and its relevance to enhancing artist control. However, unlike physical modeling, the technique cannot handle collisions, interaction forces, and other nuanced dynamic effects that are generated by direct simulation.
240
+
241
+ While we describe effects that hearken back to traditional animation, the technique is not limited to these. For example, one possible future extension would be to model oscillatory motion within the skinning system: expanding the system to include higher-order derivatives would allow more oscillations to be modeled, so employing the technique with higher derivatives is a natural direction for future work. Because the work applies in real time, we are excited to investigate interactive applications, for example in gaming. As an AS rig works independently of the animation, it may be applied as an add-on for interactive character animation, such as in games or in a virtual reality (VR) setting.
242
+
243
+ In summary, we introduce the AS method that is a natural extension to VS, pushing further into the potential for deformable effects layered over LBS, and providing a wider collection of stylized deformations for artists to employ.
244
+
245
+ ## REFERENCES
246
+
247
+ [1] A. Angelidis and K. Singh. Kinodynamic skinning using volume-preserving deformations. In SCA, 2007.
248
+
249
+ [2] Y. Bai, D. M. Kaufman, K. Liu, and J. Popovic. Artist-Directed Dynamics for 2D Animation. ACM Trans. on Graphics. Proc. ACM SIGGRAPH, 35(4), 2016.
250
+
251
+ [3] S. Bouaziz, S. Martin, T. Liu, L. Kavan, and M. Pauly. Projective dynamics: fusing constraint projections for fast simulation. ACM Trans. on Graphics. Proc. ACM SIGGRAPH, 33(4), 2014.
252
+
253
+ [4] S. Coros, S. Martin, B. Thomaszewski, C. Schumacher, R. Sumner, and M. Gross. Deformable Objects Alive! ACM Trans. on Graphics. Proc. ACM SIGGRAPH, 31(4), 2012.
254
+
255
+ [5] M. Dvoroznak, P. Bénard, P. Barla, O. Wang, and D. Sykora. Example-Based Expressive Animation of 2D Rigid Bodies. ACM Trans. on Graphics. Proc. ACM SIGGRAPH, 36(4), 2017.
256
+
257
+ [6] D. L. James and D. K. Pai. DyRT: dynamic response textures for real time deformation simulation with graphics hardware. ACM Trans. on Graphics, 21(3), 2002.
258
+
259
+ [7] M. Kass and J. Anderson. Animating Oscillatory Motion With Overlap: Wiggly Splines. ACM Trans. on Graphics. Proc. ACM SIGGRAPH, 2008.
260
+
261
+ ![01963e6a-d8ab-70d9-837f-01808b2c67d8_6_423_161_965_395_0.jpg](images/01963e6a-d8ab-70d9-837f-01808b2c67d8_6_423_161_965_395_0.jpg)
262
+
263
+ Figure 12: Comparison between velocity skinning (top) and acceleration skinning (bottom) deformation during the drag and followthrough phase.
264
+
265
+ ![01963e6a-d8ab-70d9-837f-01808b2c67d8_6_234_658_542_402_0.jpg](images/01963e6a-d8ab-70d9-837f-01808b2c67d8_6_234_658_542_402_0.jpg)
266
+
267
+ Figure 13: Comparison between the VS stretch and AS centrifugal stretch deformer effects. Green arrows indicate the direction of stretch for rotation about the blue pivot. VS stretch also requires the notion of a centroid (pink) as well as a frame that depends on the bone orientation. Both capture distinct effects: VS stretch models cartoon squash and stretch to suggest the intensity of the speed of the motion, while AS centrifugal stretch models elongation, as if due to material yield, by representing the centrifugal effect of bone rotation.
268
+
269
+ [8] L. Kavan, S. Collins, J. Zára, and C. O'Sullivan. Geometric skinning with approximate dual quaternion blending. ACM Transactions on Graphics (TOG), 27(4):1-23, 2008.
270
+
271
+ [9] R. H. Kazi, T. Grossman, B. Umetani, and G. Fitzmaurice. Motion Amplifiers: Sketching Dynamic Illustrations Using the Principles of 2D Animation. ACM CHI, 2016.
272
+
273
+ [10] M. Komaritzan and M. Botsch. Projective Skinning. PACM, Proc. I3D, 1(1), 2018.
274
+
275
+ [11] J. Kwon and I. Lee. Exaggerating Character Motions Using Sub-Joint Hierarchy. Computer Graphics Forum., 27(6):1677-1686, 2008.
276
+
277
+ [12] J.-Y. Kwon and I.-K. Lee. The Squash-and-Stretch Stylization for Character Motions. IEEE TVCG, 18(3), 2012.
278
+
279
+ [13] Y. Kwon and K. Min. Motion Effects for Dynamic Rendering of Characters. Lecture Notes in Electrical Engineering, 181:331-338, Jan. 2012.
280
+
281
+ [14] J. Lasseter. Principles of Traditional Animation Applied to 3D Computer Animation. Computer Graphics. Proc. ACM SIGGRAPH, 21(4), 1987.
282
+
283
+ [15] Y. Li, M. Gleicher, Y.-Q. Xu, and H.-Y. Shum. Stylizing Motion with Drawings. In SCA, 2003.
284
+
285
+ [16] N. Magnenat-Thalmann, R. Laperrire, and D. Thalmann. Joint-dependent local deformations for hand animation and object grasping. In Proceedings of Graphics Interface, 1988.
286
+
287
+ [17] Maya. Squash and Jiggle deformers. Autodesk, 2018. https://knowledge.autodesk.com.
288
+
289
+ [18] M. Müller, B. Heidelberger, M. Teschner, and M. Gross. Meshless deformations based on shape matching. ACM Trans. on Graphics. Proc. ACM SIGGRAPH, 24(3), 2005.
290
+
291
+
292
+
293
+ [19] N. Iwamoto, H. P. H. Shum, L. Yang, and S. Morishima. Multi-layer Lattice Model for Real-Time Dynamic Character Deformation. Computer Graphics Forum. Proc. Pacific Graphics, 34(7), 2015.
294
+
295
+ [20] P. Noble and W. Tang. Automatic Expressive Deformations for Stylizing Motion. In GRAPHITE, 2006.
296
+
297
+ [21] J. A. Okun and S. Zwerman. The VES Handbook of Visual Effects. Industry Standard VFX Practices and Procedures. Focal Press, 2010.
298
+
299
+ [22] A. Opalach and S. Maddock. Disney Effects Using Implicit Surfaces. In Workshop on Animation and Simulation, 1994.
300
+
301
+ [23] D. Rohmer, M. Tarini, N. Kalyanasundaram, F. Moshfeghifar, M.-P. Cani, and V. Zordan. Velocity Skinning for Real-time Stylized Skeletal Animation. Computer Graphics Forum, Proc. Eurographics, 40(2), 2021.
302
+
303
+ [24] K. Ruhland, M. Prasad, and R. McDonnel. Data-driven approach to synthesizing facial animation using motion capture.
304
+
305
+ [25] N. A. Rumman and M. Fratarcangeli. Position based skinning of skeleton-driven deformable characters. In Proceedings of the 30th Spring Conference on Computer Graphics, SCCG '14, p. 83-90, 2014.
306
+
307
+ [26] F. Thomas and O. Johnston. Disney Animation: The Illusion of Life. Disney Editions, 1981.
308
+
309
+ [27] J. Wang, S. M. Drucker, M. Agrawala, and M. F. Cohen. The cartoon animation filter. ACM Trans. on Graphics. Proc. ACM SIGGRAPH, 25, 2006.
310
+
311
+ [28] Y. Wang, N. J. Weidner, M. A. Baxter, Y. Hwang, D. M. Kaufman, and S. Sueda. REDMAX: Efficient & flexible approach for articulated dynamics. ACM Trans. Graph., 38(4), July 2019.
312
+
313
+ [29] R. Williams. The Animator's Survival Kit-Revised Edition: A Manual of Methods, Principles and Formulas for Classical, Computer, Games, Stop Motion and Internet Animators. Faber and Faber, 2009.
314
+
315
+ [30] H. Xu and J. Barbic. Pose-Space Subspace Dynamics. ACM Trans. on Graphics. Proc. ACM SIGGRAPH, 35(4), 2016.
316
+
317
+ [31] J. E. Zhang, S. Bang, D. Levin, and A. Jacobson. Complementary Dynamics. ACM Trans. on Graphics. Proc. ACM SIGGRAPH Asia, 2020.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BhGl3eYVaf9/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,243 @@
1
+ § ACCELERATION SKINNING: KINEMATICS-DRIVEN CARTOON EFFECTS FOR ARTICULATED CHARACTERS
2
+
3
+ <graphics>
4
+
5
+ Figure 1: Real-time cartoon-like deformation computed automatically with Acceleration Skinning, extending the range of deformation available from velocity alone. Left: The joint use of angular velocity $\omega$ and acceleration $\alpha$ makes it possible to represent a smooth variation between an acceleration phase ($\omega$ and $\alpha$ have the same orientation), associated with a drag effect, and a deceleration phase ($\omega$ and $\alpha$ have opposite directions), associated with a followthrough transition. Right: A centrifugal effect coupled with a lift deformer represents the lift of the rotating skirts of the dancing toy characters.
6
+
7
+ § ABSTRACT
8
+
9
+ Cartoon effects described in animation principles are key to adding fluidity and style to animated characters. This paper extends the existing framework of Velocity Skinning to use skeletal acceleration, in addition to velocity, for cartoon-style effects on rigged characters. This Acceleration Skinning is able to produce a variety of cartoon effects from highly efficient closed-form deformers while remaining compatible with standard production pipelines for rigged characters. The paper introduces the framework and demonstrates its applications through three new deformers. Specifically, a followthrough effect is obtained from the combination of skeletal acceleration and velocity, and centrifugal stretch and centrifugal lift effects are introduced using rotational acceleration to model radial stretching and lifting. The paper also explores how applying effect-specific time filtering when combining deformations allows for more stylization and artist control over the results.
10
+
11
+ Index Terms: Computer Animation, Procedural Deformation.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ The fundamental principles of animation laid out by Walt Disney Studios remain an important driving factor for animators today [26]. These principles, created by the pioneers of traditional hand-drawn animation, have been guiding artists on how to create effects such as squash and stretch and followthrough to add vitality to animation. Many techniques have been proposed to adapt such cartoon effects to 3D animation production [14]; however, their application still remains a non-trivial task. Currently, professional animators generally create such effects either through rigged animation or using custom, manually controlled deformers [17]. While these approaches allow fine artistic control, they become tedious when adapting the work to multiple characters and effects. As an alternative, physically-based simulation can produce the effects in a fully automatic fashion, but it often requires expert tuning and costly computation that does not adapt easily to the usual production pipeline, including the iterative animation workflow.
16
+
17
+ Recently, a technique called Velocity Skinning [23] (VS) introduced a method able to automatically represent a subset of cartoon effects, including squash and stretch and drag/floppy effect, that can be applied on top of a character deformed by skinning solely using a standard rig. As its name suggests, it relies on the use of velocity at the joint level to drive the deformations, providing real-time effects that automatically adapt to the animation and hierarchy of the rigged character. The approach provides simple and direct artist control over the deformation that works with animator workflow, and does not require costly physical simulation.
18
+
19
+ In this paper, we propose a natural extension we call Acceleration Skinning (AS), which builds on this previous work. Namely, we explore and showcase additional effects for skinning animation based on the use of acceleration in the pipeline, in conjunction with the VS approach. To this end, we propose a more general framework that combines both, as described in Sec. 3. This generalization allows a wider set of deformers while preserving the original advantages of the VS approach, such as its high efficiency and art directability. Foremost, we showcase the additional use of acceleration, in combination with velocity, to model overshooting deformation, a common effect often called followthrough in animation guides (Sec. 4). Further, as acceleration provides a natural separation of coordinate frames, e.g., associated with centrifugal effects when rotational motions are involved, we also propose additional parameterized deformation effects. Specifically, we introduce centrifugal stretch, for the elongation of an object that rotates, and centrifugal lift, for the elevation of a rotating object, such as a twirling skirt (Sec. 5). Finally, we extend the general (VS/AS) domain further by exploring the use of time filters to highlight and tune effects based on artist control (Sec. 6).
20
+
21
+ In total, the contributions of this paper are to extend the capabilities of VS and skinning overall, and add a set of interesting, AS-specific effects with little overhead in implementation and cost while still supporting a convenient workflow for artist-driven cartoon effects.
22
+
23
+ § 2 RELATED WORK
24
+
25
+ Much of the production animation created today for characters depends on skinning, such as Linear Blend Skinning (LBS) [16], where the surface geometry (skin) of the character is moved in relation to an articulated skeleton. The dependence between the joints of the skeleton and the vertices of the skin is defined by scalar skinning weights, which, in combination with the skeleton and joints, constitute the rig for character skinning. LBS is a straightforward implementation of skinning where the deformation is computed as a linear combination of the joint transforms. A large body of work in computer animation research has improved upon LBS, but it remains a common tool in practice. As such, our general formulation for Acceleration Skinning is derived for the basic LBS formulation, but the acceleration-skinning deformers proposed here may also be applied to other methods, such as the Dual Quaternion approach [8], another common skinning technique.
26
+
27
+ Traditional skinning defines (static) skin poses of the character that can be derived by interpolating keyframes from an animation. However, this approach only interpolates in-between poses and does not model dynamic effects, such as drag at high speeds or followthrough due to inertia. To this end, physically-based simulation has been used to represent so-called secondary actions on top of keyframe motion. We distinguish three main approaches to computing secondary motion. First, projection-based dynamics [18] and its extensions [3] have been proposed to model muscle and soft-tissue jiggling effects [10, 19, 25] and can be tuned to exaggerate specific cartoon-like effects [2, 4]. Second, layering techniques have been proposed where distinct effects are handled as simple physical models, using jiggling implicit surfaces [22], spring-driven bones [11], or custom volume effects such as squash and stretch [1]. Third, reduced deformable models have been introduced [6, 28, 30] where precomputed shape analysis allows for model reduction and therefore sped-up computation. Work is ongoing, and new techniques for producing simulation effects are still being proposed, such as efficient simulated cartoon deformations applied on top of skinning animation [31], which defines the space of secondary effects as the orthogonal subspace of the rig. In total, these physically-based approaches are able to model rich and detailed exaggerated dynamic deformations, possibly with collision handling, but they share two main drawbacks. First, physically-based simulations are limited with respect to efficiency: even fast approaches that can handle models with thousands of vertices cannot easily scale to a large number of meshes at once (as may be required in video game applications) or to very detailed meshes of several million vertices, while geometric methods such as skinning remain orders of magnitude faster. Second, simulation requires numerical time integration computed from the initial conditions. This integration requires the animation to be baked before being visualized, which hampers the animator's efficiency by limiting the possibility of navigating and editing details at arbitrary instants along the animation timeline. As such, standard production pipelines [17, 21] often favor procedural deformers that are independent of past state when possible.
28
+
29
+ Aside from simulation, geometric and kinematics-based deformations have also been proposed in the literature. For instance, sketch- [9, 15] and example-based [5, 24] techniques aim to precisely define shape deformations for expressive effects, but require manual input for each shape. In contrast, procedural approaches allow more automatic adaptation while remaining compute efficient, as they take advantage of the skeletal structure position and kinematics. For example, Nobel et al. [20] introduce a local bending behavior parameterized by the direction and velocity between consecutive bones. Yong et al. [12] propose a squash and stretch deformer parameterized by root joint velocity, and a drag effect on end-effector joints of a skinned character. Procedural bone stretching was also proposed by Kwon and Lee [13] for local squash and stretch effects. Recently, Rohmer et al. [23] introduced Velocity Skinning (VS), which can be seen as a generic framework to define local deformers such as squash and stretch and drag based on joint velocity. Like the work proposed here, the VS approach fits well into standard animation pipelines by reusing the existing rig (i.e., original skeleton and skinning weights) for automatic parameterization of deformation. VS also provides a closed-form procedural deformation that can be computed extremely efficiently for each vertex in parallel, similar to raw skinning. Our approach follows VS, but extends it with acceleration terms. This allows additional deformation behaviors that cannot be achieved using velocity alone, such as the proposed followthrough effect, which requires deformation just as the velocity approaches zero. We also note that some cartoon effects have been generated using time filtering [27] or oscillating splines [7] applied to vertex trajectories. Our approach parallels these works in applying low-pass time filters in order to control the relative timing and magnitude of effects relative to the instantaneous joint values of the skeleton.
30
+
31
+ § 3 ACCELERATION SKINNING
32
+
33
+ In LBS, the deformed position $p$ of an arbitrary vertex at an initial position ${p}^{0}$ can be expressed as the linear blending
34
+
35
+ $$
36
+ p = \mathop{\sum }\limits_{j}{b}_{j}{p}_{j} \tag{1}
37
+ $$
38
+
39
+ where $j$ is the index of a joint and ${b}_{j} \in \left\lbrack {0,1}\right\rbrack$ are the skinning weights, which should satisfy $\mathop{\sum }\limits_{j}{b}_{j} = 1$. ${p}_{j}$ is the rigid skinning deformation of the position ${p}^{0}$ relative to the joint $j$. As shown in Rohmer et al. [23], differentiating this expression in time, and reorganizing the terms along the skeleton hierarchy, leads to the following "skinning-like" relation in velocity
40
+
41
+ $$
42
+ v = \mathop{\sum }\limits_{j}{\widetilde{b}}_{j}{v}_{/j} \tag{2}
43
+ $$
44
+
45
+ where $v$ is the net velocity of the vertex at position $p$, ${v}_{/j}$ is the contribution of the rigid transformation of the joint $j$ alone to the net velocity, and ${\widetilde{b}}_{j}$ are the kinematic weights obtained from a combination of the skinning weights
46
+
47
+ $$
48
+ {\widetilde{b}}_{j} = \mathop{\sum }\limits_{{k \in \operatorname{Desc}\left( j\right) }}{b}_{k}, \tag{3}
49
+ $$
50
+
51
+ with Desc( $j$ ) being the descendants of the joint $j$ in the skeleton hierarchy.
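+ To make the weight construction concrete, the following minimal sketch (Python/NumPy) accumulates the kinematic weights of Eq. (3) by a post-order traversal of the skeleton. The names `b`, `children`, and `root` are illustrative, and we assume here that Desc($j$) includes $j$ itself; this is a sketch of the formula, not the authors' implementation.
+
+ ```python
+ import numpy as np
+
+ def kinematic_weights(b, children, root=0):
+     """b: (num_vertices, num_joints) skinning weights.
+     children: list of child-joint index lists, one per joint.
+     Returns b_tilde with b_tilde[:, j] = sum of b[:, k] over k in Desc(j)."""
+     b_tilde = b.copy()  # Desc(j) assumed to include j itself
+     def accumulate(j):
+         for c in children[j]:
+             accumulate(c)                   # children first (post-order)
+             b_tilde[:, j] += b_tilde[:, c]  # fold child sums into the parent
+     accumulate(root)
+     return b_tilde
+ ```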
52
+
53
+ We note that this derivation is valid for any arbitrary degree of differentiation in time. Therefore, the relation in Eq. (2) can also be differentiated, which leads to an acceleration-like skinning relation
54
+
55
+ $$
56
+ a = \mathop{\sum }\limits_{j}{\widetilde{b}}_{j}{a}_{/j}. \tag{4}
57
+ $$
58
+
59
+ Here $a$ is the net acceleration of the vertex at position $p$, and ${a}_{/j}$ is the contribution coming from the joint $j$ to the net acceleration.
60
+
61
+ Both velocity ${v}_{/j}$ and acceleration ${a}_{/j}$ contributions are related to rigid motions, and we can therefore break down their expressions in relation to the linear/angular velocity and acceleration of their respective joint as
62
+
63
+ $$
64
+ {v}_{/j} = {v}_{Lj} + {\omega }_{j} \times {r}_{j} \tag{5}
65
+ $$
66
+
67
+ $$
68
+ {a}_{/j} = {a}_{Lj} + {\alpha }_{j} \times {r}_{j} + {\omega }_{j} \times \left( {\omega }_{j} \times {r}_{j} \right) .
69
+ $$
70
+
71
+ ${v}_{Lj}$ and ${a}_{Lj}$ are respectively the linear velocity and acceleration of joint $j$. ${\omega }_{j}$ is the angular velocity, and ${\alpha }_{j} = {\dot{\omega }}_{j}$ is the angular acceleration vector. ${r}_{j} = p - {\operatorname{proj}}_{j}\left( p\right)$ is the relative vector between $p$ and its orthogonal projection ${\operatorname{proj}}_{j}\left( p\right)$ onto the rotation axis passing through the joint $j$ and oriented along ${\omega }_{j}$. Note that while the velocity depends on only two components (linear and angular), the acceleration depends on three: the linear component $\left( {a}_{Lj}\right)$, the angular component $\left( {{\alpha }_{j} \times {r}_{j}}\right)$, and the centripetal component $\left( {{\omega }_{j} \times \left( {{\omega }_{j} \times {r}_{j}} \right) }\right)$. These components are illustrated in Figure 2 for velocity and acceleration.
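+ As a concrete reading of Eq. (5), the per-joint velocity and acceleration contributions can be evaluated as in the sketch below; the function name and the precomputed axis projection `proj_p` are assumptions made for this illustration.
+
+ ```python
+ import numpy as np
+
+ def joint_contributions(p, proj_p, v_L, a_L, omega, alpha):
+     """Rigid-motion contributions of one joint (Eq. 5).
+     p: vertex position; proj_p: orthogonal projection of p onto the
+     rotation axis through the joint, oriented along omega."""
+     r = p - proj_p
+     v = v_L + np.cross(omega, r)
+     a = a_L + np.cross(alpha, r) + np.cross(omega, np.cross(omega, r))
+     return v, a
+ ```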
72
+
73
+ <graphics>
74
+
75
+ Figure 2: Angular velocity (left) and acceleration (right) components associated with the circular motion of joint $j$. The acceleration can be split into an angular component (orthogonal to ${\alpha }_{j}$ and ${r}_{j}$) and a centripetal one (oriented toward the rotation axis directed by ${\omega }_{j}$).
76
+
77
+ Aligned with Rohmer et al. [23], the general idea in acceleration skinning is to define "deformers" ($\psi$s) as procedural functions representing a specific type of parameterized deformation at the individual joint level. Subsequently, the deformers are combined over the skeleton hierarchy as in Eq. (4) in order to distribute the total deformation globally over the skinned surface as
78
+
79
+ $$
80
+ d\left( p\right) = \mathop{\sum }\limits_{j}{\widetilde{b}}_{j}\psi \left( {{v}_{Lj},{a}_{Lj},{\omega }_{j},{\alpha }_{j},{r}_{j}}\right) . \tag{6}
81
+ $$
82
+
83
+ $d\left( p\right)$ is the net deformation applied to $p$ such that the final deformed position is ${p}_{\text{ final }} = p + d\left( p\right)$. Further, the deformer $\psi$ can itself be decomposed into a linear sum of individual sub-effects ${\psi }^{\text{ effect }}$ such that
84
+
85
+
86
+
87
+ $$
88
+ \psi = \mathop{\sum }\limits_{\text{ effects }}{\psi }^{\text{ effect }}. \tag{7}
89
+ $$
90
+
91
+ In this paper, we adopt the deformers from the Velocity Skinning work (e.g., drag) and propose three new deformers constructed from the components of the derived acceleration terms in Eq. (5). Namely, we propose formulations for new deformers that we call respectively ${\psi }^{ft}$ for followthrough, ${\psi }^{cs}$ for centrifugal stretch, and ${\psi }^{cl}$ for centrifugal lift. These are described in detail in the following sections. As each deformer ${\psi }^{\text{ effect }}$ is defined in a generic way for any joint $j$, we omit explicit mention of the dependence on the index $j$ in its velocity and acceleration parameters for notational clarity.
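+ The accumulation in Eqs. (6)-(7) can be sketched as below; the `joint` object with its kinematic state and `project` method is hypothetical, and each entry of `deformers` stands for one ${\psi }^{\text{ effect }}$ with the parameter list of Eq. (6).
+
+ ```python
+ import numpy as np
+
+ def deform(p, joints, b_tilde, deformers):
+     """Final position p + d(p) of Eq. (6), with psi split into
+     sub-effects as in Eq. (7). b_tilde: kinematic weights of this vertex."""
+     d = np.zeros(3)
+     for j, joint in enumerate(joints):
+         r = p - joint.project(p)  # hypothetical axis projection helper
+         for psi in deformers:
+             d += b_tilde[j] * psi(joint.v_L, joint.a_L, joint.omega, joint.alpha, r)
+     return p + d
+ ```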
92
+
93
+ § 4 FOLLOWTHROUGH DEFORMER
94
+
95
+ Followthrough is a fundamental principle of animation and has been used extensively throughout animation history as a way to convey the inertia of the animated shape (see Figure 3). Drag and followthrough are two effects that often go hand in hand. We describe next our approach to represent the followthrough effect, relying on the combination of velocity and acceleration information.
96
+
97
+ Recall that the velocity-based floppy deformer ${\psi }^{\text{ floppy }}$ was proposed in VS to model a drag effect. This deformer was split into two parts, coming either from the linear velocity or from the angular velocity. The linear velocity effect was associated with the deformer ${\psi }^{\text{ floppy, lin }}$ representing a translation, while the angular velocity effect was associated with the deformer ${\psi }^{\text{ floppy, ang }}$ representing bending.
98
+
99
+ $$
100
+ {\psi }^{\text{ floppy, lin }}\left( {v}_{L}\right) = - {K}_{\text{ lin }}{v}_{L} \tag{8}
101
+ $$
102
+
103
+ $$
104
+ {\psi }^{\text{ floppy, ang }}\left( {\omega ,r}\right) = \left( {R\left( {-{K}_{\text{ ang }}\parallel \omega \times r\parallel ,\omega }\right) - {Id}}\right) r,
105
+ $$
106
+
107
+ where $R\left( {\theta ,u}\right)$ is the rotation matrix parameterized by an angle $\theta$ and an axis $u$, and ${Id}$ is the identity matrix. ${K}_{\text{ lin }}$ and ${K}_{\text{ ang }}$ are user-defined coefficients allowing the magnitude of each deformation to be scaled.
108
+
109
+ <graphics>
110
+
111
+ Figure 3: Top: Concept of drag and followthrough explained in The Animator's Survival Kit [29]. Bottom: Acceleration Skinning animation results utilizing the proposed acceleration-based drag and followthrough deformers.
112
+
113
+ <graphics>
114
+
115
+ Figure 4: Three phases during a circular motion illustrated by their angular velocity $\omega$ and acceleration $\alpha$ curve along time. Left (red): acceleration phase where $\alpha$ and $\omega$ are aligned and oriented in the same direction. Middle: Constant velocity magnitude with near zero angular acceleration. Right (yellow): deceleration phase where $\alpha$ and $\omega$ are opposed.
116
+
117
+ A straightforward extension of this model to followthrough consists of substituting the linear and angular acceleration terms for their velocity counterparts in these floppy deformers. However, this substitution leads to undesirable artifacts. Consider a motion, either linear or angular, with three phases: an acceleration phase at its start; a constant velocity phase; and a deceleration phase at the end before the motion stops. These three phases are illustrated in Figure 4 (and Figure 1, left) in the case of an angular motion, comparing the velocity-based deformer to the acceleration-based one. In such a case, the proposed acceleration-based floppy deformer will first apply a drag-like deformation, then continue with no deformation, and end during the deceleration phase with the opposite of a drag effect, i.e., an overshoot of the deformation beyond the final pose.
118
+
119
+ <graphics>
120
+
121
+ Figure 5: Flower model with circular motion. Top: source LBS deformation. Bottom: after application of the followthrough deformer ${\psi }^{ft}$. Note the deformation acting in the fourth and fifth images.
122
+
123
+ This last phase corresponds to the expected behavior for followthrough. However, the deformer also exhibits a drag-like behavior during the first phase, which is not desired. To avoid this unexpected behavior, we modify the proposed deformer by filtering based on the relative direction between the current velocity and acceleration parameters. More precisely, we introduce a smooth indicator function $\mathcal{D}$ characterized by the condition that the velocity component is aligned with and opposite to the acceleration component. Specifically, we propose the following indicator function
124
+
125
+ $$
126
+ \mathcal{D} : \left( {a,b}\right) \mapsto \left\{ \begin{matrix} 0 & \text{ if }a \cdot b \geq 0 \\ \left| {a \cdot b}\right| /D & \text{ if }0 \geq a \cdot b \geq - D \\ 1 & \text{ otherwise } \end{matrix}\right. \tag{9}
127
+ $$
128
+
129
+ where the parameter $D \in \left\lbrack {0,1}\right\rbrack$ adapts how quickly the transition to the deceleration phase occurs. We then define the followthrough (ft) deformer, ${\psi }^{ft}$, as
130
+
131
+ $$
132
+ {\psi }^{{ft},\operatorname{lin}}\left( {{v}_{L},{a}_{L}}\right) \; = - \mathcal{D}\left( {{v}_{L},{a}_{L}}\right) {K}_{\operatorname{lin}}{a}_{L}
133
+ $$
134
+
135
+ $$
136
+ {\psi }^{{ft},{ang}}\left( {\omega ,\alpha ,r}\right) = \mathcal{D}\left( {\omega ,\alpha }\right) \left( {R\left( {-{K}_{ang}\parallel \alpha \times r\parallel ,\alpha }\right) - {Id}}\right) r.
137
+ $$
138
+
139
+ (10)
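+ A minimal sketch of the indicator of Eq. (9) and the followthrough deformer of Eq. (10) follows; `rotation` is a standard Rodrigues helper, and the default values of $K$ and $D$ are arbitrary illustrations rather than recommended settings.
+
+ ```python
+ import numpy as np
+
+ def indicator(a, b, D=0.5):
+     """Smooth indicator of Eq. (9): 0 if a.b >= 0, ramps to 1 as a.b falls to -D."""
+     dot = np.dot(a, b)
+     return 0.0 if dot >= 0.0 else min(abs(dot) / D, 1.0)
+
+ def rotation(theta, u):
+     """Rodrigues rotation matrix R(theta, u) about (normalized) axis u."""
+     u = u / (np.linalg.norm(u) + 1e-12)
+     K = np.array([[0, -u[2], u[1]], [u[2], 0, -u[0]], [-u[1], u[0], 0]])
+     return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
+
+ def psi_ft_lin(v_L, a_L, K_lin=1.0, D=0.5):
+     return -indicator(v_L, a_L, D) * K_lin * a_L
+
+ def psi_ft_ang(omega, alpha, r, K_ang=1.0, D=0.5):
+     theta = -K_ang * np.linalg.norm(np.cross(alpha, r))
+     return indicator(omega, alpha, D) * ((rotation(theta, alpha) - np.eye(3)) @ r)
+ ```
+
+ The complementary weight $\left( 1 - \mathcal{D} \right)$ gives the pure acceleration drag introduced below, e.g. `-(1 - indicator(v_L, a_L, D)) * K_lin * a_L` for the linear part.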
140
+
141
+ Results obtained using this deformer are illustrated on a circular motion in Figure 5.
142
+
143
+ In addition, we can further take advantage of the smooth indicator to separate the contribution of drag from that of followthrough. Doing so, we propose an additional pure acceleration drag (ad) deformer ${\psi }^{ad}$ as
144
+
145
+ $$
146
+ {\psi }^{{ad},{lin}}\left( {{v}_{L},{a}_{L}}\right) \; = - \left( {1 - \mathcal{D}\left( {{v}_{L},{a}_{L}}\right) }\right) {K}_{lin}{a}_{L}
147
+ $$
148
+
149
+ $$
150
+ {\psi }^{{ad},{ang}}\left( {\omega ,\alpha ,r}\right) = \left( {1 - \mathcal{D}\left( {\omega ,\alpha }\right) }\right) \left( {R\left( {-{K}_{ang}\parallel \alpha \times r\parallel ,\alpha }\right) - {Id}}\right) r
151
+ $$
152
+
153
+ (11)
154
+
155
+ which applies a drag effect during the acceleration phase of the motion (see Figure 6, top). Note that both deformers ${\psi }^{ft}$ and ${\psi }^{ad}$ can be used together on an animation and parameterized separately by adapting their respective coefficients ${K}_{\text{ lin }}$ and ${K}_{\text{ ang }}$ for additional user control (see example in Figure 6, bottom). Note that this notion of combining drag with followthrough remains consistent with traditional animation concepts [29] (Figure 3, top).
156
+
157
+ § 5 CENTRIFUGAL-BASED DEFORMERS
158
+
159
+ During the rotation of a physical system, material tends to move radially away from the axis; this centrifugal motion is, in general, countered by centripetal acceleration. We exploit this relationship to propose two new deformers. Namely, as noted in Eq. (5), the net acceleration of a vertex following a rotating motion contains a centripetal component acting radially around the rotation axis, expressed as $\omega \times \left( \omega \times r \right)$. In contrast to the angular component, this component exists even when the magnitude of the velocity remains constant, as in the case of a circular motion with constant angular velocity. We employ this acceleration term in two distinct effects: first, a generic centrifugal stretch deformer ${\psi }^{cs}$, and second, a centrifugal "lift" deformer, ${\psi }^{cl}$. By their nature, both are applicable to rotating motion only.
160
+
161
+ <graphics>
162
+
163
+ Figure 6: Top: application of the acceleration drag ${\psi }^{ad}$ . Bottom: Mix of acceleration drag ${\psi }^{ad}$ with followthrough ${\psi }^{ft}$ .
164
+
165
+ § 5.1 CENTRIFUGAL STRETCH
166
+
167
+ When observing how chefs make pizza, we see that after kneading and pressing the dough, they proceed to toss it in the air with a spin of their wrists. This spin is associated with centripetal acceleration, which in turn leads to a centrifugal effect experienced by any position placed in this rotating frame, stretching the dough further out. Inspired by this notion, we propose a centrifugal stretch deformer that reproduces such an effect with
168
+
169
+ $$
170
+ {\psi }^{cs}\left( {\omega ,r}\right) = - {K}_{cs}\,\omega \times \left( \omega \times r \right) . \tag{12}
171
+ $$
172
+
173
+ Note that, as the centrifugal effect acts outward from the axis of rotation as visualized in Figure 2, we need to negate the centripetal acceleration, which is directed inward. ${\psi }^{cs}$ creates an effect that translates vertices outward, away from the axis of rotation. As the magnitude of the acceleration increases linearly with distance, vertices farther from the axis of rotation experience larger deformation than those closer to it. A simple example appears in Figure 7.
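+ Eq. (12) reads directly as code (the coefficient default is illustrative only):
+
+ ```python
+ import numpy as np
+
+ def psi_cs(omega, r, K_cs=1.0):
+     """Centrifugal stretch (Eq. 12): the negated centripetal term pushes
+     vertices radially outward, growing linearly with distance from the axis."""
+     return -K_cs * np.cross(omega, np.cross(omega, r))
+ ```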
174
+
175
+ <graphics>
176
+
177
+ Figure 7: Centrifugal stretch effect applied onto an animation of a flower twisting its head around its central point.
178
+
179
+ § 5.2 CENTRIFUGAL LIFT
180
+
181
+ Consider the motion of a twirling cloth, such as a dancer's skirt. As the dancer twirls, their skirt gently lifts. The same centrifugal force that caused the pizza dough to expand also applies to this shape. However, in the case of cloth, the garment surface is nearly inextensible, and internal stress will prevent elongation in its local tangent plane. As a result, the rotating dress may first unwrap from its wrinkled configuration into its extended shape, before its only remaining degree of freedom is to "lift" in the relative normal direction.
182
+
183
+ While a general solution to account for centrifugal lift is difficult without a dynamic model, we can offer a simple solution that approximates this centrifugal lifting effect on cone-like shapes. Cones rotating about their center have consistent normals that allow us to filter the previously defined deformers to act only along the normal direction to create the effect. Our basic formulation of pure centrifugal lift ${\psi }^{cl}$ is thus
184
+
185
+ $$
186
+ {\psi }^{cl}\left( {\omega ,r}\right) = {K}_{cl}\left( {{\psi }^{cs}\left( {\omega ,r}\right) \cdot n}\right) n, \tag{13}
187
+ $$
188
+
189
+ where $n$ is the local normal of an ideal cone at the current vertex position $p$, and ${K}_{cl}$ is a user-defined parameter enabling tuned exaggeration of the lift.
190
+
191
+ <graphics>
192
+
193
+ Figure 8: Centrifugal lift ${\psi }^{cl}$ is applied on meshes approximated by a conical surface. The deformation is constrained to act only along the normal associated with the conical approximation.
194
+
195
+ Further, any model that can be reasonably approximated by a cone can use this deformer, taking the cone as a proxy geometry and mapping the projected geometry onto it. For example, the stylized dress in Figure 8 maps to the nearest point on the conical approximation to yield a set of normals that are used for the skirt deformation with ${\psi }^{cl}$. This effect can further be seen as a more generic deformation filter that can be applied to any other deformer $\psi$, constraining it to act in the normal direction, and can be defined as the functional
196
+
197
+ $$
198
+ {\psi }_{\text{ filter }}^{cl} : \psi \mapsto {K}_{cl}\left( {\psi \cdot n}\right) n, \tag{14}
199
+ $$
200
+
201
+ where $\psi$ can be an arbitrary combination of the previously defined deformers.
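+ The filter of Eq. (14) is a one-line projection onto the proxy normal; in this sketch `cone_normal_at` is a hypothetical helper returning the normal of the conical approximation at $p$.
+
+ ```python
+ import numpy as np
+
+ def psi_cl_filter(psi_value, n, K_cl=1.0):
+     """Eq. (14): keep only the component of a deformer output along n."""
+     n = n / (np.linalg.norm(n) + 1e-12)
+     return K_cl * np.dot(psi_value, n) * n
+
+ # Pure centrifugal lift (Eq. 13) is this filter applied to psi_cs, e.g.:
+ # lift = psi_cl_filter(psi_cs(omega, r), cone_normal_at(p))  # hypothetical helper
+ ```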
202
+
203
+ § 6 TIME FILTERING
204
+
205
+ As a final extension to the AS approach, which can also be applied in the VS framework but did not appear in that previous work, we explore the use of time filtering to offer finer-grained and more useful control over the mixing of the various deformers. To this end, we point out that time filtering is a useful tool in general. In all our examples, we receive input motion provided by smooth interpolated curves from existing keyposes. More generally, we may also allow non-smooth inputs, for example coming from direct interaction where the user articulates the skeleton using a mouse. In the first case, velocity and acceleration can be described analytically from the keyframes, while in the latter case we consider numerical differentiation from the joint frames, smoothed by a low-pass filter to limit spurious high-frequency noise. To this end, we consider in our case a simple first-order auto-regressive filter
206
+
207
+ $$
208
+ y\left( t\right) = {\gamma x}\left( t\right) + \left( {1 - \gamma }\right) y\left( {t - {\Delta t}}\right) , \tag{15}
209
+ $$
210
+
211
+ where $x$ represents the input value (linear/angular velocity/acceleration), $y$ its filtered value, ${\Delta t}$ the frame duration, and $\gamma \in \left\lbrack {0,1}\right\rbrack$ a user-defined parameter that controls the cutoff frequency. Qualitatively, a small value of $\gamma$ leads to a very smooth but delayed signal, while values near one yield a snappier response that retains higher frequencies.
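+ One step of Eq. (15) is a single blend; running one instance per effect, each with its own $\gamma$, gives the per-effect timing control discussed below.
+
+ ```python
+ def filter_step(x, y_prev, gamma):
+     """First-order auto-regressive low-pass filter of Eq. (15).
+     Small gamma: smooth but delayed; gamma near 1: snappy, high-frequency."""
+     return gamma * x + (1.0 - gamma) * y_prev
+ ```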
212
+
213
+ <graphics>
214
+
215
+ Figure 9: Comparison of untuned (left) and tuned (right) time filters. The untuned filter has frames distributed nearly equally throughout the clip. The tuned filter has more frames, with more easing at the start and end of the clip. It also has a smoother arc motion.
216
+
217
+ <graphics>
218
+
219
+ Figure 10: Comparison between the use of a small time filter (top) and a large time filter (bottom) applied on the acceleration-based drag and followthrough deformers. Frames (2), (3), and (5) emphasize the differences between the two filters, showing that small filters increase deformer reactivity.
220
+
221
+ Assuming different effects can take as input a signal filtered with different $\gamma$ values, this allows applying a controlled delay on the time at which a specific effect takes place relative to another. Empirically, we find a long and smooth centrifugal stretch deformation useful, while followthrough works best as a faster and more transient effect. This authoring of the timing of effects is supported trivially through the described time filtering, by considering a small $\gamma$-value for $\omega$ used in ${\psi }^{cs}\left( {\omega ,r}\right)$, and a larger $\gamma$-value to compute $\alpha$ used in ${\psi }^{ft}$. As shown in Figure 9, adjusting these values allows artistic control over the resulting animation timing of each individual effect, while preserving a simple combination of the effects as a linear summation, $\psi = {\psi }^{ft} + {\psi }^{cs}$ in this case.
222
+
223
+ § 7 RESULTS
224
+
225
+ We illustrate and compare AS results obtained on more complex animations, which can also be seen in the associated video. All the illustrations are computed in real time (approx. 60 fps) on a common laptop using a non-optimized CPU implementation. The cow and flower examples have a rig of about 10 joints and up to a thousand triangles. Although we do not propose a GPU implementation in this paper, the computational cost of AS is similar to that of VS and shares the same properties: it is fully compatible with highly efficient computation in a single-pass vertex shader, which has been shown to scale to millions of vertices [23]. However, the non-sparsity of the weights $\widetilde{b}$ limits the extreme optimizations that can be applied to raw LBS.
226
+
227
+ <graphics>
228
+
229
+ Figure 11: Flower animation with LBS (top), acceleration skinning with acceleration drag and followthrough deformation (bottom). The highlighted frames showcase the flower face bending due to acceleration skinning deformation.
230
+
231
+ Figure 11 shows a comparison on an artist-made motion of a flower pot "jumping" from left to right. The top row shows the raw animation set by the artist using LBS, while the bottom row illustrates the result obtained after adding the AS deformers. The bending of the flower obtained during the acceleration (drag effect) and deceleration (followthrough effect) phases is highlighted with red rectangles. Figure 12 shows a comparison between VS and AS on an animated cow, illustrating the conceptual example shown in Figure 3. The main difference between AS and VS can be noticed at the end of the motion, during the deceleration phase, where the followthrough deformer is acting in the AS animation.
232
+
233
+ While both VS and AS propose a notion of "stretch" deformer, they differ sharply in their effect and are applicable in separate scenarios (see Figure 13). VS stretch relies on a scaling transformation to produce a squash and stretch effect. It requires an axis built from an ad-hoc frame using the skeletal structure and a notion of relative centroid associated with a limb and its descendants. VS stretch is relevant for cartoon effects that are elongated along the line of action, and allows stretching as well as squeezing in that direction. On the contrary, AS centrifugal stretch relies only on local translation, which does not require the precomputation of such a centroid, and its direction is fully defined from the joint transformation. However, the lack of a centroid does not allow directly representing squeezing centered around the moving limb.
234
+
235
+ § 8 CONCLUSION
236
+
237
+ We introduce Acceleration Skinning as a real-time skinning technique that employs the key components of articulated skeletal acceleration for cartoon effects. The general idea is to utilize specific acceleration terms to expand the gamut of deformation effects available through the skinning pipeline. The approach is meant to be employed in conjunction with the recent work on Velocity Skinning [23]. In this paper, we showcase its efficacy by creating three new deformation effects, including automatic followthrough, a common design tool called out by classic 2D animators. Beyond these, AS is a general approach with an expandable collection of effects.
238
+
239
+ This method is not intended as a replacement for physical simulation of deformation, but rather as an artist-driven tool to create stylized secondary motion. In support of this, it is easy to control and works "out of the box" on standard skeletal rigs. As the effects are tied to the skinning rig and not the animation, AS (and VS) are useful for real-time animation effects, even for unseen motion inputs. AS extends the capabilities of LBS and VS to empower animators through a simple tool that bootstraps existing pipelines. Further, in this paper, we explore the power of effect-specific time filtering and its relevance to enhancing artist control. However, unlike physical modeling, the technique is not able to handle collisions, interaction forces, and other nuanced dynamic effects that are generated by direct simulation.
240
+
241
+ While we describe effects that hearken back to traditional animation, the technique is not limited to them. For example, one possible future extension may be to model oscillatory motion within a skinning system: hypothetically, expanding the system to include higher-order derivatives could model more oscillations, so a possible direction for future work lies in employing this technique with higher derivatives. Because the work can be applied in real time, we are excited to investigate interactive applications, for example in games. As an AS rig works independently of the animation, it may be applied as an add-on for interactive character animation, such as in games or in a virtual reality (VR) setting.
242
+
243
+ In summary, we introduce the AS method, a natural extension to VS, pushing further into the potential for deformable effects layered over LBS, and providing a wider collection of stylized deformations for artists to employ.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BrBlpeYNTMc/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,285 @@
1
+ # Fast Vortex Particle Method for Fluid-Character Interaction
2
+
3
+ Category: Research
4
+
5
+ ## Abstract
6
+
7
+ High fidelity interactions between game characters and gaseous effects like smoke, fire and explosions are often neglected in real-time applications due to the high computational cost of detailed fluid simulation. We present a purely vortex particle based fluid model for games which is capable of resolving the collision between fluids and complex objects such as moving game characters in real time. Contrary to most other vorticity based methods, we use a simple inversion-free approach to obtain the collision velocity field on surfaces while at the same time avoiding the expensive pressure projection step associated with pressure based fluid solvers. This entails that our method fits within the restricted computational budget of most games. We showcase the efficacy of our method with simulations involving over 1M GPU particles and simulation times below 1 ms.
8
+
9
+ Index Terms: Game physics-Simulation-Fluids-Real-time graphics; Animation-Visualization-Game characters-Vortex method
10
+
11
+ ## 1 INTRODUCTION
12
+
13
+ Real-time simulation of fluids is a valuable addition to interactive applications such as games or virtual reality. While smoke, fire and explosions are key components in immersive gaming experiences, physically correct simulation of fluids is usually not feasible at high resolution. Smoke and explosions are sometimes pre-simulated in high resolution and played back in real-time, but this approach precludes any interactions with characters, which depend on the input from the user and can only be known at run-time. To represent interactions with fluids in a believable way, a detailed representation of the fluid velocity field close to the character surface is needed to match the high resolution geometric models used for characters in modern games. For game applications, vortex particle methods provide an interesting alternative to the more common velocity-pressure based representation of fluid state, where the iterative pressure projection step constitutes a computational bottleneck for real-time applications. By evolving a vorticity field discretized by particles, no pressure projection is needed and the divergence free velocity field can be derived by using the Biot-Savart law. In addition, vortex methods generate unbounded continuous solutions. This allows for adaptive sampling of the velocity fields depending on the level of detail required and the available computational budget. While vortex methods allow for detailed real-time simulation in the absence of kinematic boundary conditions, it is not possible to define the boundary conditions in terms of vorticity. Boundary element methods are common approaches for enforcing kinematic boundary conditions in vorticity based fluid methods, but they are computationally expensive, and the methods used in most vortex codes like [23] and [5] to obtain a vortex sheet strength across collision surfaces are not very suitable for real-time applications as they require surface integration and inversion of a coupling matrix. Instead, we use a collocation method inspired by the method of [4]. This cheap collision method allows for believable transfer of complex character motion into the surrounding fluid. To further enhance realism, we account for vortex shedding from the thin viscous boundary layer at the collision interface by initializing new vortex particles where the tangential velocity of the fluid tends to zero due to shear stresses [2]. The continuous velocity fields generated by the vortex particles allow many different kinds of fluid representations. Here we choose to visualize the effect with large numbers of passive tracer particles advected in the ambient velocity field. To make this feasible, we first evaluate the velocity field on an intermediate scratchpad grid and then broadcast the field to the tracer particles using linear interpolation. Some examples of our method are shown in Figure 1, where a moving game character is being hit by a plume. Figure 2 shows a similar effect where turbulence is generated solely at the collision interface, leading to intricate swirly fluid motion downstream.
14
+
15
+ ![01963e63-71a7-792f-bc87-eb5b005560e6_0_926_355_726_407_0.jpg](images/01963e63-71a7-792f-bc87-eb5b005560e6_0_926_355_726_407_0.jpg)
16
+
17
+ Figure 1: Our vortex method simulates the interaction of a moving character and fluids in real-time. A typical game effect is shown: a moving game character being hit by a plume of fire.
18
+
19
+ The time allocated to the physics update in most AAA games is limited; a fluid solver that is merely real-time capable is thus not necessarily efficient enough for a game, and should instead be able to create appealing fluid simulations in a fraction of a millisecond. Our method allows for sub-millisecond simulation and detailed fluid behavior. The main contributions in this work are:
20
+
21
+ - A purely vortex based particle method for games. Our method simulates intricate turbulence generation and guarantees divergence free flows.
22
+
23
+ - The method creates continuous velocity fields and fluids can be represented as particles, in volumes or in textures without affecting the underlying dynamics.
24
+
25
+ - We adopt a simple approach for handling complex boundaries and infer an appropriate image vorticity field across surfaces without the need for matrix inversion steps.
26
+
27
+ - A computationally lightweight density representation based on tracer particles which are advected passively in the velocity field and otherwise decoupled from the physics update step.
28
+
29
+ ## 2 RELATED WORK
30
+
31
+ Fluid simulation in graphics is a vast field that has been developed over the years to facilitate increased fidelity and efficiency. Textbooks like [8] or [28] give a good introduction to many of these contributions. In this work, we primarily focus on the subset of methods specifically intended for real-time applications. Simulation techniques using regular grids as the underlying discretization of space are widely based on the work of [27]. These methods lend themselves particularly well to GPU architectures. Particle based methods like those developed by [17] and [30] have also been used to simulate fluids in real time, while simulations on unstructured grids like [19] and surface-only methods like those used by [12] are particularly useful for accurate handling of interfaces and collisions. Sparse data structures like the tall-grid cells used by [10], octrees employed by [1] and VDB volumes [20] have accelerated fluid simulation even further by limiting the high-resolution simulation domain to areas where it is needed, such as at a free surface or near obstacles. The above mentioned methods facilitate real-time fluid simulation, but current methods must be accelerated even further to allow for practical use in large game worlds where fluid simulation can only occupy a fraction of the available computational budget.
32
+
33
+ ![01963e63-71a7-792f-bc87-eb5b005560e6_1_163_151_701_522_0.jpg](images/01963e63-71a7-792f-bc87-eb5b005560e6_1_163_151_701_522_0.jpg)
34
+
35
+ Figure 2: A stream of tracer particles is emitted from the top. The stream hits an animating character. Vortex particles are initialized in the collision boundary layer. The amount of vortex shedding is increased from left to right to achieve different looks.
36
+
37
+ ### 2.1 Vortex sheets, particles and filaments
38
+
39
+ Vortex methods commonly use particles, filaments and sheets to discretize the vorticity field from which a divergence free velocity field can be derived. While the state of a fluid can be evolved solely on the basis of a vorticity field, vortex particles have also been used to enrich low-resolution fluid simulations as shown by [26]. [22] and [24] used vortex particles for procedural turbulence advection in an underlying velocity based fluid solver. Vortex particles provide a cheap way of introducing details in a fluid simulation, and they are good candidates for game applications where particle systems are used frequently. However, the kinematic boundary conditions required on collision surfaces are not trivial to enforce. Pure vortex particle methods like those described by [23] require inversion of a pseudo-inverse collision matrix for each time step, making this approach unsuitable for real-time applications. Evaluation of the velocity field requires querying all vortex particles in the simulation, whereas our proposed method simplifies this by using a clamped vortex kernel. Hybrid grid and particle methods like [32] have been proposed which are efficient for fast computation of long-range interactions on an underlying grid, while using particle-particle interactions to model short-range interactions. Methods such as the fast multipole method by [14] can accelerate the expensive summation from $O\left( {NM}\right)$ to $O\left( {N + M}\right)$ for $N$ vortex particles and $M$ locations. The FMM adds a large computational overhead and only outperforms direct summation for large particle counts where $N > {100000}$ [32], which is far above the vortex particle counts used in this work.
40
+
41
+ Vortex sheets are boundary element methods used to evolve the vorticity field on interfaces, such as the interface between collision objects and fluids, and the interfaces between fluids. [9] used a vortex sheet method to model smoke plumes in linear time using the fast multipole method. [25] simulated smoke plumes by combining a vortex sheet method with an Eulerian grid solver to handle collisions. Closely related methods like [12] were applied for surface-only liquids. Vortex filament methods use closed loops of vorticity to represent the state of fluids. This approach is suitable for gaseous plumes. [31], [29] and [6] demonstrated how these methods provide a very cheap way of creating the intricate dynamics of smoke, while [29] used vortex filaments and filament shedding around solid objects to handle collisions with objects. Vortex shedding entails seeding of vorticity in the collision boundary layer, and we adopt this approach to enrich fluid collisions with characters. Vortex filaments and sheets represent a vorticity field that is divergence free by default, and vortex stretching is handled trivially. Unfortunately, both methods require re-discretization as simulations evolve in order to ensure fidelity. This also entails that the number of vortex elements may grow beyond what is feasible for real-time scenarios.
42
+
43
+ ### 2.2 Fast particle based fluid approaches
44
+
45
+ Particle based physics solvers are used abundantly in games. [18] presented a unified particle based framework for real-time physics based on position based dynamics (PBD). PBD is also applicable to fluid simulation as shown by [17]. While PBD is a generalizable and robust simulation method, it can be expensive for detailed simulations where large quantities of particles are required. In addition, sufficient iterations are required to ensure incompressible velocity fields and stable simulations. Vortex methods are difficult to extend beyond their applications for gaseous fluid phenomena, whereas PBD trivially handles free surfaces. On the other hand, a vortex particle code only needs to advect a small number of vortex particles to generate complex turbulence patterns. The same level of detail is generally not feasible for PBD within the current computational budget of games.
46
+
47
+ ### 2.3 Fast sparse grid based approaches
48
+
49
+ The regular data structures used in grid based fluid solvers lend themselves well to GPU implementations. Fluid solvers based on the original work by [27], like [15], discretize collision objects on the simulation grid, which entails that increased collision fidelity requires a global refinement of the simulation domain. [7] proposed a variational framework to address this issue, while the advent of sparse data structures like tall cells by [10], octrees by [3], and recently GPU-optimized VDB volumes by [21] allows for local discretizations in the vicinity of free surfaces and collision objects. While in particular the NanoVDBs presented by [21] can simulate fluids in unprecedented detail, they still require a volumetric representation of collision geometry. For the deforming character surface, this requires access to the character mesh at run-time and an update of the VDB data structure. Our method uses a surface-only approach and only needs to update the positions and orientations of the discrete source points scattered across the mesh.
50
+
51
+ ## 3 METHOD
52
+
53
+ Incompressible fluids are governed by the mass conservation relation $\nabla \cdot \overrightarrow{v} = 0$ and the Navier-Stokes equation for conservation of momentum:
54
+
55
+ $$
56
+ \frac{\partial \overrightarrow{v}\left( {\overrightarrow{x}, t}\right) }{\partial t} + \left( {\overrightarrow{v} \cdot \nabla }\right) \overrightarrow{v} = \frac{1}{\rho }\left( {\overrightarrow{f} + \mu {\nabla }^{2}\overrightarrow{v} - \nabla p}\right) \tag{1}
57
+ $$
58
+
59
+ ![01963e63-71a7-792f-bc87-eb5b005560e6_2_155_148_733_495_0.jpg](images/01963e63-71a7-792f-bc87-eb5b005560e6_2_155_148_733_495_0.jpg)
60
+
61
+ Figure 3: By measuring the free-space velocity field at collocation points on geometry surfaces, we create an accurate divergence free collision velocity. Here, a stream of particles moves over a simple sphere. The accuracy of the collision field is dependent on the collocation point density. To control the look of the smoke, we distribute vortex particles on the surface and shed them into the ambient flow.
62
+
63
+ where $p$ is the pressure, $\rho$ is the density, which is assumed to be constant, $\overrightarrow{f}$ are external forces like gravity and baroclinicity, and $\mu$ is the dynamic viscosity. It is possible to define an alternative version of the momentum equation based on vorticity, the vector field describing the rotation of the fluid:
64
+
65
+ $$
66
+ \overrightarrow{\omega } = \nabla \times \overrightarrow{v} \tag{2}
67
+ $$
68
+
69
+ Taking the curl of the momentum equation 1 yields a new equation for the time evolution of vorticity:
70
+
71
+ $$
72
+ \frac{\partial \overrightarrow{\omega }\left( {\overrightarrow{x}, t}\right) }{\partial t} + \left( {\overrightarrow{v} \cdot \nabla }\right) \overrightarrow{\omega } - \left( {\overrightarrow{\omega } \cdot \nabla }\right) \overrightarrow{v} = \frac{1}{\rho }\left( {\nabla \times \overrightarrow{f} + \mu {\nabla }^{2}\overrightarrow{\omega }}\right) . \tag{3}
73
+ $$
74
+
75
+ The vorticity field can be evolved in time according to equation 3 without the need for any pressure projection steps. A fractional step method is commonly employed for time integration. In the fractional step method, the vorticity field is stepped in time assuming no kinematic boundary conditions. The initial free-space solution is then corrected for collisions by submerging collision objects into the initial unperturbed field. The relative velocity between the objects and the ambient fluid allows us to calculate a superimposed collision field from objects in the scene such that the normal component of velocity is minimized at the surface. Vortex particle methods are Lagrangian in nature and handle advection of vorticity $\left( \left( \overrightarrow{v} \cdot \nabla \right) \overrightarrow{\omega } \right)$ trivially by storing vorticity on entities that move in the ambient flow. The third term on the left $\left( \left( \overrightarrow{\omega } \cdot \nabla \right) \overrightarrow{v} \right)$ accounts for vortex stretching and is a feature of 3D fluids only, since the vorticity vector is always perpendicular to the velocity field for 2D fluids. Vortex stretching transfers large scale rotation into smaller vortices. In vortex particle methods, vortex stretching requires special attention (an estimate of $\nabla \overrightarrow{v}$) and diffusion is required to ensure stability. For game applications, stretch and diffusion could be neglected without very noticeable impacts on realism, but we include these terms for completeness. Vortex stretch degrades performance since the method we employ requires two evaluations of the velocity field for each particle, and it requires vortex diffusion to ensure that the vorticity field stays approximately divergence free.
76
+
77
+ ### 3.1 Fundamental solutions
78
+
79
+ A well-behaved divergence free velocity field $\overrightarrow{v}\left( {\overrightarrow{x}, t}\right)$ can be represented through a vector potential $\overrightarrow{A}$. The velocity field is obtained by taking the curl of $\overrightarrow{A}$, which ensures that the divergence of the velocity
80
+
81
+ field is zero,
82
+
83
+ $$
84
+ \overrightarrow{v}\left( {\overrightarrow{x}, t}\right) = \nabla \times \overrightarrow{A}\left( {\overrightarrow{x}, t}\right) . \tag{4}
85
+ $$
86
+
87
+ $\overrightarrow{A}$ is degenerate since the addition of any curl-free vector field yields the same velocity. To tie down the vector potential, it can be assumed that $\nabla \cdot \overrightarrow{A}\left( {\overrightarrow{x}, t}\right) = 0$. In that case the vorticity field $\overrightarrow{\omega }\left( {\overrightarrow{x}, t}\right)$ and the vector potential are related through a vector Laplacian,
88
+
89
+ $$
90
+ \overrightarrow{\omega }\left( {\overrightarrow{x}, t}\right) = \nabla \times \nabla \times \overrightarrow{A}\left( {\overrightarrow{x}, t}\right) = - {\nabla }^{2}\overrightarrow{A}\left( {\overrightarrow{x}, t}\right) . \tag{5}
91
+ $$
92
+
93
+ In the absence of boundaries and under the assumption that the velocity field goes to zero at infinity, the solution can be composed of a linear combination of fundamental free-space solutions or Green's functions. The vector potential is obtained by integrating the fundamental solutions over the domain,
94
+
95
+ $$
96
+ \overrightarrow{A}\left( {\overrightarrow{x}, t}\right) = \frac{1}{4\pi }{\int }_{V}\frac{\overrightarrow{\omega }\left( {{\overrightarrow{x}}^{\prime }, t}\right) }{\begin{Vmatrix}\overrightarrow{x} - {\overrightarrow{x}}^{\prime }\end{Vmatrix}}d{\overrightarrow{x}}^{\prime }. \tag{6}
97
+ $$
98
+
99
+ Taking the curl of equation 6 leads to the Biot-Savart formula for the velocity field,
100
+
101
+ $$
102
+ \overrightarrow{v}\left( {\overrightarrow{x}, t}\right) = \frac{1}{4\pi }{\int }_{V}\overrightarrow{\omega }\left( {{\overrightarrow{x}}^{\prime }, t}\right) \times \frac{\overrightarrow{x} - {\overrightarrow{x}}^{\prime }}{{\begin{Vmatrix}\overrightarrow{x} - {\overrightarrow{x}}^{\prime }\end{Vmatrix}}^{3}}d{\overrightarrow{x}}^{\prime }. \tag{7}
103
+ $$
104
+
105
+ The vortex particles act as quadrature points in the discrete version of equation 7. To avoid singularities when $\overrightarrow{x} = {\overrightarrow{x}}^{\prime }$, we use a mollified solution similar to [11]. This is analogous to the inclusion of a smoothing radius $h$ in the denominator, which effectively limits the minimum swirl size,
106
+
107
+ $$
108
+ \overrightarrow{v}\left( {\overrightarrow{x}, t}\right) = \frac{1}{4\pi }\mathop{\sum }\limits_{i}{V}_{i}{\omega }_{i} \times \frac{\overrightarrow{x} - {\overrightarrow{x}}_{i}^{\prime }}{{\left( {h}^{2} + {\begin{Vmatrix}\overrightarrow{x} - {\overrightarrow{x}}_{i}^{\prime }\end{Vmatrix}}^{2}\right) }^{\frac{3}{2}}}. \tag{8}
109
+ $$
110
+
111
+ The vortex blob volume is given by ${V}_{i}$, ${\omega }_{i}$ is the blob vortex density, and ${\overrightarrow{w}}_{i} = {V}_{i}{\omega }_{i}$ is the vorticity stored on each vortex particle. Equation 8 is used to obtain the velocity field anywhere in space. To avoid iterating over every vortex particle in the simulation, we use a nearest neighbour search based on [16] to only query the nearest particles, which is sufficient for game applications, although physical accuracy would require the contribution from all particles in the simulation, either through direct summation or multipole methods. Nearest neighbour searches between vortex particles are cheap since the number of vortex particles is typically only a few thousand, compared to other particle based methods where searches between millions of particles may be required. We optimize the simulation further by excluding vortex particles from the simulation when their vorticity falls below a certain threshold.
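+ As an illustration, the mollified sum of Eq. (8) can be written directly in NumPy; for clarity, this sketch sums over all particles and omits the nearest-neighbour clamping described above.
+
+ ```python
+ import numpy as np
+
+ def biot_savart_velocity(x, positions, w, h):
+     """Mollified Biot-Savart sum of Eq. (8).
+     positions: (N, 3) vortex particle positions; w: (N, 3) vorticities V_i * omega_i."""
+     d = x - positions                                  # (N, 3) offsets
+     denom = (h**2 + np.sum(d * d, axis=1)) ** 1.5      # mollified distance term
+     return np.sum(np.cross(w, d) / denom[:, None], axis=0) / (4.0 * np.pi)
+ ```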
112
+
113
+ The time dependent evolution of the vorticity is driven by vortex advection, stretching and diffusion. It is possible to get believable fluid-like motion by only using vorticity advection, but vortex stretching can easily be included with the vortex segment approach introduced by [32]. Vortex particles do not have a spatial extent like filaments or sheets, where stretch is handled trivially. To circumvent this, each vortex particle is converted into a vortex segment and the velocity is evaluated at each end of the segment. The stretching of the segment is then converted back into a vortex particle. The procedure is illustrated in Figure 5. The velocity field is measured at each end of the segment and the vorticity updated based on the stretching of the segment,
114
+
115
+ $$
116
+ \overrightarrow{w} \leftarrow \overrightarrow{w} + k\frac{\Delta t}{h}\left( {\overrightarrow{v}\left( {\overrightarrow{q}}_{1}\right) - \overrightarrow{v}\left( {\overrightarrow{q}}_{0}\right) }\right) \tag{9}
117
+ $$
118
+
119
+ where ${\overrightarrow{q}}_{1}$ and ${\overrightarrow{q}}_{0}$ are the positions of the vortex segment ends, ${\overrightarrow{q}}_{0} = \overrightarrow{x} + \frac{h}{2}\frac{\overrightarrow{w}}{\parallel \overrightarrow{w}\parallel }$, ${\overrightarrow{q}}_{1} = \overrightarrow{x} - \frac{h}{2}\frac{\overrightarrow{w}}{\parallel \overrightarrow{w}\parallel }$, and $k$ is the circulation of the segment, defined by
120
+
121
+ $$
122
+ k = \frac{1}{h}\parallel \overrightarrow{w}\parallel . \tag{10}
123
+ $$
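+ A sketch of the segment-based stretching update of Eqs. (9)-(10); `velocity_at` stands for an evaluation of Eq. (8) at a point, and the degenerate zero-vorticity case is simply skipped.
+
+ ```python
+ import numpy as np
+
+ def stretch_update(x, w, velocity_at, h, dt):
+     """Convert particle to segment, measure stretch, convert back (Eqs. 9-10)."""
+     w_norm = np.linalg.norm(w)
+     if w_norm < 1e-12:
+         return w
+     u = w / w_norm                                   # segment direction
+     q0, q1 = x + 0.5 * h * u, x - 0.5 * h * u        # segment end points
+     k = w_norm / h                                   # circulation, Eq. (10)
+     return w + k * (dt / h) * (velocity_at(q1) - velocity_at(q0))
+ ```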
124
+
125
+ Vortex stretching converts large swirls into smaller swirls, and this can eventually lead to instabilities if the process is allowed to proceed unimpeded. Vortex diffusion is required to ensure stability when the vorticity field is undergoing stretch. The particle strength exchange method gradually homogenizes the vorticity field and ensures that it remains nearly divergence free. Therefore, the diffusion ${d\omega }/{dt} = \nu {\nabla }^{2}\omega$ is approximated with
126
+
127
+ $$
128
+ \omega \leftarrow \omega + {\Delta t}\frac{2\nu }{\sigma }\mathop{\sum }\limits_{q}\left( {{V}_{q}{\omega }_{q} - {V\omega }}\right) \zeta \left( {\overrightarrow{x},{\overrightarrow{x}}_{q}^{\prime }}\right) \tag{11}
129
+ $$
130
+
131
+ where $\zeta$ is a normalized Gaussian,
132
+
133
+ $$
134
+ \zeta \left( {\overrightarrow{x},{\overrightarrow{x}}^{\prime }}\right) = \frac{1}{{\sigma }^{3}{\left( 2\pi \right) }^{3/2}}{e}^{-\frac{{\begin{Vmatrix}\overrightarrow{x} - {\overrightarrow{x}}^{\prime }\end{Vmatrix}}^{2}}{2{\sigma }^{2}}}, \tag{12}
135
+ $$
136
+
137
+ and where the viscosity $\nu$ and the smoothing radius $\sigma$ are exposed parameters.
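+ The particle strength exchange step of Eqs. (11)-(12) could be sketched as below; the quadratic loop is acceptable here since only a few thousand vortex particles are used.
+
+ ```python
+ import numpy as np
+
+ def pse_diffusion(omega, positions, V, nu, sigma, dt):
+     """Particle strength exchange: omega (N, 3) vortex densities, V (N,) volumes."""
+     norm = 1.0 / (sigma**3 * (2.0 * np.pi) ** 1.5)
+     omega_new = omega.copy()
+     for i in range(positions.shape[0]):
+         d2 = np.sum((positions[i] - positions) ** 2, axis=1)
+         zeta = norm * np.exp(-d2 / (2.0 * sigma**2))          # Gaussian, Eq. (12)
+         diff = V[:, None] * omega - V[i] * omega[i]           # exchange terms
+         omega_new[i] += dt * (2.0 * nu / sigma) * np.sum(zeta[:, None] * diff, axis=0)
+     return omega_new
+ ```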
138
+
139
+ ### 3.2 Boundary conditions
140
+
141
+ To enforce boundary conditions, source points are attached to the collision surface as illustrated in Figure 4. The ambient velocity field is measured at the source points and we use them to generate an image velocity field which minimizes the normal component of the flow. Figure 4 shows the configuration of 512 source points on the surface of a character. The ambient velocity field consists of ${\overrightarrow{u}}^{s}$, the velocity of the surface itself, ${\overrightarrow{u}}^{\infty }$, a superimposed harmonic velocity (such as an initial flow velocity), and ${\overrightarrow{u}}^{p}$, the turbulent velocity generated by the vortex particles in the ambient fluid.
142
+
143
+ We could view the points on the collision surface as vortex particles with a collision vorticity, which is indeed a common procedure in vortex particle methods. Finding an optimal vortex sheet strength can then be posed as a regression problem,
144
+
145
+ $$
146
+ {\gamma }_{j}^{ * } = \arg \mathop{\min }\limits_{{\gamma }_{j}}\mathop{\sum }\limits_{i}{\left( \left( {\overrightarrow{u}}_{i}^{s} - {\overrightarrow{u}}_{i}^{\infty } - {\overrightarrow{u}}_{i}^{p} + \overrightarrow{v}\left( {\overrightarrow{x}}_{i},{\gamma }_{j}\right) \right) \cdot {\overrightarrow{n}}_{i}\right) }^{2}. \tag{13}
147
+ $$
148
+
149
+ Here ${\gamma }^{ * }$ is the optimal vortex sheet strength, ${\overrightarrow{u}}_{i}^{s},{\overrightarrow{u}}_{i}^{\infty }$ and ${\overrightarrow{u}}_{i}^{p}$ are the ambient velocity components measured at collocation points at the surface position ${x}_{i}$ and $\overrightarrow{v}\left( {{\overrightarrow{x}}_{i},{\gamma }_{j}}\right)$ is the image velocity field,
150
+
151
+ $$
152
+ \overrightarrow{v}\left( {\overrightarrow{x},{\gamma }_{j}}\right) = \frac{1}{4\pi }\mathop{\sum }\limits_{j}{A}_{j}{\gamma }_{j} \times \frac{\overrightarrow{x} - {\overrightarrow{x}}_{j}^{\prime }}{{\left( {h}^{2} + {\begin{Vmatrix}\overrightarrow{x} - {\overrightarrow{x}}_{j}^{\prime }\end{Vmatrix}}^{2}\right) }^{\frac{3}{2}}}. \tag{14}
153
+ $$
154
+
155
+ The index $j$ denotes vortex source points. Each source point stores a tangential vortex vector, and the generated velocity field is measured at the points ${\overrightarrow{x}}_{i}$. We require the number of collocation points to be more than twice the number of source points to have an over-determined system of equations, which ensures a unique solution.
156
+
157
+ Unfortunately, this method is not very suitable for real-time applications since it requires computing a pseudo-inverse. Solving the linear system of equations to resolve collisions becomes a significant computational bottleneck.
158
+
159
+ [4] introduced an alternative for simply connected closed surfaces where the matrix inversion step is avoided. This approach is ideal for game applications. We find the Rankine collision field ${\overrightarrow{v}}_{R}$ by treating each source point as a Rankine field source. We then obtain the Rankine image field by adding the contributions from the source points,
160
+
161
+ $$
162
+ {\overrightarrow{v}}_{R}\left( \overrightarrow{x}\right) = {\int }_{S}\overrightarrow{n} \cdot \left( {{\overrightarrow{u}}_{s} - \left( {{\overrightarrow{u}}_{p} + {\overrightarrow{u}}_{\infty }}\right) }\right) \nabla {Gd}\overrightarrow{x}. \tag{15}
163
+ $$
164
+
165
+ ![01963e63-71a7-792f-bc87-eb5b005560e6_3_932_149_718_630_0.jpg](images/01963e63-71a7-792f-bc87-eb5b005560e6_3_932_149_718_630_0.jpg)
166
+
167
+ Figure 4: Source points on collision surfaces (left). The points are moved by the underlying rig without dependence on the surface mesh. We measure the relative velocity of the fluid at the position of the points and calculate the appropriate source strengths to enforce the boundary conditions. Additional vortex particles (right) are advected in the ambient velocity field.
168
+
169
+ Here $G$ is the Rankine Green’s function $1/\begin{Vmatrix}{\overrightarrow{x} - {\overrightarrow{x}}^{\prime }}\end{Vmatrix}$ and ${\overrightarrow{v}}_{R}\left( \overrightarrow{x}\right)$ is the Rankine image velocity field over the surface. By superimposing this field on the free-space solution, the normal flow is minimized, as shown in detail by [4]. While this approach only works for each collision surface in isolation, the continual measurement of the surface velocity field on all surfaces ensures that the perturbations to the velocity field created by one object are mapped onto the surfaces of all other objects in the scene. To carry out this integral, we discretize it by using the mollified Green's function and discrete integration over the source points on the collision surface,
170
+
171
+ $$
172
+ {\overrightarrow{v}}_{R}\left( \overrightarrow{x}\right) = \mathop{\sum }\limits_{j}{A}_{j}{\overrightarrow{n}}_{j} \cdot \left( {{\overrightarrow{u}}_{s}\left( {\overrightarrow{x}}_{j}\right) - {\overrightarrow{u}}_{p}\left( {\overrightarrow{x}}_{j}\right) - {\overrightarrow{u}}_{\infty }\left( {\overrightarrow{x}}_{j}\right) }\right) \nabla G\left( {\overrightarrow{x} - {\overrightarrow{x}}_{j}}\right) . \tag{16}
173
+ $$
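+ A sketch of the discrete Rankine image field of Eq. (16), assuming the same mollification radius $h$ as in Eq. (14) and per-source scalars holding the measured normal relative velocity.
+
+ ```python
+ import numpy as np
+
+ def rankine_image_velocity(x, src_pos, src_area, u_rel_n, h):
+     """Eq. (16): src_pos (M, 3) source points, src_area (M,) areas A_j,
+     u_rel_n (M,) the scalars n_j . (u_s - u_p - u_inf) at each source."""
+     d = x - src_pos                                    # (M, 3) offsets
+     denom = (h**2 + np.sum(d * d, axis=1)) ** 1.5      # mollified, as in Eq. (14)
+     grad_G = -d / denom[:, None]                       # gradient of 1/||d||
+     return np.sum((src_area * u_rel_n)[:, None] * grad_G, axis=0)
+ ```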
174
+
175
+ The Rankine collision field is divergence free, but the interesting fluid behaviour associated with surface interactions requires an estimate of the fluid rotation generated in the thin viscous boundary layer that forms in real fluids. We use a simple model to transfer rotation to vortex particles close to the surface. The procedure is shown in Figure 6. The tangential component is extracted from ${\overrightarrow{v}}_{R}$. The initialized vortex vector is perpendicular to the surface normal and to the tangential velocity component. Vortex particles can be initialized with a small normal displacement $\varepsilon$ from the surface. The vorticity is determined such that the generated velocity field cancels the tangential part of ${\overrightarrow{v}}_{R}$ at the surface. The vortex vector is stored on the surface particles and mapped to nearby free-flowing vortex particles with an exponentially decreasing kernel to simulate turbulence generation in the boundary layer.
176
+
177
+ ### 3.3 Velocity Broadcast
178
+
179
+ It is possible to create appealing turbulent fluid motion with a small number of vortex particles, but we still need a way of representing density.
180
+
181
+ ![01963e63-71a7-792f-bc87-eb5b005560e6_4_217_218_521_276_0.jpg](images/01963e63-71a7-792f-bc87-eb5b005560e6_4_217_218_521_276_0.jpg)
182
+
183
+ Figure 5: A vortex particle is converted to a vortex segment. The velocity is evaluated at each end of the segment. When the segment has been stretched in the velocity field it is converted back into a vortex particle.
184
+
185
+ ![01963e63-71a7-792f-bc87-eb5b005560e6_4_185_798_644_199_0.jpg](images/01963e63-71a7-792f-bc87-eb5b005560e6_4_185_798_644_199_0.jpg)
186
+
187
+ Figure 6: The relative velocity between the surface and the ambient fluid is measured and the no-through Rankine collision field is calculated using equation 16 (left). We initialize vorticity on the surface to match the tangential component of the Rankine velocity (middle), and map vorticity to the particles in the surrounding flow (right).
188
+
189
+ To render the effect, we distinguish between vortex particles and tracer particles. The velocity field is calculated directly on each vortex particle but this is not feasible for the millions of tracer particles used to render the effect. To update the velocity on millions of tracer particles in real time, we found that the best solution is an underlying scratchpad grid. The velocity field is calculated on grid nodes and tri-linear interpolation is used to update the velocity of the tracer particles within the grid. A second-order explicit Adams-Bashforth scheme was used for time integration as this yields better defined turbulence compared to a simple Euler time stepping scheme,
190
+
191
+ $$
192
+ {\overrightarrow{x}}_{t}^{\left( n + 1\right) } = {\overrightarrow{x}}_{t}^{n} + \Delta t\left( {\frac{3}{2}{\overrightarrow{v}}_{t}^{n} - \frac{1}{2}{\overrightarrow{v}}_{t}^{\left( n - 1\right) }}\right) . \tag{17}
193
+ $$
194
+
195
+ The grid is sparse in the sense that each voxel only queries the nearest vortex particles and empty grid regions are cheap to update. It is possible to use large grid domains without a significant compromise to efficiency. Tracer particles simply trace the velocity field and are not needed to update the dynamics. Since each tracer particle only needs to source the velocity from the grid, millions of particles can be traced in real time, though sub-millisecond performance restricts the count to $\sim {10}^{6}$ on an RTX 2080 Max-Q GPU. For all the simulations shown in this work, we use 4000-8000 vortex particles.
196
+
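+ Putting equation 17 together with the trilinear lookup, the per-frame tracer update can be sketched as below. This is our own NumPy illustration with assumed array shapes, not the paper's GPU code; the $\Delta t$ factor follows equation 17:
+
+ ```python
+ import numpy as np
+
+ def advance_tracers(x, v_prev, grid_v, origin, dx, dt):
+     """Sketch of the tracer update: trilinear velocity lookup on the
+     scratchpad grid, then the Adams-Bashforth step of Eq. 17.
+
+     x      : (N, 3) tracer positions
+     v_prev : (N, 3) tracer velocities from the previous step
+     grid_v : (nx, ny, nz, 3) velocities evaluated on the grid nodes
+     origin : (3,) world position of grid node (0, 0, 0)
+     dx, dt : grid spacing and time step (uniform grid assumed)
+     """
+     g = (x - origin) / dx
+     i = np.clip(g.astype(int), 0, np.array(grid_v.shape[:3]) - 2)
+     f = g - i                                   # fractional cell coordinates
+     v = np.zeros_like(x)
+     for ci in (0, 1):                           # trilinear blend over the
+         for cj in (0, 1):                       # eight surrounding nodes
+             for ck in (0, 1):
+                 w = (np.where(ci, f[:, 0], 1 - f[:, 0]) *
+                      np.where(cj, f[:, 1], 1 - f[:, 1]) *
+                      np.where(ck, f[:, 2], 1 - f[:, 2]))
+                 v += w[:, None] * grid_v[i[:, 0] + ci, i[:, 1] + cj, i[:, 2] + ck]
+     return x + dt * (1.5 * v - 0.5 * v_prev), v  # AB2 step (Eq. 17)
+ ```
+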
197
+ ## 4 RESULTS
198
+
199
+ We have simulated several examples showing the interaction between streams of fluids and different game characters. The motion capture library and character geometries from mixamo.com were used in the simulations. This library contains complex animations like dancing, running and jumping. Different characters with significant shape variations are used to further demonstrate the versatility of the method. With just 4096 vortex particles, our method can simulate fluids with rich dynamics. The physics easily fits within the computational budget of most games and we are able to update millions of tracer particles in real time. Tracing the velocity field constitutes the computational bottleneck of our method. We found that tracer particle counts up to $\sim {10}^{6}$ are feasible for sub-millisecond performance.
200
+
201
+ Table 1: Time measurements of the physics update with different combinations of tracer particles and vortex particles. The number in brackets denotes the upper limit of particle queries admitted for each velocity evaluation. With $\infty$, we denote an unlimited number of queries. Unless explicitly mentioned, the illustrations presented in this work used ${140}^{3}$ tracer particles, ${16}^{3}$ vortex particles, ${100}^{3}$ grid nodes and 32 as the query limit. All steps of our algorithm are implemented on the GPU and only the configuration of the character rig (bone transforms) needs to be transferred on each time step. We list this step separately as access to the bone transforms will be readily available on the GPU for most game applications.
202
+
203
+ <table><tr><td>Name</td><td>Tracer pts / Vortex pts / Grid</td><td>Time (ms)</td></tr><tr><td>Rig Transfer</td><td>-</td><td>0.6</td></tr><tr><td>Laminar Beam</td><td>${100}^{3}/{16}^{3}\left\lbrack {64}\right\rbrack /{100}^{3}$</td><td><0.3</td></tr><tr><td>Laminar Beam</td><td>${100}^{3}/{20}^{3}\left\lbrack {64}\right\rbrack /{100}^{3}$</td><td>0.4</td></tr><tr><td>Laminar Beam</td><td>${140}^{3}/{20}^{3}\left\lbrack {64}\right\rbrack /{100}^{3}$</td><td>3.1</td></tr><tr><td>Laminar Beam</td><td>${140}^{3}/{20}^{3}\left\lbrack {32}\right\rbrack /{100}^{3}$</td><td>2.3</td></tr><tr><td>Buoyancy Driven</td><td>${100}^{3}/{32}^{3}\left\lbrack {32}\right\rbrack /{100}^{3}$</td><td>2.1</td></tr><tr><td>Buoyancy Driven</td><td>${140}^{3}/{32}^{3}\left\lbrack {32}\right\rbrack /{100}^{3}$</td><td>5.1</td></tr><tr><td>Buoyancy Driven</td><td>${140}^{3}/{16}^{3}\left( \infty \right) /{100}^{3}$</td><td>21.1</td></tr></table>
204
+
205
+ Figures 1 and 2 depict streams of fluid hitting characters in motion. The turbulence generation from the boundary layer is evident in Figure 2, and our method can also be used to simulate a variety of flame-like effects by seeding vortex particles with random vorticities at the fluid source as shown in Figure 1. Figure 3 shows collisions with a simple sphere object. The source particles on the collision surface create an accurate collision field, though this requires a sufficiently large search radius. The source particles on surfaces are treated like the vortex particles in the surrounding fluid and it is important to include a sufficient number of source particles to resolve the collisions accurately. The side-by-side simulations in Figure 7 show two similar scenarios with different numbers of tracer particles. Including rendering, we can simulate at more than 800 fps with ${10}^{6}$ tracer particles and the dynamics are unchanged by the tracer particle count.
206
+
207
+ ## 5 DISCUSSION AND LIMITATIONS
208
+
209
+ Pure vortex particle methods are well suited for real-time fluid simulation in game applications but have not been used widely. The method outlined here is simple to implement and fast enough to fit within the computational budget of most games. In this work, we have explored a limited set of applications, specifically the interactions between characters and fluids, which are a particular challenge for existing methods. By placing vortex particles on surfaces and using the matrix-free collision method, this can be handled easily with the proposed method. Several improvements are possible. The placement of source particles on collision geometry is fixed at runtime, yet it could be advantageous to place them dynamically in the vicinity of fluid density. In particular, this may be required for large game worlds where all surfaces could potentially be collision surfaces. The continuous velocity fields created by the vortex particles are a decidedly advantageous feature of pure vortex methods. Since the evaluation of the velocity field represents the computational bottleneck, the number of velocity samples can be adapted to fit the computational budget and the level of detail needed for a particular application. A continuous velocity field entails that it is easy to swap the tracer particles for other density representations such as grid based density fields or texture representations. Vortex methods are well suited for gaseous fluids but they are difficult to adapt to other fluid phenomena. In particular, the inclusion of a free surface required for liquids is not straight-forward, although the approach of [13] is an example of a vortex method for liquids. Such approaches are not necessarily more efficient than their velocity based counterparts.
210
+
211
+ ![01963e63-71a7-792f-bc87-eb5b005560e6_5_162_149_702_743_0.jpg](images/01963e63-71a7-792f-bc87-eb5b005560e6_5_162_149_702_743_0.jpg)
212
+
213
+ Figure 7: 4096 vortex particles instigate detailed fluid motion in a field of tracer particles. The dynamics are unchanged by the number of tracer particles. ${10}^{6}$ tracer particles are used in the left image and ${2.74} \times {10}^{6}$ tracer particles are used on the right. The left simulation including rendering runs at $\sim {900}$ fps and the right simulation runs at $\sim {300}$ fps.
214
+
215
+ ## 6 CONCLUSION
216
+
217
+ We have presented a vortex based fluid solver capable of handling the intricate collisions between fluids and game characters. It is the first method to specifically target these interactions and resolve them at a high level of detail. Our method is fast enough for practical use in interactive applications like games or VR and represents an additional step towards bringing realism to fluid simulations for these kinds of applications.
218
+
219
+ ## REFERENCES
220
+
221
+ [1] Mridul Aanjaneya, Ming Gao, Haixiang Liu, Christopher Batty, and Eftychios Sifakis. 2017. Power diagrams and sparse paged grids for high resolution adaptive liquids. ACM Transactions on Graphics (TOG) 36, 4 (2017), 1-12.
222
+
223
+ [2] John D Anderson. 2005. Ludwig Prandtl's boundary layer. Physics Today 58, 12 (2005), 42-48.
224
+
225
+ [3] Ryoichi Ando and Christopher Batty. 2020. A Practical Octree Liquid Simulator with Adaptive Surface Resolution. ACM Trans. Graph. 39, 4, Article 32 (July 2020), 17 pages.
228
+
229
+ [4] Alexis Angelidis. 2015. Vorticle Fluid Simulation Technical Memo 15-01. Technical Report. Pixar Animation Studios.
230
+
231
+ [5] Alexis Angelidis. 2017. Multi-scale vorticle fluids. ACM Transactions on Graphics (TOG) 36, 4 (2017), 1-12.
232
+
233
+ [6] Alfred Barnat and Nancy S. Pollard. 2012. Smoke Sheets for Graph-Structured Vortex Filaments. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation (Lausanne, Switzerland) (SCA '12). Eurographics Association, Goslar, DEU, 77-86.
234
+
235
+ [7] Christopher Batty, Florence Bertails, and Robert Bridson. 2007. A fast variational framework for accurate solid-fluid coupling. ACM Transactions on Graphics (TOG) 26, 3 (2007), 100-es.
236
+
237
+ [8] Robert Bridson. 2015. Fluid simulation for computer graphics. A K Peters/CRC Press, New York.
238
+
239
+ [9] Tyson Brochu, Todd Keeler, and Robert Bridson. 2012. Linear-Time Smoke Animation with Vortex Sheet Meshes. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation (Lausanne, Switzerland) (SCA '12). Eurographics Association, Goslar, DEU, 87-95.
240
+
241
+ [10] Nuttapong Chentanez and Matthias Müller. 2011. Real-Time Eulerian Water Simulation Using a Restricted Tall Cell Grid. ACM Trans. Graph. 30, 4, Article 82 (July 2011), 10 pages.
242
+
243
+ [11] Alexandre Joel Chorin and Peter S Bernard. 1973. Discretization of a vortex sheet, with an example of roll-up. J. Comput. Phys. 13, 3 (1973), 423-429.
244
+
245
+ [12] Fang Da, David Hahn, Christopher Batty, Chris Wojtan, and Eitan Grinspun. 2016. Surface-Only Liquids. ACM Trans. Graph. 35, 4, Article 78 (July 2016), 12 pages.
246
+
247
+ [13] Abhinav Golas, Rahul Narain, Jason Sewall, Pavel Krajcevski, Pradeep Dubey, and Ming Lin. 2012. Large-Scale Fluid Simulation Using Velocity-Vorticity Domain Decomposition. ACM Trans. Graph. 31, 6, Article 148 (Nov. 2012), 9 pages.
248
+
249
+ [14] Leslie Greengard and Vladimir Rokhlin. 1987. A fast algorithm for particle simulations. Journal of Computational Physics 73, 2 (1987), 325-348.
250
+
251
+ [15] Mark J Harris. 2005. Fast fluid dynamics simulation on the GPU. SIGGRAPH Courses 220, 10.1145 (2005), 1198555-1198790.
252
+
253
+ [16] Rama C Hoetzlein. 2014. Fast fixed-radius nearest neighbors: interactive million-particle fluids. Online slides from GPU Technology Conference. Accessed November 2021.
254
+
255
+ [17] Miles Macklin and Matthias Müller. 2013. Position based fluids. ACM Transactions on Graphics (TOG) 32, 4 (2013), 1-12.
256
+
257
+ [18] Miles Macklin, Matthias Müller, Nuttapong Chentanez, and Tae-Yong Kim. 2014. Unified Particle Physics for Real-Time Applications. ACM Trans. Graph. 33, 4, Article 153 (July 2014), 12 pages.
258
+
259
+ [19] Marek Krzysztof Misztal, Kenny Erleben, Adam Bargteil, Jens Fursund, Brian Bunch Christensen, Jakob Andreas Bærentzen, and Robert Bridson. 2013. Multiphase flow of immiscible fluids on unstructured moving meshes. IEEE Transactions on Visualization and Computer Graphics 20, 1 (2013), 4-16.
260
+
261
+ [20] Ken Museth. 2013. VDB: High-resolution sparse volumes with dynamic topology. ACM Transactions on Graphics (TOG) 32, 3 (2013), 1-22.
262
+
263
+ [21] Ken Museth. 2021. NanoVDB: A GPU-Friendly and Portable VDB Data Structure For Real-Time Rendering And Simulation. In ACM SIGGRAPH 2021 Talks (Virtual Event, USA) (SIGGRAPH '21). Association for Computing Machinery, New York, NY, USA, Article 1, 2 pages.
264
+
265
+ [22] Rahul Narain, Jason Sewall, Mark Carlson, and Ming C. Lin. 2008. Fast Animation of Turbulence Using Energy Transport and Procedural Synthesis. In ACM SIGGRAPH Asia 2008 Papers (Singapore) (SIGGRAPH Asia '08). Association for Computing Machinery, New York, NY, USA, Article 166, 8 pages.
266
+
267
+ [23] Sang Il Park and Myoung Jun Kim. 2005. Vortex Fluid for Gaseous Phenomena. In Proceedings of the 2005 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (Los Angeles, California) (SCA '05). Association for Computing Machinery, New York, NY, USA, 261-270.
268
+
269
+ [24] Tobias Pfaff, Nils Thuerey, Jonathan Cohen, Sarah Tariq, and Markus Gross. 2010. Scalable Fluid Simulation Using Anisotropic Turbulence Particles. In ACM SIGGRAPH Asia 2010 Papers (Seoul, South Korea) (SIGGRAPH ASIA '10). Association for Computing Machinery, New York, NY, USA, Article 174, 8 pages.
270
+
271
+ [25] Tobias Pfaff, Nils Thuerey, and Markus Gross. 2012. Lagrangian Vortex Sheets for Animating Fluids. ACM Trans. Graph. 31, 4, Article 112 (July 2012), 8 pages.
272
+
273
+ [26] Andrew Selle, Nick Rasmussen, and Ronald Fedkiw. 2005. A Vortex Particle Method for Smoke, Water and Explosions. ACM Trans. Graph. 24, 3 (July 2005), 910-914.
274
+
275
+ [27] Jos Stam. 1999. Stable Fluids. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '99). ACM Press/Addison-Wesley Publishing Co., USA, 121-128.
276
+
277
+ [28] Jos Stam. 2015. The art of fluid animation. A K Peters/CRC Press, New York.
278
+
279
+ [29] Steffen Weißmann and Ulrich Pinkall. 2010. Filament-Based Smoke with Vortex Shedding and Variational Reconnection. ACM Trans. Graph. 29, 4, Article 115 (July 2010), 12 pages.
280
+
281
+ [30] He Yan, Zhangye Wang, Jian He, Xi Chen, Changbo Wang, and Qunsheng Peng. 2009. Real-time fluid simulation with adaptive SPH. Computer Animation and Virtual Worlds 20, 2-3 (2009), 417-426.
282
+
283
+ [31] Meng Zhang, Weixin Si, Yinling Qian, H. Sun, J. Qin, and P. Heng. 2015. Vortex Filaments in Grids for Scalable, Fine Smoke Simulation. IEEE Computer Graphics and Applications 35 (2015), 60-68.
284
+
285
+ [32] Xinxin Zhang and Robert Bridson. 2014. A PPPM fast summation method for fluids and beyond. ACM Transactions on Graphics (TOG) 33, 6 (2014), 1-11.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/BrBlpeYNTMc/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,245 @@
1
+ § FAST VORTEX PARTICLE METHOD FOR FLUID-CHARACTER INTERACTION
2
+
3
+ Category: Research
4
+
5
+ § ABSTRACT
6
+
7
+ High fidelity interactions between game characters and gaseous effects like smoke, fire and explosions are often neglected in real-time applications due to the high computational cost of detailed fluid simulation. We present a purely vortex particle based fluid model for games which is capable of resolving the collision between fluids and complex objects such as moving game characters in real time. Contrary to most other vorticity based methods, we use a simple inversion free approach to obtain the collision velocity field on surfaces while at the same time avoiding the expensive pressure projection step associated with pressure based fluid solvers. This entails that our method fits within the restricted computational budget of most games. We showcase the efficacy of our method with simulations involving over 1M GPU particles and simulation times below 1 ms.
8
+
9
+ Index Terms: Game physics-Simulation-Fluids-Real-time graphics; Animation-Visualization-Game characters-Vortex method
10
+
11
+ § 1 INTRODUCTION
12
+
13
+ Real-time simulation of fluids is a valuable addition to interactive applications such as games or virtual reality. While smoke, fire and explosions are key components in immersive gaming experiences, physically correct simulation of fluids is usually not feasible at high resolution. Smoke and explosions are sometimes pre-simulated in high resolution and played back in real-time but this approach precludes any interactions with characters, which depend on input from the user and can only be known at run-time. To represent interactions with fluids in a believable way, a detailed representation of the fluid velocity field close to the character surface is needed to match the high resolution geometric models used for characters in modern games. For game applications, vortex particle methods provide an interesting alternative to the more common velocity-pressure based representation of fluid state where the iterative pressure projection step constitutes a computational bottleneck for real-time applications. By evolving a vorticity field discretized by particles, no pressure projection is needed and the divergence free velocity field can be derived by using the Biot-Savart law. In addition, vortex methods generate unbounded continuous solutions. This allows for adaptive sampling of the velocity fields depending on the level of detail required and the available computational budget. While vortex methods allow for detailed real-time simulation in the absence of kinematic boundary conditions, it is not possible to define the boundary conditions in terms of vorticity. Boundary element methods are common approaches for enforcing kinematic boundary conditions in vorticity based fluid methods but they are computationally expensive, and the methods used in most vortex codes like [23] and [5] to obtain a vortex sheet strength across collision surfaces are not very suitable for real-time applications as they require surface integration and inversion of a coupling matrix. Instead, we use a collocation method inspired by the method of [4]. This cheap collision method allows for believable transfer of complex character motion into the surrounding fluid. To further enhance realism we account for vortex shedding from the thin viscous boundary layer at the collision interface by initializing new vortex particles where the tangential velocity of the fluid tends to zero due to shear stresses [2]. The continuous velocity fields generated by the vortex particles allow for many different kinds of fluid representations. Here we choose to visualize the effect with large numbers of passive tracer particles advected in the ambient velocity field. To make this feasible, we first evaluate the velocity field on an intermediate scratchpad grid and then broadcast the field to the tracer particles using linear interpolation. Some examples of our method are shown in Figure 1 where a moving game character is being hit by a plume. Figure 2 shows a similar effect where turbulence is generated solely at the collision interface leading to intricate swirly fluid motion downstream.
14
+
15
+ <graphics>
16
+
17
+ Figure 1: Our vortex method simulates the interaction between a moving character and fluids in real time. A typical game effect is shown: a moving game character hit by a plume of fire.
18
+
19
+ The time allocated to the physics update in most AAA games is limited; thus, a real-time capable fluid solver is not necessarily efficient enough for a game, but should be able to create appealing fluid simulations in a fraction of a millisecond. Our method allows for sub-millisecond simulation and detailed fluid behavior. The main contributions in this work are:
20
+
21
+ * A purely vortex based particle method for games. Our method simulates intricate turbulence generation and guarantees divergence-free flows.
22
+
23
+ * The method creates continuous velocity fields and fluids can be represented as particles, in volumes or in textures without affecting the underlying dynamics.
24
+
25
+ * We adopt a simple approach for handling complex boundaries and infer an appropriate image vorticity field across surfaces without the need for matrix inversion steps.
26
+
27
+ * A computationally light-weight density representation based on tracer particles which are advected passively in the velocity field and otherwise decoupled from the physics update step.
28
+
29
+ § 2 RELATED WORK
30
+
31
+ Fluid simulation in graphics is a vast field that has been developed over the years to facilitate increased fidelity and efficiency. Textbooks like [8] or [28] give a good introduction to many of these contributions. In this work, we primarily focus on the subset of methods specifically intended for real-time applications. Simulation techniques using regular grids as the underlying discretization of space are widely based on the work of [27]. These methods lend themselves particularly well to GPU architectures. Particle based methods like those developed by [17] and [30] have also been used to simulate fluids in real time while simulations on unstructured grids like [19] and surface-only methods like those used by [12] are particularly useful for accurate handling of interfaces and collisions. Sparse data structures like the tall-cell grids used by [10], octrees employed by [1] and VDB volumes [20] have accelerated fluid simulation even further by limiting the high-resolution simulation domain to areas where it is needed such as at a free surface or near obstacles. The above mentioned methods facilitate real-time fluid simulation but current methods must be accelerated even further to allow for practical use in large game worlds where fluid simulation can only occupy a fraction of the available computational budget.
32
+
33
+ <graphics>
34
+
35
+ Figure 2: A stream of tracer particles is emitted from the top. The stream hits an animating character. Vortex particles are initialized in the collision boundary layer. The amount of vortex shedding is increased from left to right to achieve different looks.
36
+
37
+ § 2.1 VORTEX SHEETS, PARTICLES AND FILAMENTS
38
+
39
+ Vortex methods commonly use particles, filaments and sheets to discretize the vorticity field from which a divergence free velocity field can be derived. While the state of a fluid can be evolved solely on the basis of a vorticity field, vortex particles have also been used to enrich low-resolution fluid simulations as shown by [26]. [22] and [24] used vortex particles for procedural turbulence advection in an underlying velocity based fluid solver. Vortex particles provide a cheap way of introducing details in a fluid simulation, and they are good candidates for game applications where particle systems are used frequently. However, the kinematic boundary conditions required on collision surfaces are not trivial to enforce. Pure vortex particle methods like those described by [23] require the pseudo-inverse of a collision matrix for each time-step, making this approach unsuitable for real-time applications. Evaluation of the velocity field requires querying all vortex particles in the simulation whereas our proposed method simplifies this by using a clamped vortex kernel. Hybrid grid and particle methods like [32] have been proposed which are efficient for fast computation of long-range interactions on an underlying grid, while using particle-particle interactions to model short-range interactions. Methods such as the fast multipole method by [14] can accelerate the expensive summation from $O\left( {NM}\right)$ to $O\left( {N + M}\right)$ for $N$ vortex particles and $M$ locations. The FMM adds a large computational overhead and only outperforms direct summation for large particle counts where $N > {100000}$ [32], which is far above the vortex particle counts used in this work.
40
+
41
+ Vortex sheets are boundary element methods used to evolve the vorticity field on interfaces such as the interface between collision objects and fluids, and the interfaces between fluids. [9] used a vortex sheet method to model smoke plumes in linear time using the fast multipole method. [25] simulated smoke plumes by combining a vortex sheet method with an Eulerian grid solver to handle collisions. Closely related methods like [12] were applied for surface only liquids. Vortex filament methods use closed loops of vorticity to represent the state of fluids. This approach is suitable for gaseous plumes. [31], [29] and [6] demonstrated how these methods provide a very cheap way of creating the intricate dynamics of smoke while [29] used vortex filaments and filament shedding around solid objects to handle collisions with objects. Vortex shedding entails seeding of vorticity in the collision boundary layer, and we adopt this approach to enrich fluid collisions with characters. Vortex filaments and sheets represent a vorticity field that is divergence free by default, and vortex stretching is handled trivially. Unfortunately, both methods require re-discretization as simulations evolve in order to ensure fidelity. This also entails that the number of vortex elements may grow beyond what is feasible for real-time scenarios.
42
+
43
+ § 2.2 FAST PARTICLE BASED FLUID APPROACHES
44
+
45
+ Particle based physics solvers are used abundantly in games. [18] presented a unified particle based framework for real-time physics based on position based dynamics (PBD). PBD is also applicable to fluid simulation as shown by [17]. While PBD is a generalizable and robust simulation method, it can be expensive for detailed simulations where large quantities of particles are required. In addition, sufficient iterations are required to ensure incompressible velocity fields and stable simulations. Vortex methods are difficult to extend beyond their applications for gaseous fluid phenomena whereas PBD trivially handles free surfaces. On the other hand, a vortex particle code only needs to advect a small number of vortex particles to generate complex turbulence patterns. The same level of detail is generally not feasible for PBD with the current computational budget of games.
46
+
47
+ § 2.3 FAST SPARSE GRID BASED APPROACHES
48
+
49
+ The regular data structures used in grid based fluid solvers lend themselves well to GPU implementations. Fluid solvers based on the original work by [27], like [15], discretize collision objects on the simulation grid, which entails that increased collision fidelity requires a global refinement of the simulation domain. [7] proposed a variational framework to address this issue, while the advent of sparse data structures like tall cells by [10], octrees by [3], and recently GPU-optimized VDB volumes by [21] allows for local discretizations in the vicinity of free surfaces and collision objects. While in particular the NanoVDBs presented by [21] can simulate fluids in unprecedented detail, they still require a volumetric representation of collision geometry. For the deforming character surface, this requires access to the character mesh at run-time and an update of the VDB data structure. Our method uses a surface-only approach and only needs to update the positions and orientations of the discrete source points scattered across the mesh.
50
+
51
+ § 3 METHOD
52
+
53
+ Incompressible fluids are governed by the mass conservation relation $\nabla \cdot \overrightarrow{v} = 0$ and the Navier-Stokes equation for conservation of momentum:
54
+
55
+ $$
56
+ \frac{\partial \overrightarrow{v}\left( {\overrightarrow{x},t}\right) }{\partial t} + \left( {\overrightarrow{v} \cdot \nabla }\right) \overrightarrow{v} = \frac{1}{\rho }\left( {\overrightarrow{f} + \mu {\nabla }^{2}\overrightarrow{v} - \nabla p}\right) \tag{1}
57
+ $$
58
+
59
+ <graphics>
60
+
61
+ Figure 3: By measuring the free-space velocity field at collocation points on geometry surfaces, we create an accurate divergence free collision velocity. Here, a stream of particles moves over a simple sphere. The accuracy of the collision field is dependent on the collocation point density. To control the look of the smoke, we distribute vortex particles on the surface and shed them into the ambient flow.
62
+
63
+ where $p$ is the pressure, $\rho$ is the density, which is assumed to be constant, $\overrightarrow{f}$ are external forces like gravity and baroclinity, and $\mu$ is the dynamic viscosity. It is possible to define an alternative version of the momentum equation based on vorticity, the vector field describing the rotation of the fluid:
64
+
65
+ $$
66
+ \overrightarrow{\omega } = \nabla \times \overrightarrow{v} \tag{2}
67
+ $$
68
+
69
+ Taking the curl of the momentum equation 1 yields a new equation for the time evolution of vorticity:
70
+
71
+ $$
72
+ \frac{\partial \overrightarrow{\omega }\left( {\overrightarrow{x},t}\right) }{\partial t} + \left( {\overrightarrow{v} \cdot \nabla }\right) \overrightarrow{\omega } + \left( {\overrightarrow{\omega } \cdot \nabla }\right) \overrightarrow{v} = \frac{1}{\rho }\left( {\nabla \times \overrightarrow{f} + \mu {\nabla }^{2}\omega }\right) . \tag{3}
73
+ $$
74
+
75
+ The vorticity field can be evolved in time according to equation 3 without the need for any pressure projection steps. A fractional step method is commonly employed for time integration. In the fractional step method, the vorticity field is stepped in time assuming no kinematic boundary conditions. The initial free-space solution is then corrected for collisions by submerging collision objects into the initial unperturbed field. The relative velocity between the objects and the ambient fluids allows us to calculate superimposed collision fields from objects in the scene such that the normal component of velocity is minimized at the surface. Vortex particle methods are Lagrangian in nature and handle advection of vorticity $\left( {\left( {\overrightarrow{v} \cdot \nabla }\right) \overrightarrow{\omega }}\right)$ trivially by storing vorticity on entities that move in the ambient flow. The third term on the left $\left( {\left( {\overrightarrow{\omega } \cdot \nabla }\right) \overrightarrow{v}}\right)$ accounts for vortex stretching and is a feature of 3D fluids only since the vorticity vector is always perpendicular to the velocity field for 2D fluids. Vortex stretching transfers large scale rotation into smaller vorticles. In vortex particle methods, vortex stretching requires special attention (an estimate of $\nabla \overrightarrow{v}$) and diffusion is required to ensure stability. For game applications, stretch and diffusion could be neglected without very noticeable impacts on realism but we include these terms for completeness. Vortex stretch degrades performance since the method we employ requires two evaluations of the velocity field for each particle, and vortex diffusion is needed to ensure that the vorticity field stays approximately divergence free.
76
+
77
+ § 3.1 FUNDAMENTAL SOLUTIONS
78
+
79
+ A well behaved divergence free velocity field $\overrightarrow{v}\left( {\overrightarrow{x},t}\right)$ can be represented through a vector potential $\overrightarrow{A}$ . The velocity field is obtained by taking the curl of $\overrightarrow{A}$ which ensures that the divergence of the velocity field is zero,
82
+
83
+ $$
84
+ \overrightarrow{v}\left( {\overrightarrow{x},t}\right) = \nabla \times \overrightarrow{A}\left( {\overrightarrow{x},t}\right) . \tag{4}
85
+ $$
86
+
87
+ $\overrightarrow{A}$ is degenerate since the addition of any curl-free vector field yields the same velocity. To tie down the vector potential, it can be assumed that $\nabla \cdot \overrightarrow{A}\left( {\overrightarrow{x},t}\right) = 0$ . In that case the vorticity field $\omega \left( {\overrightarrow{x},t}\right)$ and the vector potential are related through a vector Laplacian,
88
+
89
+ $$
90
+ \overrightarrow{\omega }\left( {\overrightarrow{x},t}\right) = \nabla \times \nabla \times \overrightarrow{A}\left( {\overrightarrow{x},t}\right) = - {\nabla }^{2}\overrightarrow{A}\left( {\overrightarrow{x},t}\right) . \tag{5}
91
+ $$
92
+
93
+ In the absence of boundaries and under the assumption that the velocity field goes to zero at infinity, the solution can be composed of a linear combination of fundamental free-space solutions or Green's functions. The vector potential is obtained by integrating the fundamental solutions over the domain,
94
+
95
+ $$
96
+ \overrightarrow{A}\left( {\overrightarrow{x},t}\right) = \frac{1}{4\pi }{\int }_{V}\frac{\overrightarrow{\omega }\left( {{\overrightarrow{x}}^{\prime },t}\right) }{\begin{Vmatrix}\overrightarrow{x} - {\overrightarrow{x}}^{\prime }\end{Vmatrix}}d{\overrightarrow{x}}^{\prime }. \tag{6}
97
+ $$
98
+
99
+ Taking the curl of equation 6 leads to the Biot-Savart formula for the velocity field,
100
+
101
+ $$
102
+ \overrightarrow{v}\left( {\overrightarrow{x},t}\right) = \frac{1}{4\pi }{\int }_{V}\overrightarrow{\omega }\left( {{\overrightarrow{x}}^{\prime },t}\right) \times \frac{\overrightarrow{x} - {\overrightarrow{x}}^{\prime }}{{\begin{Vmatrix}\overrightarrow{x} - {\overrightarrow{x}}^{\prime }\end{Vmatrix}}^{3}}d{\overrightarrow{x}}^{\prime }. \tag{7}
103
+ $$
104
+
105
+ The vortex particles act as quadrature points in the discrete version of equation 7. To avoid singularities when $\overrightarrow{x} = {\overrightarrow{x}}^{\prime }$ , we use a mollified solution similar to [11]. This is analogous to the inclusion of a smoothing radius $h$ in the denominator which effectively limits the minimum swirl size,
106
+
107
+ $$
108
+ \overrightarrow{v}\left( {\overrightarrow{x},t}\right) = \frac{1}{4\pi }\mathop{\sum }\limits_{i}{V}_{i}{\omega }_{i} \times \frac{\overrightarrow{x} - {\overrightarrow{x}}_{i}^{\prime }}{{\left( {h}^{2} + {\begin{Vmatrix}\overrightarrow{x} - {\overrightarrow{x}}_{i}^{\prime }\end{Vmatrix}}^{2}\right) }^{\frac{3}{2}}}. \tag{8}
109
+ $$
110
+
111
+ The vortex blob volume is given by ${V}_{i}$ , ${\overrightarrow{\omega }}_{i}$ is the blob vortex density, and ${\overrightarrow{w}}_{i} = {V}_{i}{\overrightarrow{\omega }}_{i}$ is the vorticity stored on each vortex particle. Equation 8 is used to obtain the velocity field anywhere in space. To avoid iterating over every vortex particle in the simulation, we use a nearest neighbour search based on [16] to only query the nearest particles, which is sufficient for game applications although physical accuracy would require the contribution from all particles in the simulation either through direct summation or multipole methods. Nearest neighbour searches between vortex particles are cheap since the number of vortex particles is typically only a few thousand compared to other particle based methods where searches between millions of particles may be required. We optimize the simulation further by excluding vortex particles from the simulation when their vorticity falls below a certain threshold.
112
+
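+ A minimal sketch of the clamped evaluation of equation 8 follows. The brute-force nearest-particle selection stands in for the fixed-radius neighbour search of [16], and the names and query budget are our own assumptions:
+
+ ```python
+ import numpy as np
+
+ def induced_velocity(x, p_pos, p_w, h=0.1, k_nearest=32):
+     """Sketch of Eq. 8: mollified Biot-Savart velocity at point x.
+
+     x         : (3,) evaluation point
+     p_pos     : (N, 3) vortex particle positions
+     p_w       : (N, 3) vorticity w_i = V_i * omega_i per particle
+     h         : smoothing radius (limits the smallest swirl size)
+     k_nearest : query budget; a clamp on the number of contributing
+                 particles instead of a sum over all of them.
+     """
+     r = x - p_pos
+     d2 = np.sum(r * r, axis=1)
+     # Keep only the nearest particles (brute-force sort used here; a
+     # fixed-radius hash grid would be used in practice).
+     idx = np.argsort(d2)[:k_nearest]
+     r, d2, w = r[idx], d2[idx], p_w[idx]
+     kernel = (h * h + d2) ** -1.5
+     return np.sum(np.cross(w, r) * kernel[:, None], axis=0) / (4.0 * np.pi)
+ ```
+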
113
+ The time dependent evolution of the vorticity is driven by vortex advection, stretching and diffusion. It is possible to get believable fluid-like motion by only using vorticity advection but vortex stretching can easily be included with the vortex segment approach introduced by [32]. Vortex particles do not have a spatial extent like filaments or sheets where stretch is handled trivially. To circumvent this, each vortex particle is converted into a vortex segment and the velocity is evaluated at each end of the segment. The stretched segment is then converted back into a vortex particle. The procedure is illustrated in Figure 5. The velocity field is measured at each end of the segment and the vorticity is updated based on the stretching of the segment,
114
+
115
+ $$
116
+ \overrightarrow{w} \leftarrow \overrightarrow{w} + k\frac{\Delta t}{h}\left( {\overrightarrow{v}\left( {\overrightarrow{q}}_{1}\right) - \overrightarrow{v}\left( {\overrightarrow{q}}_{0}\right) }\right) \tag{9}
117
+ $$
118
+
119
+ where ${\overrightarrow{q}}_{1}$ and ${\overrightarrow{q}}_{0}$ are the positions of the vortex segment ends, ${\overrightarrow{q}}_{0} = \overrightarrow{x} + \frac{h}{2}\frac{\overrightarrow{w}}{\begin{Vmatrix}\overrightarrow{w}\end{Vmatrix}}$ , ${\overrightarrow{q}}_{1} = \overrightarrow{x} - \frac{h}{2}\frac{\overrightarrow{w}}{\begin{Vmatrix}\overrightarrow{w}\end{Vmatrix}}$ , and $k$ is the circulation of the segment, defined by
120
+
121
+ $$
122
+ k = \frac{1}{h}\begin{Vmatrix}\overrightarrow{w}\end{Vmatrix}. \tag{10}
123
+ $$
124
+
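+ The segment-based stretching update of equations 9 and 10 can be sketched as follows (our notation; `velocity_at` is an assumed callback evaluating the induced velocity of equation 8):
+
+ ```python
+ import numpy as np
+
+ def stretch_update(x, w, velocity_at, h, dt):
+     """Sketch of the segment-stretching update (Eqs. 9-10).
+
+     x           : (3,) vortex particle position
+     w           : (3,) particle vorticity
+     velocity_at : callable returning the fluid velocity at a point
+     h           : segment length / smoothing radius
+     dt          : time step
+     """
+     wn = np.linalg.norm(w)
+     if wn < 1e-12:
+         return w                   # degenerate particle, nothing to stretch
+     d = w / wn                     # segment direction along the vorticity
+     q0 = x + 0.5 * h * d           # segment end points
+     q1 = x - 0.5 * h * d
+     k = wn / h                     # circulation of the segment (Eq. 10)
+     return w + k * (dt / h) * (velocity_at(q1) - velocity_at(q0))
+ ```
+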
125
+ Vortex stretching converts large swirls into smaller swirls and this can eventually lead to instabilities if the process is allowed to proceed unimpeded. Vortex diffusion is required to ensure stability when the vorticity field is undergoing stretch. The particle strength exchange method gradually homogenizes the vorticity field and ensures that it remains nearly divergence free. Therefore, the diffusion ${d\overrightarrow{\omega }}/{dt} = \nu {\nabla }^{2}\overrightarrow{\omega }$ is approximated with,
126
+
127
+ $$
128
+ \overrightarrow{\omega } \leftarrow \overrightarrow{\omega } + {\Delta t}\frac{2\nu }{\sigma }\mathop{\sum }\limits_{q}\left( {{V}_{q}{\overrightarrow{\omega }}_{q} - V\overrightarrow{\omega }}\right) \zeta \left( {\overrightarrow{x},{\overrightarrow{x}}_{q}^{\prime }}\right) \tag{11}
129
+ $$
130
+
131
+ where $\zeta$ is a normalized Gaussian,
132
+
133
+ $$
134
+ \zeta \left( {\overrightarrow{x},{\overrightarrow{x}}^{\prime }}\right) = \frac{1}{{\sigma }^{3}{\left( 2\pi \right) }^{3/2}}{e}^{-\frac{{\begin{Vmatrix}\overrightarrow{x} - {\overrightarrow{x}}^{\prime }\end{Vmatrix}}^{2}}{2{\sigma }^{2}}}, \tag{12}
135
+ $$
136
+
137
+ and where the viscosity $\nu$ and the smoothing radius $\sigma$ are exposed parameters.
138
+
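+ A dense $O(N^2)$ sketch of the particle strength exchange update of equations 11 and 12 is given below for reference; a practical implementation would restrict the sum to nearby particles, and all names are our own:
+
+ ```python
+ import numpy as np
+
+ def diffuse_vorticity(pos, w, nu, sigma, dt):
+     """Sketch of particle strength exchange (Eqs. 11-12): particles
+     exchange vorticity with neighbours under a Gaussian kernel.
+
+     pos   : (N, 3) particle positions
+     w     : (N, 3) vorticity per particle (w already carries the
+             volume weighting, w_i = V_i * omega_i)
+     nu    : viscosity, an exposed parameter
+     sigma : Gaussian smoothing radius, an exposed parameter
+     dt    : time step
+     """
+     zeta_norm = 1.0 / (sigma ** 3 * (2.0 * np.pi) ** 1.5)
+     d2 = np.sum((pos[:, None, :] - pos[None, :, :]) ** 2, axis=-1)  # (N, N)
+     zeta = zeta_norm * np.exp(-d2 / (2.0 * sigma ** 2))
+     # Pairwise exchange (w_q - w) drives the field toward homogeneity.
+     exchange = zeta @ w - np.sum(zeta, axis=1)[:, None] * w
+     return w + dt * (2.0 * nu / sigma) * exchange
+ ```
+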
139
+ § 3.2 BOUNDARY CONDITIONS
140
+
141
+ To enforce boundary conditions, source points are stuck to the collision surface as illustrated in Figure 4. The ambient velocity field is measured at the source points and we use them to generate an image velocity field which minimizes the normal component of flow. Figure 4 shows the configuration of 512 source points on the surface of a character. The ambient velocity field consists of ${\overrightarrow{u}}^{s}$ , which is the velocity of the surface itself, ${\overrightarrow{u}}^{\infty }$ , which is a superimposed harmonic velocity (such as an initial flow velocity), and ${\overrightarrow{u}}^{p}$ , which is the turbulent velocity generated by the vortex particles in the ambient fluid.
142
+
143
+ We could view the points on the collision surface as vortex particles with a collision vorticity which is indeed a common procedure in vortex particle methods. An optimal vortex sheet strength can be posed as a regression problem.
144
+
145
+ $$
146
+ {\gamma }_{j}^{ * } = \arg \mathop{\min }\limits_{{\gamma }_{j}}\mathop{\sum }\limits_{i}{\left( \left( {\overrightarrow{u}}_{i}^{s} - {\overrightarrow{u}}_{i}^{\infty } - {\overrightarrow{u}}_{i}^{p} + \overrightarrow{v}\left( {\overrightarrow{x}}_{i},{\gamma }_{j}\right) \right) \cdot {\overrightarrow{n}}_{i}\right) }^{2}. \tag{13}
147
+ $$
148
+
149
+ Here ${\gamma }^{ * }$ is the optimal vortex sheet strength, ${\overrightarrow{u}}_{i}^{s},{\overrightarrow{u}}_{i}^{\infty }$ and ${\overrightarrow{u}}_{i}^{p}$ are the ambient velocity components measured at collocation points at the surface position ${x}_{i}$ and $\overrightarrow{v}\left( {{\overrightarrow{x}}_{i},{\gamma }_{j}}\right)$ is the image velocity field,
150
+
151
+ $$
152
+ \overrightarrow{v}\left( {\overrightarrow{x},{\gamma }_{j}}\right) = \frac{1}{4\pi }\mathop{\sum }\limits_{j}{A}_{j}{\gamma }_{j} \times \frac{\overrightarrow{x} - {\overrightarrow{x}}_{j}^{\prime }}{{\left( {h}^{2} + {\begin{Vmatrix}\overrightarrow{x} - {\overrightarrow{x}}_{j}^{\prime }\end{Vmatrix}}^{2}\right) }^{\frac{3}{2}}}. \tag{14}
153
+ $$
154
+
155
+ The index $j$ denotes vortex source points. Each source point stores a tangential vortex vector and the generated velocity field is measured at the points ${\overrightarrow{x}}_{i}$ . We require the number of collocation points $i$ to exceed twice the number of source points $j$ so that the system of equations is over-determined, which ensures a unique least-squares solution.
156
+
157
+ Unfortunately, this method is not very suitable for real-time applications since it requires computing a pseudo-inverse. Solving this linear system of equations to resolve collisions becomes a significant computational bottleneck.
158
+
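+ For contrast with the matrix-free alternative introduced next, the regression of equations 13 and 14 can be written as an ordinary least-squares solve. The sketch below is our own simplification, parameterizing each tangential vortex vector by two coefficients in an assumed per-source tangent basis:
+
+ ```python
+ import numpy as np
+
+ def solve_sheet_strength(col_pos, col_n, rhs, src_pos, src_area, t1, t2, h=0.1):
+     """Sketch of the least-squares system behind Eqs. 13-14.
+
+     col_pos  : (I, 3) collocation points on the surface
+     col_n    : (I, 3) unit normals at the collocation points
+     rhs      : (I,) target normal velocities, (u_s - u_inf - u_p) . n
+     src_pos  : (J, 3) source points
+     src_area : (J,) area per source point
+     t1, t2   : (J, 3) tangent basis; gamma_j = a_j * t1_j + b_j * t2_j
+     h        : mollification radius
+     Requires I > 2 J so the system is over-determined.
+     """
+     I, J = len(col_pos), len(src_pos)
+     A = np.zeros((I, 2 * J))
+     for j in range(J):
+         r = col_pos - src_pos[j]                        # (I, 3)
+         kern = (h * h + np.sum(r * r, axis=1)) ** -1.5
+         for c, t in enumerate((t1[j], t2[j])):
+             # Normal velocity induced at every collocation point by a
+             # unit tangential strength along this basis vector (Eq. 14).
+             v = src_area[j] * np.cross(t, r) * kern[:, None] / (4 * np.pi)
+             A[:, 2 * j + c] = np.einsum('ij,ij->i', v, col_n)
+     coeffs, *_ = np.linalg.lstsq(A, rhs, rcond=None)    # the expensive solve
+     return coeffs.reshape(J, 2)
+ ```
+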
159
+ [4] introduced an alternative for simply connected closed surfaces where the matrix inversion step is mitigated. This approach is ideal for game applications. We find the optimal Rankine collision field ${\overrightarrow{v}}_{R}$ by treating each source point as a Rankine field source. Then we obtain the Rankine image field by adding the contributions from the source points,
160
+
161
+ $$
162
+ {\overrightarrow{v}}_{R}\left( \overrightarrow{x}\right) = {\int }_{S}\overrightarrow{n} \cdot \left( {{\overrightarrow{u}}_{s} - \left( {{\overrightarrow{u}}_{p} + {\overrightarrow{u}}_{\infty }}\right) }\right) \nabla G\,d\overrightarrow{x}. \tag{15}
163
+ $$
164
+
165
+ <graphics>
166
+
167
+ Figure 4: Source points on collision surfaces (left). The points are moved by the underlying rig without dependence on the surface mesh. We measure the relative velocity of the fluid at the position of the points and calculate the appropriate source strengths to enforce the boundary conditions. Additional vortex particles (right) are advected in the ambient velocity field.
168
+
169
+ Here $G$ is the Rankine Green’s function $1/\begin{Vmatrix}{\overrightarrow{x} - {\overrightarrow{x}}^{\prime }}\end{Vmatrix}$ and ${\overrightarrow{v}}_{R}\left( \overrightarrow{x}\right)$ is the Rankine image velocity field over the surface. By superimposing this field on the free-space solution, the normal flow is minimized as shown in detail by [4]. While this approach only works for each collision surface in isolation, the continual measurement of the surface velocity field on all surfaces ensures that the perturbations to the velocity field created by one object are mapped onto the surface of all other objects in the scene. To carry out this integral, we discretize it by using the mollified Green's function and discrete integration over the source points on the collision surface,
170
+
171
+ $$
172
+ {\overrightarrow{v}}_{R}\left( \overrightarrow{x}\right) = \mathop{\sum }\limits_{j}{A}_{j}{\overrightarrow{n}}_{j} \cdot \left( {{\overrightarrow{u}}_{s}\left( {\overrightarrow{x}}_{j}\right) - {\overrightarrow{u}}_{p}\left( {\overrightarrow{x}}_{j}\right) - {\overrightarrow{u}}_{\infty }\left( {\overrightarrow{x}}_{j}\right) }\right) \nabla G\left( {\overrightarrow{x} - {\overrightarrow{x}}_{j}}\right) . \tag{16}
173
+ $$
174
+
175
+ The Rankine collision field is divergence free but the interesting fluid behaviour associated with surface interactions requires an estimate of the fluid rotation generated in the thin viscous boundary layer that forms in real fluids. We use a simple model to transfer rotation to vortex particles close to the surface. The procedure is shown in Figure 6. The tangential component is extracted from ${\overrightarrow{v}}_{R}$ . The initialized vortex vector is perpendicular to the surface normal and the tangential velocity component. Vortex particles can be initialized with a small normal displacement $\varepsilon$ from the surface. The vorticity is determined such that the generated velocity field cancels the tangential part of ${\overrightarrow{v}}_{R}$ at the surface. The vortex vector is stored on the surface particles and mapped to nearby free-flowing vortex particles with an exponentially decreasing kernel to simulate turbulence generation in the boundary layer.
176
+
177
+ § 3.3 VELOCITY BROADCAST
178
+
179
+ It is possible to create appealing turbulent fluid motion with a small number of vortex particles but we still need a way of representing density.
180
+
181
+ <graphics>
182
+
183
+ Figure 5: A vortex particle is converted to a vortex segment. The velocity is evaluated at each end of the segment. When the segment has been stretched in the velocity field it is converted back into a vortex particle.
184
+
185
+ <graphics>
186
+
187
+ Figure 6: The relative velocity between the surface and the ambient fluid is measured and the no-through Rankine collision field is calculated using equation 16 (left). We initialize vorticity on the surface to match the tangential component of the Rankine velocity (middle), and map vorticity to the particles in the surrounding flow (right).
188
+
189
+ To render the effect, we distinguish between vortex particles and tracer particles. The velocity field is calculated directly on each vortex particle but this is not feasible for the millions of tracer particles used to render the effect. To update the velocity on millions of tracer particles in real time, we found that the best solution is an underlying scratchpad grid. The velocity field is calculated on grid nodes and tri-linear interpolation is used to update the velocity of the tracer particles within the grid. A second-order explicit Adams-Bashforth scheme was used for time integration as this yields better defined turbulence compared to a simple Euler time stepping scheme,
190
+
191
+ $$
192
+ {\overrightarrow{x}}_{t}^{\left( n + 1\right) } = {\overrightarrow{x}}_{t}^{n} + \Delta t\left( {\frac{3}{2}{\overrightarrow{v}}_{t}^{n} - \frac{1}{2}{\overrightarrow{v}}_{t}^{\left( n - 1\right) }}\right) . \tag{17}
193
+ $$
194
+
195
+ The grid is sparse in the sense that each voxel only queries the nearest vortex particles and empty grid regions are cheap to update. It is possible to use large grid domains without a significant compromise to efficiency. Tracer particles simply trace the velocity field and are not needed to update the dynamics. Since each tracer particle only needs to source the velocity from the grid, millions of particles can be traced in real time, though sub-millisecond performance restricts the count to $\sim {10}^{6}$ on an RTX 2080 Max-Q GPU. For all the simulations shown in this work, we use 4000-8000 vortex particles.
196
+
197
+ § 4 RESULTS
198
+
199
+ We have simulated several examples showing the interaction between streams of fluids and different game characters. The motion capture library and character geometries from mixamo.com were used in the simulations. This library contains complex animations like dancing, running and jumping. Different characters with significant shape variations are used to further demonstrate the versatility of the method. With just 4096 vortex particles, our method can simulate fluids with rich dynamics. The physics easily fits within the computational budget of most games and we are able to update millions of tracer particles in real time. Tracing the velocity field constitutes the computational bottleneck of our method. We found that tracer particle counts up to $\sim {10}^{6}$ are feasible for sub-millisecond performance.
200
+
201
+ Table 1: Time measurements of the physics update with different combinations of tracer particles and vortex particles. The number in brackets denotes the upper limit of particle queries admitted for each velocity evaluation. With $\infty$, we denote an unlimited number of queries. Unless explicitly mentioned, the illustrations presented in this work used ${140}^{3}$ tracer particles, ${16}^{3}$ vortex particles, ${100}^{3}$ grid nodes and 32 as the query limit. All steps of our algorithm are implemented on the GPU and only the configuration of the character rig (bone transforms) needs to be transferred on each time step. We list this step separately as access to the bone transforms will be readily available on the GPU for most game applications.
202
+
203
+ Name | Tracer pts / Vortex pts / Grid | Time (ms)
+ Rig Transfer | - | 0.6
+ Laminar Beam | ${100}^{3}/{16}^{3}\left\lbrack {64}\right\rbrack /{100}^{3}$ | <0.3
+ Laminar Beam | ${100}^{3}/{20}^{3}\left\lbrack {64}\right\rbrack /{100}^{3}$ | 0.4
+ Laminar Beam | ${140}^{3}/{20}^{3}\left\lbrack {64}\right\rbrack /{100}^{3}$ | 3.1
+ Laminar Beam | ${140}^{3}/{20}^{3}\left\lbrack {32}\right\rbrack /{100}^{3}$ | 2.3
+ Buoyancy Driven | ${100}^{3}/{32}^{3}\left\lbrack {32}\right\rbrack /{100}^{3}$ | 2.1
+ Buoyancy Driven | ${140}^{3}/{32}^{3}\left\lbrack {32}\right\rbrack /{100}^{3}$ | 5.1
+ Buoyancy Driven | ${140}^{3}/{16}^{3}\left( \infty \right) /{100}^{3}$ | 21.1
232
+
233
+ Figures 1 and 2 depict streams of fluid hitting characters in motion. The turbulence generation from the boundary layer is evident in Figure 2, and our method can also be used to simulate a variety of flame-like effects by seeding vortex particles with random vorticities at the fluid source as shown in Figure 1. Figure 3 shows collisions with a simple sphere object. The source particles on the collision surface create an accurate collision field, though this requires a sufficiently large search radius. The source particles on surfaces are treated like the vortex particles in the surrounding fluid and it is important to include a sufficient number of source particles to resolve the collisions accurately. The side-by-side simulations in Figure 7 show two similar scenarios with different numbers of tracer particles. Including rendering, we can simulate at more than 800 fps with ${10}^{6}$ tracer particles and the dynamics are unchanged by the tracer particle count.
234
+
235
+ § 5 DISCUSSION AND LIMITATIONS
236
+
237
+ Pure vortex particle methods are well suited for real-time fluid simulation in game applications but have not been used widely. The method outlined here is simple to implement and fast enough to fit within the computational budget of most games. In this work, we have explored a limited set of applications, specifically the interactions between characters and fluids, which are a particular challenge for existing methods. By placing vortex particles on surfaces and using the matrix-free collision method, this can be handled easily with the proposed method. Several improvements are possible. The placement of source particles on collision geometry is fixed at runtime, yet it could be advantageous to place them dynamically in the vicinity of fluid density. In particular, this may be required for large game worlds where all surfaces could potentially be collision surfaces. The continuous velocity fields created by the vortex particles are a decidedly advantageous feature of pure vortex methods. Since the evaluation of the velocity field represents the computational bottleneck, the number of velocity samples can be adapted to fit the computational budget and the level of detail needed for a particular application. A continuous velocity field entails that it is easy to swap the tracer particles for other density representations such as grid based density fields or texture representations. Vortex methods are well suited for gaseous fluids but they are difficult to adapt to other fluid phenomena. In particular, the inclusion of a free surface required for liquids is not straight-forward, although the approach of [13] is an example of a vortex method for liquids. Such approaches are not necessarily more efficient than their velocity based counterparts.
238
+
239
+ <graphics>
240
+
241
+ Figure 7: 4096 vortex particles instigate detailed fluid motion in a field of tracer particles. The dynamics are unchanged by the number of tracer particles. ${10}^{6}$ tracer particles are used in the left image and ${2.74} \times {10}^{6}$ tracer particles are used on the right. The left simulation including rendering runs at $\sim {900}$ fps and the right simulation runs at $\sim {300}$ fps.
242
+
243
+ § 6 CONCLUSION
244
+
245
+ We have presented a vortex based fluid solver capable of handling the intricate collisions between fluids and game characters. It is the first method to specifically target these interactions and resolve them at a high level of detail. Our method is fast enough for practical use in interactive applications like games or VR and represents an additional step towards bringing realism to fluid simulations for these kinds of applications.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/CT27gkIMlKU/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,222 @@
1
+ # Normal Maps for Rendering Vast Ocean Scenes
2
+
3
+ Category: Research
4
+
5
+ ## Abstract
6
+
7
+ Maritime scene simulations frequently use a height-field representation of the ocean surface. Many scenarios create a visible surface over large areas, with high amounts of detail for a camera close to the surface. Efficient rendering of a vast ocean like this makes use of Level of Detail (LOD) degradation of the polygonal tessellation of the surface. But LOD degradation can have consequences in the rendered scene, particularly near the horizon, for both qualitative and quantitative metrics. The magnitude of these impacts depends on the specifics of the ocean surface conditions, and on the structure of the sky light illuminating the surface. Here we present a method of extending the concept of normal mapping to efficiently restore full spatial resolution of the surface normals to the LOD-degraded surface. The impact of this normal mapping process is evaluated for qualitative and quantitative metrics, across a collection of ocean surface random realizations and for a collection of sky illumination patterns. Specific cases are presented in detail, and a summary assessment of the impact of 93 simulations is presented.
8
+
9
+ Index Terms: Computing Methodologies—Computer Graphics—Rendering—Reflectance Modeling; Computing Methodologies—Computer Graphics—Rendering—Ray Tracing; Applied Computing—Physical Sciences and Engineering—Earth and Atmospheric Sciences—Environmental Sciences
10
+
11
+ ## 1 INTRODUCTION
12
+
13
+ There are many graphics applications that employ realistic simulation and rendering of ocean surfaces. The VFX industry [9] and game industry [2] have applied phenomenological models of height-based ocean surfaces for a number of years. Ocean surface simulation is also used in engineering applications for ship operations trainers [3], assessment of remote sensing concepts and systems [14], and AI training of detection and classification systems [11].
14
+
15
+ The height-field approach, while present in many applications, is limited in physical fidelity because it is based on linearized Bernoulli wave theory, and so is not capable of simulating wave breaking, whitecaps, foam, or vortical motion. Its applicability to creating a vast ocean scene, meaning an ocean visible from close to camera out to the horizon, relies on phenomenological oceanographic observations of the statistical properties of ocean surfaces, treated as a random process in time and space. These empirical properties are emulated by random realizations of height fields that evolve according to a dispersion relationship, i.e. linearized Bernoulli theory of a free surface. Of course, the statistical description lacks the impact of complex nonlinear motion of the surface that only occurs transiently. Some applications supplement the height-field with a more complete CFD simulation that is either blended with the height-field surface, or driven partially by the height field realization [1, 12]. This 3D simulation is particularly useful at locations near the camera in a rendered maritime scene, but much less important at mid-range and near-horizon distances from the camera.
16
+
17
+ The creation and rendering of maritime scenes in computer graphics involves describing an ocean over regions of potentially hundreds of square kilometers. For example, for a camera located two meters above the mean ocean level, the distance to the horizon on Earth is approximately $5\mathrm{\;{km}}$ , and the horizon distance for a camera at 100 meters above the mean ocean is approximately ${35}\mathrm{\;{km}}$ . The potentially viewable surface area for a $5\mathrm{\;{km}}$ horizon is approximately 80 ${\mathrm{{km}}}^{2}$ , and for a ${35}\mathrm{\;{km}}$ horizon, over ${4000}{\mathrm{\;{km}}}^{2}$ . Construction of a dynamic free-surface in a 3D CFD simulation over this vast scale, with detail sufficient for a camera at a height of meters to hundreds of meters above the surface, has severe practical limitations that the height-field approximation addresses.
18
+
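+ As a quick sanity check on these figures (our own arithmetic, assuming a smooth sphere of radius $R \approx 6371\,\mathrm{km}$ and ignoring atmospheric refraction), the standard horizon approximation is
+
+ $$
+ d \approx \sqrt{2Rh},\qquad A \approx \pi d^{2}
+ $$
+
+ so $h = 2\,\mathrm{m}$ gives $d \approx 5.0\,\mathrm{km}$ and $A \approx 80\,\mathrm{km}^{2}$ , while $h = 100\,\mathrm{m}$ gives $d \approx 35.7\,\mathrm{km}$ and $A \approx 4000\,\mathrm{km}^{2}$ , consistent with the values above.
+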
19
+ There are three properties of height-field ocean simulations that allow for practical construction of ocean surfaces over this vast scale. First, when the Fast Fourier Transform method [6] is used to create a patch of ocean surface height field, the properties of the FFT produce a patch of surface that is periodic and can be applied as a tile to cover any desired area, including thousands of square kilometers. Such a repetition over vast areas is known to produce visual artifacts, in which prominent waves appear in a repeating pattern. This is overcome by the second property: that multiple realizations, when created at spatially-disparate resolutions, can be added together, suppressing the repetitive artifact. This property is made possible by the fact that the height field dynamics is a linearized theory, and so the sum of several height field realizations is itself an equally valid height field realization. By choosing the repetition rates of the individual realizations appropriately, the effective repetition distance for the combined height field can be made to be hundreds of $\mathrm{{km}}$ using only 2 or 3 realizations [7], eliminating the artifact for the distances considered in this paper. The third property relates directly to the efficiency of the task of rendering a scene with a vast ocean. When the height field is tessellated into polygons (triangles or quads), standard methods of Level of Detail (LOD) can be employed to sample the height field with larger and larger polygons as the distance from the camera increases. Tessellation allows the rendering system to use the fastest and most efficient ray intersection acceleration structures suitable for a problem. In some applications, the ray tracing task computes the intersection directly against the height field data using a type of acceleration structure [8], eliminating the need for tessellation. Ray-height-field-intersection in this fashion is not as efficient as the approach for a tessellated height field. Also, for height fields that have some small amount of horizontal displacement, and height fields that are "wrapped" onto curved surfaces such as a spherical earth, the performance of ray-height-field-intersection degrades, whereas ray tracing a tessellated scene is unaffected by those conditions.
20
+
21
+ The application of LOD to the pattern of tessellation has consequences, however. Larger polygons lack detail of the height variations, and so the positions of ray intersections are shifted somewhat, producing a phenomenon known as "wave hiding", i.e. there may be regions in the height field, in the foreground of the point of intersection with the LOD tessellated polygon, that would have intersected the ray had they been represented by the tessellation. Ocean surface rendering typically handles light reflection and refraction using Fresnel reflectivity and transmissivity, which is very sensitive to the surface normal. But LOD tessellation loses the surface normal variations across the surface of the polygon, and interpolating the vertex normals recovers very little of that detail. Because the larger polygons are distant from the camera, there can be an expectation that these losses of surface detail have negligible impact on the rendered image. As shown in the examples of this paper, this expectation is borne out in some cases, but in most circumstances there is an impact, both visually and quantifiably.
22
+
23
+ This paper focuses on the issue of restoring the surface normal detail in the rendering of LOD tessellation surfaces in scenes of vast oceans, providing visual and quantitative measures of the impact of restoring that detail. The approach is to apply a variation of the concept of normal maps [4], which are a tool for establishing, altering, and controlling detail during rendering. Normal maps are typically generated and stored in a texture image. In the application here, there is no need to generate such a texture image. Instead, the original wave height simulation data can be used to generate a surface normal at any location on the surface by storing horizontal positions as vertex texture coordinates in the tessellated geometry, and reconstructing the surface normal at any location from the interpolated texture coordinate at the location of the ray-polygon intersection. No additional data is generated in preparation for rendering, and the render-time impact of on-the-fly normal construction is modest; as noted below, it can be offset in some cases by reduced time spent in construction of the ray trace acceleration structure.
24
+
25
+ To provide visual and quantitative measures of the impact of this form of normal mapping, four rendering scenarios are produced:
26
+
27
+ 1. LOD tessellate with small polygons and low amounts of LOD degradation (high resolution), and render with normal mapping.
28
+
29
+ 2. LOD tessellate with small polygons and low amounts of LOD degradation (high resolution), and render without normal mapping.
30
+
31
+ 3. LOD tessellate with modest polygons and high amounts of LOD degradation (low resolution), and render with normal mapping.
32
+
33
+ 4. LOD tessellate with modest polygons and high amounts of LOD degradation (low resolution), and render without normal mapping.
34
+
35
+ The four scenarios are created for a collection of 93 cases with randomly varying ocean surface conditions and sky illumination conditions. A visual comparison of the four scenarios for each case shows the relative impact of normal mapping and tessellation detail. Taking scenario 1 as a baseline, for each case the variance of the difference between scenario 1 and each of scenarios 2, 3, and 4 provides a quantitative assessment of the impact of normal mapping, particularly near the horizon. For this analysis, "near the horizon" is considered the range of elevations from 5 degrees below the horizon up to the horizon.
36
+
37
+ In the next section, the process of using a linear wave height field description of an ocean free surface is presented. This includes assembling the ocean from multiple "layers" of realizations, applying horizontal displacement if desired, and computing the surface normal from the combination of the surface layers. That is followed by an examination of one possible LOD tessellation process. Many tessellation schemes are possible, but the issues presented above about loss of detail in LOD tessellation apply to all, and normal mapping is applicable to all of them. In section 4 the specific implementation of normal mapping, as it applies to this specific problem, is presented, and in section 5 the impact of normal mapping on the visual and quantitative assessment is presented for a representative few of the 93 cases evaluated. The paper concludes in section 6 with an assessment of the quantitative improvements from normal mapping for all 93 cases generated.
38
+
39
+ ## 2 ASSEMBLING A VAST OCEAN
40
+
41
+ Ocean surfaces represented as a height field have been in use for some time [6]. Such a representation is based on a phenomenological model of the statistical properties of the height. This leads to a Fourier-domain representation for the height field as
42
+
43
+ $$
44
+ h\left( {\mathbf{x}, t}\right) = \int \frac{{d}^{2}k}{{\left( 2\pi \right) }^{2}}\widetilde{h}\left( {\mathbf{k}, t}\right) \exp \left( {i\mathbf{k} \cdot \mathbf{x}}\right) \tag{1}
45
+ $$
46
+
47
+ where the height $h$ at the horizontal position $\mathbf{x} \equiv \left( {x, z}\right)$ on the ocean surface is the Fourier transform of a complex height amplitude $\widetilde{h}$ as a function of a 2D Fourier wavevector $\mathbf{k}$ . The time-dependent amplitude is assembled from random time-independent amplitudes ${\widetilde{h}}_{0}\left( \mathbf{k}\right)$ and a dispersion relation $\omega \left( k\right)$
48
+
49
+ $$
50
+ \widetilde{h}\left( {\mathbf{k}, t}\right) = {\widetilde{h}}_{0}\left( \mathbf{k}\right) \exp \left( {{i\omega }\left( k\right) t}\right) + {\widetilde{h}}_{0}^{ * }\left( {-\mathbf{k}}\right) \exp \left( {-{i\omega }\left( k\right) t}\right) \tag{2}
51
+ $$
52
+
53
+ and $k$ is the magnitude of the 2D wavevector $\mathbf{k}$ . In turn, the complex height amplitudes ${\widetilde{h}}_{0}\left( \mathbf{k}\right)$ are a random realization of complex values from a distribution that has a phenomenologically-prescribed spatial spectrum $P\left( \mathbf{k}\right)$ . There are a variety of spatial spectra that have been used for this application [9, 10].
54
+
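+ As a concrete illustration of equations 1 and 2 on a discrete grid, here is a minimal numpy sketch; the function names, the deep-water dispersion $\omega \left( k\right) = \sqrt{gk}$ , and the caller-supplied `spectrum` callable are our assumptions for illustration, not the paper's implementation.
+
+ ```python
+ import numpy as np
+
+ def h0_realization(n, extent, spectrum, rng):
+     # Random time-independent amplitudes h0(k) on an n x n FFT grid of
+     # physical size extent x extent, drawn from a prescribed P(k).
+     k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=extent / n)
+     kx, kz = np.meshgrid(k1d, k1d, indexing="ij")
+     xi = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
+     h0 = xi * np.sqrt(spectrum(kx, kz) / 2.0)
+     h0[0, 0] = 0.0  # no constant offset of the mean surface
+     return h0, kx, kz
+
+ def height_field(h0, kx, kz, t, g=9.81):
+     # Equation 2 with deep-water dispersion omega(k) = sqrt(g k), then
+     # equation 1 as an inverse FFT (normalization conventions vary).
+     omega = np.sqrt(g * np.hypot(kx, kz))
+     # h0*(-k): reflect both FFT axes, i.e. index i -> (n - i) mod n.
+     h0_neg_conj = np.conj(np.roll(np.flip(h0, axis=(0, 1)), 1, axis=(0, 1)))
+     ht = h0 * np.exp(1j * omega * t) + h0_neg_conj * np.exp(-1j * omega * t)
+     return np.real(np.fft.ifft2(ht))  # ht is Hermitian; imag part is round-off
+ ```
+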
55
+ This height field representation is sometimes supplemented with horizontal displacements of the surface, using a logic based on Gerstner waves that constructs the 2D horizontal displacement $\mathbf{D}\left( {\mathbf{x}, t}\right)$ at any point from the height field in the Fourier space representation as
56
+
57
+ $$
58
+ \mathbf{D}\left( {\mathbf{x}, t}\right) = {f}_{d}\int \frac{{d}^{2}k}{{\left( 2\pi \right) }^{2}}\left( {-i\frac{\mathbf{k}}{k}}\right) \widetilde{h}\left( {\mathbf{k}, t}\right) \exp \left( {i\mathbf{k} \cdot \mathbf{x}}\right) \tag{3}
59
+ $$
60
+
61
+ and ${f}_{d}$ is a user-specified dimensionless displacement scaling parameter. With this displacement, the 3D position of the ocean surface for the "nominal" flat-plane coordinate $\mathbf{x}$ is
62
+
63
+ $$
64
+ \mathbf{X}\left( {\mathbf{x}, t}\right) = \mathbf{x} + \mathbf{D}\left( {\mathbf{x}, t}\right) + \widehat{\mathbf{y}}h\left( {\mathbf{x}, t}\right) \tag{4}
65
+ $$
66
+
67
+ and $\widehat{\mathbf{y}}$ is the unit vector pointing upward.
68
+
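+ Continuing the sketch above, equation 3 is one inverse FFT per horizontal component, with each evolved mode $\widetilde{h}\left( {\mathbf{k}, t}\right)$ scaled by $-i\mathbf{k}/k$ ; again this is illustrative code under our naming, with the $k = 0$ mode excluded since it carries no displacement.
+
+ ```python
+ import numpy as np
+
+ def displacement_field(ht, kx, kz, f_d=1.0):
+     # Equation 3: horizontal displacement D(x,t) from the evolved
+     # amplitudes h~(k,t) (the array `ht` built inside height_field).
+     kmag = np.hypot(kx, kz)
+     kmag_safe = np.where(kmag > 0.0, kmag, 1.0)  # guard the k = 0 mode
+     dx = f_d * np.real(np.fft.ifft2(-1j * (kx / kmag_safe) * ht))
+     dz = f_d * np.real(np.fft.ifft2(-1j * (kz / kmag_safe) * ht))
+     return dx, dz
+ ```
+
+ Equation 4 then assembles the 3D surface position by adding these components and the height to the nominal grid coordinate.
+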
69
+ In numerical implementations, the Fourier transforms in equations 1 and 3 are replaced with Fast Fourier Transforms (FFTs), which generate the height and displacement fields on a rectangular spatial grid with a user-chosen number of grid points and spatial extent, summing over the discrete set of wave vectors determined by that grid point count and spatial extent, up to the Nyquist limit. Evaluating quantities at locations that are not grid points is accomplished via bilinear interpolation. This gridded height field is also spatially periodic as a result of the FFT computation. The periodicity can be used as a tiling scheme to extend the ocean surface beyond the nominal bounds of the FFT domain. An unfortunate consequence of the periodicity of the tile pattern is that visualizations of the ocean surface can have noticeable repetitions of prominent waves in the scene. This is overcome by generating multiple random realizations of height, ${h}_{i}\left( {\mathbf{x}, t}\right)$ , and corresponding displacements, ${\mathbf{D}}_{i}\left( {\mathbf{x}, t}\right)$ , for $i = 0,\ldots , N - 1$ , with different choices of spatial extent and periodicity for the realizations. The full surface is assembled as the sum of these "layers":
70
+
71
+ $$
72
+ h\left( {\mathbf{x}, t}\right) = \mathop{\sum }\limits_{{i = 0}}^{{N - 1}}{h}_{i}\left( {\mathbf{x}, t}\right) \tag{5}
73
+ $$
74
+
75
+ $$
76
+ \mathbf{D}\left( {\mathbf{x}, t}\right) = \mathop{\sum }\limits_{{i = 0}}^{{N - 1}}{\mathbf{D}}_{i}\left( {\mathbf{x}, t}\right) \tag{6}
77
+ $$
78
+
79
+ When the spatial extents of the realizations are not related via integer ratios, the repetition of waves can be reduced or completely eliminated. This makes it possible to visually represent a vast ocean expanse even with only a few realizations, e.g. $N = 2$ or 3, free from repetition artifacts.
80
+
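+ A short sketch of the two mechanics described above, periodic (tiling) bilinear interpolation of a single FFT grid and the layer sum of equation 5 (equation 6 is identical per displacement component); the helper names are ours.
+
+ ```python
+ import numpy as np
+
+ def sample_periodic(grid, x, z, extent):
+     # Bilinear interpolation of a periodic FFT grid at world position
+     # (x, z); the modulo wrap is exactly the tiling described above.
+     n = grid.shape[0]
+     u, v = x / extent * n, z / extent * n
+     i0, j0 = int(np.floor(u)), int(np.floor(v))
+     fu, fv = u - i0, v - j0
+     i0, j0 = i0 % n, j0 % n
+     i1, j1 = (i0 + 1) % n, (j0 + 1) % n
+     return ((1 - fu) * (1 - fv) * grid[i0, j0] + fu * (1 - fv) * grid[i1, j0]
+             + (1 - fu) * fv * grid[i0, j1] + fu * fv * grid[i1, j1])
+
+ def total_height(layers, x, z):
+     # Equation 5: `layers` is a list of (height_grid, extent_i) pairs whose
+     # extents have non-integer ratios to suppress visible repetition.
+     return sum(sample_periodic(h, x, z, L) for h, L in layers)
+ ```
+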
81
+ The normal for the displaced surface is computed from the expression
84
+
85
+ $$
86
+ {\widehat{\mathbf{n}}}_{S}\left( {\mathbf{x}, t}\right) = \frac{\partial \mathbf{X}}{\partial x} \times \frac{\partial \mathbf{X}}{\partial z}/\left| {\frac{\partial \mathbf{X}}{\partial x} \times \frac{\partial \mathbf{X}}{\partial z}}\right| \tag{7}
87
+ $$
88
+
89
+ with partial derivatives obtained in practice either by finite differences, or by the more accurate FFT evaluation of the derivatives. For the examples shown here, the FFT approach was used to compute additional data for the spatial gradients for each layer.
90
+
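+ The FFT route to these gradients is a per-mode multiplication by $i{k}_{x}$ (and $i{k}_{z}$ ); a sketch for one layer, with equation 7 applied to the resulting tangents (our illustrative code and names):
+
+ ```python
+ import numpy as np
+
+ def spectral_gradients(ht, kx, kz):
+     # Exact Fourier-space derivatives of one height layer: each mode of
+     # h~(k,t) is multiplied by i*kx (respectively i*kz).
+     dhdx = np.real(np.fft.ifft2(1j * kx * ht))
+     dhdz = np.real(np.fft.ifft2(1j * kz * ht))
+     return dhdx, dhdz
+
+ def normal_from_tangents(dXdx, dXdz):
+     # Equation 7: normalized cross product of the two surface tangents;
+     # flip the argument order if the result points below the surface in
+     # your coordinate convention.
+     n = np.cross(dXdx, dXdz)
+     return n / np.linalg.norm(n)
+ ```
+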
91
+ Rendering a maritime scene using Global Illumination algorithms, such as Monte Carlo path tracing, is assisted by tessellating the ocean surface into polygons. Here the discussion is focused on tessellation into triangles, but the results apply equally to other choices of polygonalization. The tessellation lays out a network of grid points in the $\mathbf{x}$ coordinate, i.e. ${\mathbf{x}}_{i}$ , that are arranged in collections of triangles in the flat 2D plane. Each vertex $i$ of the ocean surface tessellated geometry holds, among other possible rendering-related information, the 3D position ${\mathbf{X}}_{i} = \mathbf{X}\left( {{\mathbf{x}}_{i}, t}\right)$ , surface normal ${\widehat{\mathbf{n}}}_{Si} = {\widehat{\mathbf{n}}}_{S}\left( {{\mathbf{x}}_{i}, t}\right)$ , and texture coordinate ${\mathbf{x}}_{i}$ . For a ray intersecting one of the triangles, the vertex normals can be interpolated to produce a surface normal at the location of the ray intersection, for use in shading and/or reflection and refraction of rays.
92
+
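+ For reference, that conventional interpolated normal is the barycentric blend of the three vertex normals; a standard construction, sketched here in our notation:
+
+ ```python
+ import numpy as np
+
+ def interpolated_vertex_normal(n0, n1, n2, bary):
+     # Barycentric interpolation of the vertex normals at a ray-triangle
+     # hit, renormalized to unit length.
+     n = bary[0] * n0 + bary[1] * n1 + bary[2] * n2
+     return n / np.linalg.norm(n)
+ ```
+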
93
+ The choice of tessellation pattern is very dependent on the scene content within the camera field of view. For regions close to the camera, a reasonable choice is to generate triangles with sides roughly the same as the smallest grid spacing in the set of height fields used, although in some cases finer detail may be of interest because the horizontal displacements can compress together regions near the peaks of waves. However, at great distance from the camera, Level of Detail (LOD) schemes are valuable where the camera cannot resolve small triangles. LOD tessellation is essential to render a vast ocean efficiently. The smallest details usually present in phenomenological ocean spectra are around a few cm in size. If triangles with $3\mathrm{\;{cm}}$ edges are tessellated from a camera located $2\mathrm{\;m}$ above the mean ocean surface to the horizon $5\mathrm{\;{km}}$ away, the number of triangles that must be generated is on the order of ${10}^{10}$ , although the exact count would depend on the choice of tessellation pattern. One LOD scheme to reduce the number of triangles is the doubling method, in which a user-specified "double-distance" gives the range at which the size of the triangles and the double-distance are both doubled. Figure 1 shows the doubling method for two different tessellation patterns. For example, if the triangle size close to the camera is $3\mathrm{\;{cm}}$ and the double-distance is ${14}\mathrm{\;m}$ , triangles beyond ${14}\mathrm{\;m}$ are doubled in size to $6\mathrm{\;{cm}}$ , and the double-distance is increased to ${28}\mathrm{\;m}$ . Beyond ${42}\mathrm{\;m}$ the triangles are doubled to ${12}\mathrm{\;{cm}}$ and the double-distance is doubled to ${56}\mathrm{\;m}$ , and so on until tessellation terminates at the furthest desired distance. In this example extending out to the horizon at $5\mathrm{\;{km}}$ , doubling happens 6 times and the triangle size at the furthest distance, near the horizon, is ${1.9}\mathrm{\;m}$ . The number of triangles is reduced from around ${10}^{10}$ to around ${10}^{8}$ , which is a large but manageable number of triangles in current Monte Carlo path tracing software. Even so, ${10}^{8}$ triangles take time to assemble and distribute in an acceleration structure such as a BVH tree, and carry a substantial memory burden. More aggressive doubling can further reduce the resources needed. If the above example starts with triangles with resolution $3\mathrm{\;{cm}}$ and a double-distance of only $6\mathrm{\;m}$ , the triangle count reduces to ${20}\%$ of the number for the ${14}\mathrm{\;m}$ double-distance. Similarly, keeping a double-distance of ${14}\mathrm{\;m}$ but increasing the smallest resolution to ${90}\mathrm{\;{cm}}$ reduces the triangle count to $3\%$ of the original. Aggressive application of LOD has consequences, however.
94
+
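+ Before turning to those consequences, here is a minimal sketch of the doubling rule as stated above (the structure is ours; the band boundaries for the $3\mathrm{\;{cm}}$ / ${14}\mathrm{\;m}$ example fall out directly):
+
+ ```python
+ def doubling_bands(base_size, double_distance, max_distance):
+     # Each band spans the current double-distance; past it, both the
+     # triangle size and the double-distance are doubled, until the
+     # tessellation reaches the furthest desired distance.
+     bands, size, span, start = [], base_size, double_distance, 0.0
+     while start < max_distance:
+         end = min(start + span, max_distance)
+         bands.append((start, end, size))
+         start, size, span = end, size * 2.0, span * 2.0
+     return bands
+
+ # doubling_bands(0.03, 14.0, 5000.0) begins (0.0, 14.0, 0.03),
+ # (14.0, 42.0, 0.06), (42.0, 98.0, 0.12), ... as in the text.
+ ```
+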
95
+ ## 3 CONSEQUENCES OF LOD TESSELLATION NEAR THE HORIZON
96
+
97
+ Lighting of maritime scenes can be important near the horizon. When the sun is low in the sky, the brightest part of the sky is near the horizon, and the largest gradients of the light field are near the horizon. When the sun is high in the sky, the horizon still has substantial lighting impact because of volumetric scattering of the sunlight by the atmosphere. The loss of wave detail near the horizon due to LOD tessellation could lead to biased and/or incorrect rendering of near-horizon lighting. This concern is amplified by the fact that the reflectivity, direction of refraction, and direction of reflection at ray-intersection points with water surfaces are very sensitive to the surface normal. In turn, the surface normal is obtained by interpolation of vertex normals, increasing the potentially negative impact of LOD tessellation. Figure 2 demonstrates the visual impact of LOD tessellation, showing an ocean surface and sky rendered in a Monte Carlo path tracer with two different choices of near-camera triangle size.
98
+
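+ To see why the surface normal matters so much near the horizon, consider the unpolarized Fresnel reflectance of an air-water interface, sketched below with an assumed index of refraction of 1.33 (a standard formula, not code from the paper):
+
+ ```python
+ import numpy as np
+
+ def fresnel_reflectance(cos_i, n1=1.0, n2=1.33):
+     # Unpolarized Fresnel reflectance: average of the s- and p-polarized
+     # terms, with Snell's law supplying the transmitted angle.
+     cos_i = np.clip(cos_i, 0.0, 1.0)
+     sin_t = (n1 / n2) * np.sqrt(1.0 - cos_i ** 2)
+     cos_t = np.sqrt(1.0 - sin_t ** 2)
+     r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
+     r_p = (n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)
+     return 0.5 * (r_s ** 2 + r_p ** 2)
+
+ # Near grazing incidence, the geometry rays have at the horizon, the
+ # reflectance varies steeply: fresnel_reflectance(0.5) is roughly 0.06,
+ # while fresnel_reflectance(0.05) is roughly 0.73, so a small tilt of
+ # the normal changes the reflected radiance substantially.
+ ```
+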
99
+ ![01963e65-600e-7489-99aa-9c40c993b3ad_2_960_148_651_711_0.jpg](images/01963e65-600e-7489-99aa-9c40c993b3ad_2_960_148_651_711_0.jpg)
100
+
101
+ Figure 1: Two examples of tessellation patterns that use the doubling method. Top: Tessellation around the field of view of a perspective camera. Bottom: Tessellation in all directions in a square pattern of nested grids.
102
+
103
+ ## 4 NORMAL MAPPING FOR OCEAN SURFACE RENDERING
104
+
105
+ A very successful way of adding and controlling detail in the shading of a surface is to inject normal maps into the shading algorithm. The normal arising from a normal map can modify or completely replace the interpolated vertex normal, and can be incorporated into rendering pipelines as an encoded texture. However, for rendering vast ocean surfaces with LOD tessellation, it is not necessary to generate a special-purpose texture for normal mapping, and in fact such a texture would potentially be very large when repetitive artifacts have been suppressed. Instead, we can continue to use the height $h\left( {\mathbf{x}, t}\right)$ and displacement $\mathbf{D}\left( {\mathbf{x}, t}\right)$ fields, composed of multiple layers ${h}_{i}$ and ${\mathbf{D}}_{i}$ , and their spatial gradients. At the location of the intersection of a ray with a triangle, the texture coordinates on the triangle are interpolated to produce a nominal texture-coordinate horizontal position $\mathbf{x}$ at the intersection point. This coordinate can be applied in equation 7 with the ocean realization data to compute a normal at the intersection point. This normal is used for all subsequent shading and path spawning operations.
106
+
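+ The shading-time step is small; here is a sketch assuming a pure height field (no horizontal displacement), where `summed_slopes(x, z)` is a hypothetical callable returning the layer-summed $\partial h/\partial x$ and $\partial h/\partial z$ sampled as in section 2.
+
+ ```python
+ import numpy as np
+
+ def hit_point_normal(tri_uv, bary, summed_slopes):
+     # tri_uv: (3, 2) vertex texture coordinates (the nominal horizontal
+     # positions x_i); bary: (3,) barycentric weights of the ray hit.
+     x, z = bary @ tri_uv            # interpolated texture coordinate
+     sx, sz = summed_slopes(x, z)    # full-resolution slopes at the hit
+     # For X = (x, h, z), equation 7 reduces to the familiar height-field
+     # normal, oriented upward (+y here):
+     n = np.array([-sx, 1.0, -sz])
+     return n / np.linalg.norm(n)
+ ```
+
+ This normal then stands in for the interpolated vertex normal in shading and in spawning reflected and refracted rays.
+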
107
+ This normal, computed on the fly at each ray-triangle intersection, contains all of the spatial detail in the ocean realization. Fresnel optical properties are sensitive to the surface normal, so capturing this spatial detail has important benefits, as demonstrated in the sections below. However, it does not capture the "hiding" effects that the full height field would include, in which rays may intersect the surface earlier or later than the triangle intersection as a consequence of the lost height-field detail.
108
+
109
+ ![01963e65-600e-7489-99aa-9c40c993b3ad_3_149_152_1507_390_0.jpg](images/01963e65-600e-7489-99aa-9c40c993b3ad_3_149_152_1507_390_0.jpg)
110
+
111
+ Figure 2: Two renders of the same ocean with differing amounts of tessellation. The camera is $2\mathrm{\;m}$ above the mean ocean surface, and tessellation extends to the horizon $5\mathrm{\;{km}}$ away. Left: Near-camera triangle size of $3\mathrm{\;{cm}}$ and double-distance of ${18}\mathrm{\;m}$ (6 generations of doubling, with ${1.9}\mathrm{\;m}$ triangles near the horizon). Right: Near-camera triangle size of ${90}\mathrm{\;{cm}}$ and double-distance of ${18}\mathrm{\;m}$ (6 generations of doubling, with ${58}\mathrm{\;m}$ triangles at the horizon). The camera has a 360 degree field of view with equirectangular projection, and these frames are cropped from the full images.
112
+
113
+ ## 5 IMPACT OF NORMAL MAPPING ON OCEAN SURFACE RENDERING
114
+
115
+ The examples in this paper compare two tessellations, for a variety of lighting and ocean conditions. The double-distance is ${18}\mathrm{\;m}$ for both tessellations. The high resolution tessellation has near-camera triangles $3\mathrm{\;{cm}}$ in size, which grow to ${1.9}\mathrm{\;m}$ at the horizon $5\mathrm{\;{km}}$ from the camera, for a total of 22,341,600 triangles from the camera to the horizon in all directions. The low resolution tessellation has near-camera triangles with size ${90}\mathrm{\;{cm}}$ , which grow to ${58}\mathrm{\;m}$ at the horizon, for a total of 25,520 triangles, roughly 0.11% of the number of triangles for the high resolution tessellation. The Monte Carlo path tracer used 1000 samples per pixel. Each intersection with an ocean surface triangle generated a Fresnel reflection and refraction. Each Monte Carlo path was limited to no more than 10 segments because initial tests found no significant additional contribution from paths with more segments. The camera has a full 360 degree spherical field of view in order to capture the impact of resolution and normal mapping throughout the environment. The only lighting of the scene was from Image Based Lighting (IBL) [5] with 360 degree sky maps composed from 360 degree photos with ground clutter removed. Figure 3 shows two of the IBL skies used. In these images, the horizontal bisector is at the horizon; the top of the image looks straight up, and the bottom looks straight down.
116
+
117
+ A collection of 93 variations of maritime conditions and sky IBL maps was generated. The ocean realizations were based on the TMA spectrum [10], with randomly generated spectrum parameters. The IBL sky was chosen from a collection of eighteen sky maps. Here we show specific results from five of the 93 variations, chosen to illustrate the range of outcomes found. The TMA spectrum parameters for each case are in Table 1.
118
+
119
+ The variations have been evaluated based on two criteria for the impact of resolution and normal mapping near the horizon. The visual impact criterion compares the rendered images side-by-side to show the qualitative relative contributions of tessellation resolution and normal mapping. A quantitative statistical criterion treats the high resolution normal mapped render as a baseline, and statistically computes the mean and standard deviation of the relative luminance [13] difference between the baseline and each of the other three renders, for each case. Both criteria are presented below for five chosen illustrative cases. Figures 4 and 5 show one case which demonstrates the impact of normal mapping, particularly when the strongest light in the sky is near the horizon. The visual demonstration in figure 4 shows that the low-resolution tessellation with normal mapping (bottom-middle image) produces very nearly the same image detail as the high-resolution tessellation, whether the high-resolution tessellation case is normal mapped (top-middle image) or not (top image). This visual appearance is borne out in the quantitative evaluation of the difference between the high tessellation, normal mapped render and each of the three other options (high tessellation without normal mapping, low tessellation with and without normal mapping). Figure 5 shows that difference, azimuthally accumulated into the mean and standard deviation of the relative luminance difference. The data is plotted for elevations from the horizon to five degrees below the horizon.
120
+
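+ A sketch of this statistic as we understand it, for an equirectangular render with the horizon on the vertical midline; the Rec. 709 luminance weights follow [13], and the array layout is our assumption.
+
+ ```python
+ import numpy as np
+
+ def near_horizon_stats(baseline_rgb, test_rgb, deg_below=5.0):
+     # Relative luminance difference, then per-elevation-row mean and
+     # standard deviation accumulated over azimuth (the image columns).
+     w = np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 luminance weights
+     d_lum = (baseline_rgb - test_rgb) @ w    # shape: (rows, cols)
+     rows = d_lum.shape[0]
+     horizon = rows // 2                      # midline row = elevation 0
+     band = max(1, round(rows * deg_below / 180.0))
+     strip = d_lum[horizon:horizon + band, :] # horizon down to deg_below
+     return strip.mean(axis=1), strip.std(axis=1)
+ ```
+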
121
+ In this case, and in all of the cases, the green curve, representing the high tessellation without normal mapping, is most similar to the baseline high tessellation normal mapped image. Also true in all cases is that the low tessellation normal mapped result (blue curve) better matches the baseline than low tessellation without normal mapping (yellow curve). In this particular case of figure 5, normal mapping reduced the low tessellation standard deviation by a factor of 5 to 10. This case demonstrates clear and substantial impacts from normal mapping. But the impact is dependent on both ocean surface conditions and lighting conditions. The four cases below show more outcomes from variations of sky and surface parameter choices.
122
+
123
+ Figure 6 shows an environment with an overcast sky, relatively uniform intensity across the IBL image, low windspeed, and low wave height. The four images (high tessellation with and without mapping, low tessellation with and without mapping) are all very similar to each other. The statistical behavior in figure 7 shows that the low tessellation cases follow each other closely, although the low-tessellation-mapped data have lower standard deviation than the low-tessellation-not-mapped data at all elevations. Note, however, that wave height is not the single key factor driving improvement from normal mapping: the case in figures 4 and 5 also has low wave height with substantial improvement from normal mapping, while having more variation of lighting than the current case.
124
+
125
+ The environment in figure 8 contains a partly cloudy sky, stronger windspeed than the case of figure 6, and mild wave heights. The visual improvement from normal mapping is significant. The low-tessellation-mapped result has visible differences from the baseline near the horizon, although much less pronounced than the low-tessellation-unmapped image. The standard deviation in figure 9 shows about ${20}\%$ improvement from mapping.
126
+
127
+ The case in figure 11 has the same sky as figure 8 and similar windspeed and RMS wave height. But this case has much shorter fetch and much shallower bottom depth, producing ocean surface content that is smoother but with choppy waves. Visually, all four images have significant differences. The standard deviation in figure 12 is improved by normal mapping by approximately 50%, but the low-tessellation outcomes are statistically very different from the high-tessellation outcomes. This case is the kind of scenario that might be impacted by the lack of wave hiding in the near-horizon region with heavy loss of surface detail from LOD degradation.
128
+
129
+ ![01963e65-600e-7489-99aa-9c40c993b3ad_4_372_271_1057_1060_0.jpg](images/01963e65-600e-7489-99aa-9c40c993b3ad_4_372_271_1057_1060_0.jpg)
130
+
131
+ Figure 3: Two IBL skies used in rendering vast oceans. The images cover 360 degrees horizontally and 180 degrees vertically. A spherical skydome ${200}\mathrm{\;{km}}$ above the ocean surface uses the skymap as a texture.
132
+
133
+ Table 1: Maritime conditions for the example cases based on TMA spectrum.
134
+
135
+ <table><tr><td>Case</td><td>Windspeed (m/s)</td><td>RMS Height (m)</td><td>Fetch (km)</td><td>Depth (m)</td></tr><tr><td>12</td><td>1.3</td><td>0.098</td><td>194.8</td><td>381.4</td></tr><tr><td>22</td><td>4.16</td><td>0.225</td><td>204.5</td><td>997.9</td></tr><tr><td>34</td><td>2.38</td><td>0.239</td><td>117</td><td>150.9</td></tr><tr><td>50</td><td>4.91</td><td>0.548</td><td>99.95</td><td>678.8</td></tr><tr><td>87</td><td>1.78</td><td>0.127</td><td>141.35</td><td>263.4</td></tr></table>
136
+
137
+ ![01963e65-600e-7489-99aa-9c40c993b3ad_5_149_154_1499_1092_0.jpg](images/01963e65-600e-7489-99aa-9c40c993b3ad_5_149_154_1499_1092_0.jpg)
138
+
139
+ Figure 4: Case 87. Renders of the ocean surface with and without normal mapping. Top: Near-camera triangle size of $3\mathrm{\;{cm}}$ and double-distance of ${18}\mathrm{\;m}$ (6 generations of doubling, with ${1.9}\mathrm{\;m}$ triangles near the horizon) and no normal mapping, for reference. Top-Middle: Near-camera triangle size of $3\mathrm{\;{cm}}$ and double-distance of ${18}\mathrm{\;m}$ (6 generations of doubling, with ${1.9}\mathrm{\;m}$ triangles near the horizon) with normal mapping. Bottom-Middle: Near-camera triangle size of ${90}\mathrm{\;{cm}}$ and double-distance of ${18}\mathrm{\;m}$ (6 generations of doubling, with ${58}\mathrm{\;m}$ triangles at the horizon) with normal mapping. Bottom: Near-camera triangle size of ${90}\mathrm{\;{cm}}$ and double-distance of ${18}\mathrm{\;m}$ (6 generations of doubling, with ${58}\mathrm{\;m}$ triangles at the horizon) without normal mapping.
140
+
141
+ ![01963e65-600e-7489-99aa-9c40c993b3ad_5_618_1537_573_418_0.jpg](images/01963e65-600e-7489-99aa-9c40c993b3ad_5_618_1537_573_418_0.jpg)
142
+
143
+ Figure 5: Case 87. Left: Azimuth-averaged mean difference from the high resolution mapped case. Right: Azimuth-averaged standard deviations from the high resolution mapped case. Green: High resolution unmapped; Blue: low resolution mapped; Yellow: low resolution unmapped.
144
+
145
+ ![01963e65-600e-7489-99aa-9c40c993b3ad_6_151_183_1497_1090_0.jpg](images/01963e65-600e-7489-99aa-9c40c993b3ad_6_151_183_1497_1090_0.jpg)
146
+
147
+ Figure 6: Case 12. From top to bottom: High resolution with normal map, high resolution without normal map, low resolution with normal map, low resolution without normal map.
148
+
149
+ ![01963e65-600e-7489-99aa-9c40c993b3ad_6_615_1506_577_420_0.jpg](images/01963e65-600e-7489-99aa-9c40c993b3ad_6_615_1506_577_420_0.jpg)
150
+
151
+ Figure 7: Case 12. Left: Azimuth-averaged mean difference from the high resolution mapped case. Right: Azimuth-averaged standard deviations from the high resolution mapped case. Green: High resolution unmapped; Blue: low resolution mapped; Yellow: low resolution unmapped.
152
+
153
+ ![01963e65-600e-7489-99aa-9c40c993b3ad_7_149_182_1500_1091_0.jpg](images/01963e65-600e-7489-99aa-9c40c993b3ad_7_149_182_1500_1091_0.jpg)
154
+
155
+ Figure 8: Case 22. From top to bottom: High resolution with normal map, high resolution without normal map, low resolution with normal map, low resolution without normal map.
156
+
157
+ ![01963e65-600e-7489-99aa-9c40c993b3ad_7_615_1507_577_418_0.jpg](images/01963e65-600e-7489-99aa-9c40c993b3ad_7_615_1507_577_418_0.jpg)
158
+
159
+ Figure 9: Case 22. Left: Azimuth-averaged mean difference from the high resolution mapped case. Right: Azimuth-averaged standard deviations from the high resolution mapped case. Green: High resolution unmapped; Blue: low resolution mapped; Yellow: low resolution unmapped.
160
+
161
+ ![01963e65-600e-7489-99aa-9c40c993b3ad_8_197_207_674_480_0.jpg](images/01963e65-600e-7489-99aa-9c40c993b3ad_8_197_207_674_480_0.jpg)
162
+
163
+ Figure 10: Scatter plot of azimuth-averaged standard deviation, averaged over angles from 0 to 5 degrees below the horizon, for 93 cases. The six outlier points at the top of the scatter are cases with the same sky as case 87 (figure 4) but varying ocean surface conditions. The straight line marks equal standard deviation with and without mapping. Points below the line have lower standard deviation when not mapped, and points above the line have lower standard deviation when mapped.
164
+
165
+ Another case that may suffer from insufficient wave hiding is shown in figure 13. This case has extensive clouds while not being overcast, the highest windspeed, and more than twice the RMS wave height of the other cases. While the fetch is relatively short, the depth is large and the waves are not smooth like those of figure 11. The normal mapping improves the standard deviation in figure 14 by a factor of 2 near the horizon. This case also illustrates that normal mapping a low-tessellation case can also improve the visual result near the camera (toward the bottom of the images).
166
+
167
+ ## 6 CONCLUSIONS
168
+
169
+ The impact of normal mapping is summarized in the scatter plot in figure 10. This plot compares the azimuthally-accumulated standard deviation of the luminance difference, averaged over the near-horizon band (0 to -5 degrees), of the low resolution tessellated renders with and without mapping, for all 93 cases generated. One feature is the six "outlier" cases with much better performance, lying above the rest of the scatter. Figure 4 is one of them, and all six have the same sky but varying ocean conditions.
170
+
171
+ In all 93 cases, normal mapping has brought renders with low-resolution tessellation closer statistically to the high-resolution tessellation. The high standard deviation cases have the highest wave heights among the cases studied. As noted for cases 34 and 50, these high-standard-deviation cases may suffer from lack of wave hiding on large triangles near the horizon as one source for their higher values.
172
+
173
+ The resource requirements of this approach to normal mapping are modest, and in some cases favorable. The calculation of normals on-the-fly at each ray-triangle intersection added between $1\%$ and ${11}\%$ to the total render time for the various cases. However, the low-resolution tessellation cases saved an amount of time building the acceleration structure (in this study, a BVH tree) that was comparable to, and sometimes more than, the additional time cost of the normal calculations.
174
+
175
+ In every one of the 93 cases studied, normal mapping improved the quality of the rendered result, both visually and statistically. The extent of the improvement was sensitive to the sky light and the ocean surface conditions. The application of this approach to normal mapping for ocean surface renders may have a systematic benefit under routine use.
176
+
177
+ ## REFERENCES
178
+
179
+ [1] Autodesk. Boss - bifrost ocean simulation system, 2021.
180
+
181
+ [2] H. Bowles. Multi-resolution ocean rendering in crest ocean system, Aug 2019.
182
+
183
+ [3] J.-M. Cieutat, J.-C. Gonzato, and P. Guitton. A new efficient wave model for maritime training simulator. pp. 202-209, Feb. 2001. doi: 10.1109/SCCG.2001.945355
184
+
185
+ [4] J. Cohen, M. Olano, and D. Manocha. Appearance-preserving simplification. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '98, pp. 115-122. Association for Computing Machinery, New York, NY, USA, 1998. doi: 10.1145/280814.280832
186
+
187
+ [5] P. Debevec. Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '98, pp. 189-198. Association for Computing Machinery, New York, NY, USA, 1998. doi: 10.1145/280814.280864
188
+
189
+ [6] O. Deusen, D. S. Ebert, R. Fedkiw, F. K. Musgrave, P. Prusinkiewicz, D. Roble, J. Stam, and J. Tessendorf. The elements of nature: Interactive and realistic techniques. In ACM SIGGRAPH 2004 Course Notes, SIGGRAPH '04, p. 32-es. Association for Computing Machinery, New York, NY, USA, 2004. doi: 10.1145/1103900.1103932
190
+
191
+ [7] K. L. Gundersen. Ocean surface shader, May 2015.
192
+
193
+ [8] C. Henning and P. Stephenson. Accelerating the ray tracing of height fields. pp. 254-258, Jan. 2004. doi: 10.1145/988834.988878
194
+
195
+ [9] C. J. Horvath. Empirical directional wave spectra for computer graphics. In Proceedings of the 2015 Symposium on Digital Production, DigiPro '15, pp. 29-39. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2791261.2791267
196
+
197
+ [10] S. R. Massel. Ocean Surface Waves: Their Physics and Prediction. WORLD SCIENTIFIC, 1996. doi: 10.1142/2285
198
+
199
+ [11] A. Sagar. Deep learning for ship detection and segmentation, Nov 2019.
200
+
201
+ [12] S. Software. Oceans, 2021.
202
+
203
+ [13] Wikipedia contributors. Relative luminance - Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Relative_luminance&oldid=1053378045, 2021. [Online; accessed 17-December-2021].
204
+
205
+ [14] A. Zapevalov, K. Pokazeev, and T. Chaplina. Simulation of the sea surface for remote sensing. Springer, 2020.
206
+
207
+ ![01963e65-600e-7489-99aa-9c40c993b3ad_9_149_182_1499_1093_0.jpg](images/01963e65-600e-7489-99aa-9c40c993b3ad_9_149_182_1499_1093_0.jpg)
208
+
209
+ Figure 11: Case 34. From top to bottom: High resolution with normal map, high resolution without normal map, low resolution with normal map, low resolution without normal map.
210
+
211
+ ![01963e65-600e-7489-99aa-9c40c993b3ad_9_615_1507_577_421_0.jpg](images/01963e65-600e-7489-99aa-9c40c993b3ad_9_615_1507_577_421_0.jpg)
212
+
213
+ Figure 12: Case 34. Left: Azimuth-averaged mean difference from the high resolution mapped case. Right: Azimuth-averaged standard deviations from the high resolution mapped case. Green: High resolution unmapped; Blue: low resolution mapped; Yellow: low resolution unmapped.
214
+
215
+ ![01963e65-600e-7489-99aa-9c40c993b3ad_10_148_182_1502_1092_0.jpg](images/01963e65-600e-7489-99aa-9c40c993b3ad_10_148_182_1502_1092_0.jpg)
216
+
217
+ Figure 13: Case 50. From top to bottom: High resolution with normal map, high resolution without normal map, low resolution with normal map, low resolution without normal map. In this case, the near-camera structure was also improved by normal mapping.
218
+
219
+ ![01963e65-600e-7489-99aa-9c40c993b3ad_10_616_1507_578_418_0.jpg](images/01963e65-600e-7489-99aa-9c40c993b3ad_10_616_1507_578_418_0.jpg)
220
+
221
+ Figure 14: Case 50. Left: Azimuth-averaged mean difference from the high resolution mapped case. Right: Azimuth-averaged standard deviations from the high resolution mapped case. Green: High resolution unmapped; Blue: low resolution mapped; Yellow: low resolution unmapped.
222
+
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/CT27gkIMlKU/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,194 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § NORMAL MAPS FOR RENDERING VAST OCEAN SCENES
2
+
3
+ Category: Research
4
+
5
+ § ABSTRACT
6
+
7
+ Maritime scene simulations frequently use a height-field representation of the ocean surface. Many scenarios create a visible surface over large areas, with high amounts of detail for a camera close to the surface. Efficient rendering of a vast ocean like this makes use of Level of Detail (LOD) degradation of polygonal tessellation of the surface. But LOD degradation can have consequences in the rendered scene, particularly near the horizon, for both qualitative and quantitative metrics. The magnitude of these impacts depend on the specifics of the ocean surface conditions, and on the structure of sky light illuminating the surface. Here we present a method of extending the concept of normal mapping to efficiently restore full spatial resolution of the surface normals to the LOD degraded surface. The impact of this normal mapping process is evaluated for qualitative and quantitative metrics, across a collection of ocean surface random realizations and for a collection of sky illumination patterns. Specific cases are presented in detail, and a summary assessment of the impact of 93 simulations is presented.
8
+
9
+ Index Terms: Computing Methodologies-Computer Graphics-Rendering-Reflectance Modeling Computing Methodologies—Computer Graphics—Rendering—Ray Tracing Applied Computing-Physical Sciences and engineering-Earth and atmospheric sciences-Environmental sciences
10
+
11
+ § 1 INTRODUCTION
12
+
13
+ There are many graphics applications that employ realistic simulation and rendering of ocean surfaces. The VFX industry [9] and game industry [2] have applied phenomenological models of height-based ocean surfaces for a number of years. Ocean surface simulation is also used in engineering applications for ship operations trainers [3], assessment of remote sensing concepts and systems [14], and AI training of detection and classification systems [11].
14
+
15
+ The height-field approach, while present in many applications, is limited in physical fidelity because it is based on linearized Bernoulli wave theory, and so is not capable of simulating wave breaking, whitecaps, foam, or vortical motion. Its applicability to creating a vast ocean scene, meaning an ocean visible from close to camera out to the horizon, relies on phenomenological oceanographic observations of the statistical properties of ocean surfaces, treated as a random process in time and space. These empirical properties are emulated by random realizations of height fields that evolve according to a dispersion relationship, i.e. linearized Bernoulli theory of a free surface. Of course, the statistical description lacks the impact of complex nonlinear motion of the surface that only occurs transiently. Some applications supplement the height-field with a more complete CFD simulation that is either blended with the height-field surface, or driven partially by the height field realization [1, 12]. This 3D simulation is particularly useful at locations near the camera in a rendered maritime scene, but much less important at mid-range and near-horizon distances from the camera.
16
+
17
+ The creation and rendering of maritime scenes in computer graphics involves describing an ocean over regions of potentially hundreds of square kilometers. For example, for a camera located two meters above the mean ocean level, the distance to the horizon on earth is approximately $5\mathrm{\;{km}}$ , and the horizon distance for a camera at 100 meters above the mean ocean is approximately ${35}\mathrm{\;{km}}$ . The potentially viewable surface area for a $5\mathrm{\;{km}}$ horizon is approximately 80 ${\mathrm{{km}}}^{2}$ , and for a ${35}\mathrm{\;{km}}$ horizon, over ${4000}{\mathrm{\;{km}}}^{2}$ . Construction of a dynamic free-surface in a 3D CFD simulation over this vast scale, with detail sufficient for a camera at a height of meters to hundreds of meters above the surface, has severe practical limitations that the height-field approximation addresses.
18
+
19
+ There are three properties of height-field ocean simulations that allow for practical construction of ocean surfaces over this vast scale. First, when the Fast Fourier Transform method [6] is used to create a patch of ocean surface height field, the properties of the FFT produce a patch of surface that is periodic and can be applied as a tile to cover any desired area, including thousands of square kilometers. Such a repetition over vast areas is known to produce visual artifacts, in which prominent waves appear in a repeating pattern. This is overcome by the second property: that multiple realizations, when created at spatially-disparate resolutions, can be added together, suppressing the repetitive artifact. This property is made possible by the fact that the height field dynamics is a linearized theory, and so several height field realizations added together is equally valid as a height field realization. By choosing the repetition rates of the individual realizations appropriately, the effective repetition distance for the combined height field can be made to be hundreds of $\mathrm{{km}}$ using only 2 or 3 realizations [7], eliminating the artifact for the distances considered in this paper. The third property relates directly to the efficiency of the task of rendering a scene with a vast ocean. When the height field is tessellated into polygons (triangles or quads), standard methods of Level of Detail (LOD) can be employed to sample the height field with larger and larger polygons as the distance from the camera increases. Tessellation allows the rendering system to use the fastest and most efficient ray intersection acceleration structures suitable for a problem. In some applications, the ray tracing task computes the intersection directly against the height field data using a type of acceleration structure [8], eliminating the need for tessellation. Ray-height-field-intersection in this fashion is not as efficient as the approach for tessellated height field. Also for height fields that have some small amount of horizontal displacement, and height fields that are "wrapped" onto curved surfaces such as a spherical earth, the performance of ray-height-field-intersection degrades, whereas ray tracing a tessellated scene is unaffected by those conditions.
20
+
21
+ The application of LOD to the pattern of tessellation has consequences, however. Larger polygons lack detail of the height variations, and so the positions of ray intersections are shifted somewhat, producing a phenomenon known as "wave hiding", i.e. there may be regions in the height field, foreground of the point of intersection with the LOD tessellated polygon, that should have intersected the ray if they had been represented by the tessellation. Ocean surface rendering typically handles light reflection and refraction using Fresnel reflectivity and transmissivity, which is very sensitive to the surface normal. But LOD tessellation loses surface normal variations across the surface of the polygon, and interpolating the vertex normals recovers very little of that detail. Because the larger polygons are distant from the camera, there can be an expectation that these losses of surface detail can have negligible impact of the rendered image. As shown in the examples of this paper, this expectation is born out in some cases, but in most circumstances there is an impact both visually and quantifiably.
22
+
23
+ This paper focuses on the issue of restoring the surface normal detail in the rendering of LOD tessellation surfaces in scenes of vast oceans, providing visual and quantitative measures of the impact of restoring that detail. The approach is to apply a variation of the concept of normal maps [4], which are a tool for establishing detail during rendering, and for altering and controlling detail during rendering. Normal maps are typically generated and stored in a texture image. In the application here, there is no need to generate such a texture image. Instead, the original wave height simulation data can be used to generate a surface normal at any location on the surface by storing horizontal positions as vertex texture coordinates in the tessellated geometry, and reconstructing the surface normal at any location from the interpolated texture coordinate at the location of the ray-polygon intersection. No additional data is generated in preparation for rendering, and the render-time impact of on-the-fly normal construction is modest, and as noted below, can be offset in some cases by reduced time spent in construction of the ray trace acceleration structure.
24
+
25
+ To provide visual and quantitative measures of the impact of this form of normal mapping, four rendering scenarios are produced:
26
+
27
+ 1. LOD tessellate with small polygons and low amounts of LOD degradation (high resolution), and render with normal mapping.
28
+
29
+ 2. LOD tessellate with small polygons and low amounts of LOD degradation (high resolution), and render without normal mapping.
30
+
31
+ 3. LOD tessellate with modest polygons and high amounts of LOD degradation (low resolution), and render with normal mapping.
32
+
33
+ 4. LOD tessellate with modest polygons and high amounts of LOD degradation (low resolution), and render without normal mapping.
34
+
35
+ The four scenarios are created for a collection of 93 cases with randomly varing ocean surface conditions and sky illumination conditions. A visual comparison of the four scenarios for each case shows the relative impact of normal mapping and tessellation detail. Taking scenario 1 as a baseline, for each case the variance of the difference of scenario 1 and each of scenarios 2,3, and 4 provides a quantitative assessment of the impact of normal mapping, particularly near the horizon. For this analysis, "near the horzon" is considered the range of elevations of 5 degrees below the horizon up to the horizon.
36
+
37
+ In the next section, the process of using a linear wave height field description of an ocean free surface is presented. This includes assembling the ocean from multiple "layers" of realizations, applying horizontal displacement if desired, and computing the surface normal from the combination of the surface layers. That is followed by an examination of one possible LOD tessellation process. Many tessellation schemes are possible, but the issues presented above about loss of detail in LOD tessellation apply to all, and normal mapping is applicable to all of them. In section 4 the specific implementation of normal mapping, as it applies to this specific problem, is presented, and in section 5 the impact of normal mapping on the visual and quantitative assessement is presented for a representative few of the 93 cases evaluated. The paper concludes in section 6 with an assessment of the quantitative improvements from normal mapping for all 93 cases generated.
38
+
39
+ § 2 ASSEMBLING A VAST OCEAN
40
+
41
+ Ocean surfaces represented as a height field have been in use for some time [6]. Such a representation is based on a phenomenological model of the statistical properties of the height. This leads to a Fourier-domain representation for the height field as
42
+
43
+ $$
44
+ h\left( {\mathbf{x},t}\right) = \int \frac{{d}^{2}k}{{\left( 2\pi \right) }^{2}}\widetilde{h}\left( {\mathbf{k},t}\right) \exp \left( {i\mathbf{k} \cdot \mathbf{x}}\right) \tag{1}
45
+ $$
46
+
47
+ where the height $h$ at the horizontal position $\mathbf{x} \equiv \left( {x,z}\right)$ on the ocean surface is the Fourier transform of a complex height amplitude $\widetilde{h}$ as a function of a 2D Fourier wavevector $\mathbf{k}$ . The time-dependent amplitude is assembled from random time-independent amplitudes ${\widetilde{h}}_{0}\left( \mathbf{k}\right)$ and a dispersion relation $\omega \left( k\right)$
48
+
49
+ $$
50
+ \widetilde{h}\left( {\mathbf{k},t}\right) = {\widetilde{h}}_{0}\left( \mathbf{k}\right) \exp \left( {{i\omega }\left( k\right) t}\right) + {\widetilde{h}}_{0}^{ * }\left( {-\mathbf{k}}\right) \exp \left( {-{i\omega }\left( k\right) t}\right) \tag{2}
51
+ $$
52
+
53
+ and $k$ is the magnitude of the $2\mathrm{D}$ wavevector $\mathbf{k}$ . In turn, the complex height amplitudes ${\widetilde{h}}_{0}\left( \mathbf{k}\right)$ are a random realization of complex values from a distribution that has a phenomenologically-prescribed spatial spectrum $P\left( \mathbf{k}\right)$ . There are a variety of spatial spectra that have been used for this application $\left\lbrack {9,{10}}\right\rbrack$ .
54
+
55
+ This height field representation is sometimes supplemented with horizontal displacements of the surface, using a logic based on Gerstner waves that constructs the 2D horizontal displacement $\mathbf{D}\left( {\mathbf{x},t}\right)$ at any point from the height field in the Fourier space representation as
56
+
57
+ $$
58
+ \mathbf{D}\left( {\mathbf{x},t}\right) = {f}_{d}\int \frac{{d}^{2}k}{{\left( 2\pi \right) }^{2}}\left( {-i\frac{\mathbf{k}}{k}}\right) \widetilde{h}\left( {\mathbf{k},t}\right) \exp \left( {i\mathbf{k} \cdot \mathbf{x}}\right) \tag{3}
59
+ $$
60
+
61
+ and ${f}_{d}$ is a user-specified dimensionless displacement scaling parameter. With this displacement, the 3D position of the ocean surface for the "nominal" flat-plane coordinate $\mathbf{x}$ is
62
+
63
+ $$
64
+ \mathbf{X}\left( {\mathbf{x},t}\right) = \mathbf{x} + \mathbf{D}\left( {\mathbf{x},t}\right) + \widehat{\mathbf{y}}h\left( {\mathbf{x},t}\right) \tag{4}
65
+ $$
66
+
67
+ and $\widehat{\mathbf{y}}$ is the unit vector pointing upward.
68
+
69
+ In numerical implementations, the Fourier transforms in equations 1 and 3 are replaced with Fast Fourier Transforms (FFTs), which generate the height and displacement fields on a rectangular spatial grid with user-chosen number of grid points and spatial extent, and sums over a discrete set of wave vectors that complement the number of grid points and spatial extent to the Nyquist limit. Evaluating quantities at locations that are not grid points is accomplished via bilinear interpolation. This gridded height field is also spatially periodic as a result of the FFT computation. The periodicity can be used as a tiling scheme to extend the ocean surface beyond the nominal bounds of the FFT domain. An unfortunate consequence of the periodicity of the tile pattern is that visualizations of the ocean surface can have noticeable repetitions of prominent waves in the scene. This is overcome by generating multiple random realizations of height, ${h}_{i}\left( {\mathbf{x},t}\right)$ and corresponding displacements, ${\mathbf{D}}_{i}\left( {\mathbf{x},t}\right)$ for $i = 0,\ldots ,N - 1$ , with different choices of spatial extent and periodicity of the realizations. The full surface is assembled as the sum of these "layers":
70
+
71
+ $$
72
+ h\left( {\mathbf{x},t}\right) = \mathop{\sum }\limits_{{i = 0}}^{{N - 1}}{h}_{i}\left( {\mathbf{x},t}\right) \tag{5}
73
+ $$
74
+
75
+ $$
76
+ \mathbf{D}\left( {\mathbf{x},t}\right) = \mathop{\sum }\limits_{{i = 0}}^{{N - 1}}{\mathbf{D}}_{i}\left( {\mathbf{x},t}\right) \tag{6}
77
+ $$
78
+
79
+ When the spatial extents of the realizations are not related via integer ratios, the repetition of waves can be reduced or completely eliminated. This makes it possible to visually represent a vast ocean expanse even with only a few realizations, e.g. $N = 2$ or 3, free from repetition artifacts.
80
+
81
+ The normal for the displaced surface is computed from the ex-
82
+
83
+ pression
84
+
85
+ $$
86
+ {\widehat{\mathbf{n}}}_{S}\left( {\mathbf{x},t}\right) = \frac{\partial \mathbf{X}}{\partial x} \times \frac{\partial \mathbf{X}}{\partial z}/\left| {\frac{\partial \mathbf{X}}{\partial x} \times \frac{\partial \mathbf{X}}{\partial z}}\right| \tag{7}
87
+ $$
88
+
89
+ with partial derivatives obtained in practice either by finite differences, or by the more accurate FFT evaluation of the derivatives. For the examples shown here, the FFT approach was used to compute additional data for the spatial gradients for each layer.
90
+
91
+ Rendering a maritime scene using Global Illumination algorithms, such as Monte Carlo path tracing, is assisted by tessellating the ocean surface into polygons. Here the discussion is focused on tessellation into triangles, but the results apply equally to other choices of polygonalization. The tessellation lays out a network of grid points in the $\mathbf{x}$ coordinate, i.e. ${\mathbf{x}}_{i}$ that are arranged in collections of triangles in the flat 2D plane. Each vertex $i$ of the ocean surface tessellated geometry hold, among other possible rendering-related information, the 3D position ${\mathbf{X}}_{i} = \mathbf{X}\left( {{\mathbf{x}}_{i},t}\right)$ , surface normal ${\widehat{\mathbf{n}}}_{Si} =$ ${\widehat{\mathbf{n}}}_{S}\left( {{\mathbf{x}}_{i},t}\right)$ , and texture coordinate ${\mathbf{x}}_{i}$ . For a ray intersecting one of the triangles, the vertex normals can be interpolated to produce a surface normal at the location of the ray intersection, for use in shading and/or reflection and refraction of rays.
92
+
93
+ The choice of tessellation pattern is very dependent on the scene content within the camera field of view. For regions close to the camera, a reasonable choice is to generate triangles with sides roughly the same as the smallest grid spacing in the set of height fields used, although in some cases finer detail may be of interest because the horizontal displacements can compress together regions near the peaks of waves. At great distance from the camera, however, where the camera cannot resolve small triangles, Level of Detail (LOD) schemes are valuable, and rendering a vast ocean efficiently makes LOD tessellation essential. The smallest details usually present in phenomenological ocean spectra are around a few cm in size. If triangles with $3\mathrm{\;{cm}}$ edges are tessellated from a camera located $2\mathrm{\;m}$ above the mean ocean surface to the horizon $5\mathrm{\;{km}}$ away, the number of triangles that must be generated is on the order of ${10}^{10}$, although the exact amount depends on the choice of tessellation pattern. One LOD scheme to reduce the number of triangles is the doubling method, in which a user-specified "double-distance" gives the range at which the size of the triangles and the double-distance are both doubled. Figure 1 shows the doubling method for two different tessellation patterns. For example, if the triangle size close to the camera is $3\mathrm{\;{cm}}$ and the double-distance is ${14}\mathrm{\;m}$, triangles beyond ${14}\mathrm{\;m}$ are doubled in size to $6\mathrm{\;{cm}}$, and the double-distance is increased to ${28}\mathrm{\;m}$. Beyond ${42}\mathrm{\;m}$ the triangles are doubled to ${12}\mathrm{\;{cm}}$ and the double-distance is doubled to ${56}\mathrm{\;m}$, and so on until tessellation terminates at the furthest desired distance. In this example extending out to the horizon at $5\mathrm{\;{km}}$, doubling happens 6 times and the triangle size at the furthest distance, near the horizon, is ${1.9}\mathrm{\;m}$. The number of triangles is reduced from around ${10}^{10}$ to around ${10}^{8}$, which is a large but manageable number of triangles in current Monte Carlo path tracing software. Even so, ${10}^{8}$ triangles take time to assemble and distribute into an acceleration structure such as a BVH tree, and carry a substantial memory burden. More aggressive doubling can further reduce the resources needed. If the above example starts with triangles at a resolution of $3\mathrm{\;{cm}}$ but a double-distance of only $6\mathrm{\;m}$, the triangle count reduces to ${20}\%$ of the number for the ${14}\mathrm{\;m}$ double-distance. Similarly, keeping a double-distance of ${14}\mathrm{\;m}$ but increasing the smallest resolution to ${90}\mathrm{\;{cm}}$ reduces the triangle count to $3\%$ of the original. Aggressive application of LOD has consequences, however.
+
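+ A minimal sketch of the doubling arithmetic (the names are ours, and the number of generations is fixed by the user; the examples in this paper use 6):
+
+ ```python
+ def doubling_bands(base_size, double_distance, generations):
+     """Band boundaries and triangle edge lengths for the doubling method:
+     at each boundary the triangle size and the double-distance both double,
+     and the next boundary lies one (new) double-distance further out."""
+     bands = [(0.0, base_size)]
+     boundary, step, size = double_distance, double_distance, base_size
+     for _ in range(generations):
+         size, step = size * 2.0, step * 2.0
+         bands.append((boundary, size))
+         boundary += step
+     return bands
+
+ # The example in the text: 3 cm triangles and a 14 m double-distance give
+ # 6 cm triangles beyond 14 m, 12 cm beyond 42 m, and 1.92 m after 6 doublings.
+ for start, size in doubling_bands(0.03, 14.0, 6):
+     print(f"beyond {start:6.0f} m: {100.0 * size:5.0f} cm triangles")
+ ```
+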
+ § 3 CONSEQUENCES OF LOD TESSELLATION NEAR THE HORIZON
+
+ Lighting of maritime scenes can be important near the horizon. When the sun is low in the sky, the brightest part of the sky is near the horizon, and the largest gradients of the light field are near the horizon. When the sun is high in the sky, the horizon still has substantial lighting impact because of volumetric scattering of the sunlight by the atmosphere. The loss of wave detail near the horizon due to LOD tessellation could therefore lead to biased and/or incorrect rendering of near-horizon lighting. This concern is amplified by the fact that the reflectivity, direction of refraction, and direction of reflection at ray-intersection points with water surfaces are very sensitive to the surface normal. In turn, the surface normal is obtained by interpolation of vertex normals, increasing the potentially negative impact of LOD tessellation. The visual impact of LOD tessellation is demonstrated in Figure 2, which shows an ocean surface and sky rendered in a Monte Carlo path tracer with two different choices of the near-camera triangle size.
+
+ Figure 1: Two examples of tessellation patterns that use the doubling method. Top: Tessellation around the field of view of a perspective camera. Bottom: Tessellation in all directions in a square pattern of nested grids.
+
+ § 4 NORMAL MAPPING FOR OCEAN SURFACE RENDERING
+
+ A very successful way of adding and controlling detail in the shading of a surface is to inject normal maps into the shading algorithm. The normal arising from a normal map can modify or completely replace the interpolated vertex normal, and can be incorporated into rendering pipelines as an encoded texture. However, for rendering vast ocean surfaces with LOD tessellation, it is not necessary to generate a special-purpose texture for normal mapping, and in fact such a texture would potentially be very large once repetitive artifacts have been suppressed. Instead, we can continue to use the height $h\left( {\mathbf{x},t}\right)$ and displacement $\mathbf{D}\left( {\mathbf{x},t}\right)$ fields, composed of multiple layers ${h}_{i}$ and ${\mathbf{D}}_{i}$, and their spatial gradients. At the location of the intersection of a ray with a triangle, the texture coordinates on the triangle are interpolated to produce a nominal texture-coordinate horizontal position $\mathbf{x}$ at the intersection point. This coordinate can be applied in equation 7 with the ocean realization data to compute a normal at the intersection point. This normal is used for all subsequent shading and path spawning operations.
+
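+ The paper's own expression is its equation 7, which appears earlier in the document; purely as an illustration, a generic normal for a displaced surface $\mathbf{X} = (\mathbf{x} + \mathbf{D}, h)$ can be sketched from the cross product of its tangent vectors (the names and conventions below are ours):
+
+ ```python
+ import numpy as np
+
+ def shading_normal(dh, dD):
+     """Normal of the displaced surface X(u, v) = (u + Dx, v + Dy, h),
+     with height as the third component. `dh` is the height gradient
+     (dh/du, dh/dv); `dD` is the 2x2 Jacobian of the horizontal
+     displacement, dD[i][j] = dD_i / dx_j."""
+     t_u = np.array([1.0 + dD[0][0], dD[1][0], dh[0]])  # dX/du
+     t_v = np.array([dD[0][1], 1.0 + dD[1][1], dh[1]])  # dX/dv
+     n = np.cross(t_u, t_v)
+     return n / np.linalg.norm(n)
+ ```
+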
+ This normal, computed on the fly at each ray-triangle intersection, contains all of the spatial detail in the ocean realization. Fresnel optical properties are sensitive to the surface normal, so capturing this spatial detail has important benefits, as demonstrated in the sections below. However, it does not capture the "hiding" effects that the full height field would include, in which rays may intersect the surface earlier or later than the triangle intersection as a consequence of the lost height-field detail.
+
+ Figure 2: Two renders of the same ocean with differing amounts of tessellation. The camera is $2\mathrm{\;m}$ above the mean ocean surface, and tessellation extends to the horizon $5\mathrm{\;{km}}$ away. Left: Near-camera triangle size of $3\mathrm{\;{cm}}$ and double-distance of ${18}\mathrm{\;m}$ (6 generations of doubling, with ${1.9}\mathrm{\;m}$ triangles near the horizon). Right: Near-camera triangle size of ${90}\mathrm{\;{cm}}$ and double-distance of ${18}\mathrm{\;m}$ (6 generations of doubling, with ${58}\mathrm{\;m}$ triangles at the horizon). The camera has a 360 degree field of view with equirectangular projection, and these frames are cropped from the full images.
+
+ § 5 IMPACT OF NORMAL MAPPING ON OCEAN SURFACE RENDERING
+
+ The examples in this paper compare two tessellations, for a variety of lighting and ocean conditions. The double-distance is ${18}\mathrm{\;m}$ for both tessellations. The high resolution tessellation has near-camera triangles $3\mathrm{\;{cm}}$ in size, which grow to ${1.9}\mathrm{\;m}$ at the horizon $5\mathrm{\;{km}}$ from the camera, for a total of 22,341,600 triangles from the camera to the horizon in all directions. The low resolution tessellation has near-camera triangles with size ${90}\mathrm{\;{cm}}$, which grow to ${58}\mathrm{\;m}$ at the horizon, for a total of 25,520 triangles, roughly 0.11% of the number of triangles for the high resolution tessellation. The Monte Carlo path tracer used 1000 samples per pixel. Each intersection with an ocean surface triangle generated a Fresnel reflection and refraction. Each Monte Carlo path was limited to no more than 10 segments because initial tests found no significant additional contribution from paths with more segments. The camera has a full 360 degree spherical field of view in order to capture the impact of resolution and normal mapping throughout the environment. The only lighting of the scene was from Image Based Lighting (IBL) [5] with 360 degree sky maps composed from 360 degree photos with ground clutter removed. Figure 3 shows two of the IBL skies used. In these images, the horizontal bisector is at the horizon, the top of the image looks straight up, and the bottom looks straight down.
+
+ A collection of 93 variations of maritime conditions and sky IBL maps was generated. The ocean realizations were based on the TMA spectrum [10], with randomly generated spectrum parameters. The IBL sky for each case was chosen from a collection of eighteen sky maps. Here we show specific results from five of the 93 variations, chosen to illustrate the range of outcomes found. The TMA spectrum parameters for each case are in Table 1.
+
+ The variations have been evaluated based on two criteria for the impact of resolution and normal mapping near the horizon. The visual impact criterion compares the rendered images side-by-side to show the qualitative relative contributions of tessellation resolution and normal mapping. A quantitative statistical criterion treats the high resolution normal mapped render as a baseline, and computes the mean and standard deviation of the relative luminance [13] difference between the baseline and each of the other three renders, for each case. Both criteria are presented below for the five chosen illustrative cases. Figures 4 and 5 show one case which demonstrates the impact of normal mapping, particularly when the strongest light in the sky is near the horizon. The visual demonstration in figure 4 shows that the low-resolution tessellation with normal mapping (bottom-middle image) produces very nearly the same image detail as the high-resolution tessellation, whether the high-resolution tessellation case is normal mapped (top-middle image) or not (top image). This visual appearance is borne out in the quantitative evaluation of the difference between the high tessellation, normal mapped render and each of the three other options (high tessellation without normal mapping, low tessellation with and without normal mapping). Figure 5 shows that difference, accumulated azimuthally into the mean and standard deviation of the relative-luminance difference. The data is plotted for elevations from the horizon to five degrees below the horizon.
+
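+ A sketch of this statistical criterion (assuming equirectangular, linear-RGB renders whose vertical midline is the horizon, and Rec. 709 relative-luminance weights; the function is our illustration, not the paper's code):
+
+ ```python
+ import numpy as np
+
+ def near_horizon_stats(img_base, img_test, degrees_below=5):
+     """Azimuthally accumulated mean and standard deviation of the
+     relative-luminance difference between a baseline and a test render,
+     reported per elevation row from the horizon downward."""
+     w709 = np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 luminance weights
+     diff = (img_test - img_base) @ w709         # H x W luminance difference
+     h = diff.shape[0]
+     lo = h // 2                                 # horizon row (image midline)
+     hi = lo + int(round(degrees_below * h / 180.0))
+     band = diff[lo:hi]                          # horizon .. N degrees below
+     return band.mean(axis=1), band.std(axis=1)  # accumulate over azimuth
+ ```
+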
+ In this case, and in all of the cases, the green curve, representing the high tessellation without normal mapping, is most similar to the baseline high tessellation normal mapped image. Also true in all cases is that the low tessellation normal mapped result (blue curve) better matches the baseline than low tessellation without normal mapping (yellow curve). In the particular case of figure 5, normal mapping reduced the low tessellation standard deviation by a factor of 5 to 10. This case demonstrates clear and substantial impacts from normal mapping, but the impact depends on both ocean surface conditions and lighting conditions. The four cases below show more outcomes from variations of sky and surface parameter choices.
+
+ Figure 6 shows an environment with an overcast sky, relatively uniform intensity across the IBL image, low windspeed, and low wave height. The four images (high tessellation with and without mapping, low tessellation with and without mapping) are all very similar to each other. The statistical behavior in figure 7 shows that the low tessellation cases follow each other closely, although the low-tessellation-mapped data have lower standard deviation than the low-tessellation-unmapped data at all elevations. Note, however, that wave height is not the single key factor driving improvement from normal mapping: the case in figures 4 and 5 also has low wave height yet shows substantial improvement from normal mapping, while also having more variation in lighting than the current case.
+
+ The environment in figure 8 contains a partly cloudy sky, stronger windspeed than the case in figure 6, and mild wave heights. The visual improvement from normal mapping is significant. The low-tessellation-mapped result has visible differences from the baseline near the horizon, although much less pronounced than in the low-tessellation-unmapped image. The standard deviation in figure 9 shows about ${20}\%$ improvement from mapping.
+
+ The case in figure 11 has the same sky as figure 8 and similar windspeed and RMS wave height. But this case has much shorter fetch and much shallower bottom depth, producing ocean surface content that is smoother but with choppy waves. Visually, all four images have significant differences. The standard deviation in figure 12 is improved by normal mapping by approximately 50%, but the low-tessellation outcomes are statistically very different from the high-tessellation outcomes. This case is the kind of scenario that might be impacted by the lack of wave hiding in the near-horizon region, given the heavy loss of surface detail from LOD degradation.
+
+ Figure 3: Two IBL skies used in rendering vast oceans. The images cover 360 degrees horizontally and 180 degrees vertically. A spherical skydome ${200}\mathrm{\;{km}}$ above the ocean surface uses the skymap as a texture.
+
+ Table 1: Maritime conditions for the example cases based on TMA spectrum.
+
+ | Case | Windspeed (m/s) | RMS Height (m) | Fetch (km) | Depth (m) |
+ |------|-----------------|----------------|------------|-----------|
+ | 12 | 1.3 | 0.098 | 194.8 | 381.4 |
+ | 22 | 4.16 | 0.225 | 204.5 | 997.9 |
+ | 34 | 2.38 | 0.239 | 117 | 150.9 |
+ | 50 | 4.91 | 0.548 | 99.95 | 678.8 |
+ | 87 | 1.78 | 0.127 | 141.35 | 263.4 |
+
+ Figure 4: Case 87. Renders with and without normal mapping of the ocean surface. Top: Near-camera triangle size of $3\mathrm{\;{cm}}$ and double-distance of ${18}\mathrm{\;m}$ (6 generations of doubling, with ${1.9}\mathrm{\;m}$ triangles near the horizon) and no normal mapping, for reference. Top-Middle: Near-camera triangle size of $3\mathrm{\;{cm}}$ and double-distance of ${18}\mathrm{\;m}$ (6 generations of doubling, with ${1.9}\mathrm{\;m}$ triangles near the horizon) with normal mapping. Bottom-Middle: Near-camera triangle size of ${90}\mathrm{\;{cm}}$ and double-distance of ${18}\mathrm{\;m}$ (6 generations of doubling, with ${58}\mathrm{\;m}$ triangles at the horizon) with normal mapping. Bottom: Near-camera triangle size of ${90}\mathrm{\;{cm}}$ and double-distance of ${18}\mathrm{\;m}$ (6 generations of doubling, with ${58}\mathrm{\;m}$ triangles at the horizon) without normal mapping.
+
+ Figure 5: Case 87. Left: Azimuth-averaged mean difference from the high resolution mapped case. Right: Azimuth-averaged standard deviations from the high resolution mapped case. Green: High resolution unmapped; Blue: low resolution mapped; Yellow: low resolution unmapped.
+
+ Figure 6: Case 12. From top to bottom: High resolution with normal map, high resolution without normal map, low resolution with normal map, low resolution without normal map.
+
+ Figure 7: Case 12. Left: Azimuth-averaged mean difference from the high resolution mapped case. Right: Azimuth-averaged standard deviations from the high resolution mapped case. Green: High resolution unmapped; Blue: low resolution mapped; Yellow: low resolution unmapped.
+
+ Figure 8: Case 22. From top to bottom: High resolution with normal map, high resolution without normal map, low resolution with normal map, low resolution without normal map.
+
+ Figure 9: Case 22. Left: Azimuth-averaged mean difference from the high resolution mapped case. Right: Azimuth-averaged standard deviations from the high resolution mapped case. Green: High resolution unmapped; Blue: low resolution mapped; Yellow: low resolution unmapped.
+
+ Figure 10: Scatter plot of azimuth-averaged standard deviation, averaged over angles from 0 to 5 degrees below the horizon, for 93 cases. The six outlier points at the top of the scatter are cases with the same sky as case 87 (figure 4) but varying ocean surface conditions. The straight line is equal variance with and without mapping. Points below the line have lower standard deviation when not mapped, and points above the line have lower standard deviation when mapped.
+
+ Another case that may suffer from insufficient wave hiding is shown in figure 13. This case has extensive clouds while not being overcast, the highest windspeed, and more than twice the RMS wave height of the other cases. While the fetch is relatively short, the depth is large and the waves are not smooth like those of figure 11. Normal mapping improves the standard deviation in figure 14 by a factor of 2 near the horizon. This case also illustrates that normal mapping a low-tessellation case can also improve the visual result near the camera (toward the bottom of the images).
+
+ § 6 CONCLUSIONS
+
+ The impact of normal mapping is summarized in the scatter plot in figure 10. This plot compares the azimuthally-accumulated standard deviation of the luminance difference, averaged over the near-horizon band (0 to -5 degrees), for the low resolution tessellated renders with and without mapping, for all 93 cases generated. One feature is the six "outlier" cases with much better performance, lying above the scatter. Figure 4 is one of them, and all six have the same sky but varying ocean conditions.
+
+ In all 93 cases, normal mapping brought the low-resolution tessellation renders statistically closer to the high-resolution tessellation. The high-standard-deviation cases have the highest wave heights among the cases studied. As noted for cases 34 and 50, these high-standard-deviation cases may suffer from the lack of wave hiding on large triangles near the horizon as one source of their higher values.
+
+ The resource requirements of this approach to normal mapping are modest, and in some cases favorable. The calculation of normals on-the-fly at each ray-triangle intersection added between $1\%$ and ${11}\%$ to the total render time for the various cases. However, the low-resolution tessellation cases saved an amount of time building the acceleration structure, in this study a BVH tree, that was comparable to, and sometimes more than, the additional time cost of the normal calculations.
+
+ In every one of the 93 cases studied, normal mapping improved the quality of the rendered result, both visually and statistically. The extent of the improvement was sensitive to the sky light and the ocean surface conditions. The application of this approach to normal mapping for ocean surface renders may have a systematic benefit under routine use.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/D9M6uwZGC5y/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,473 @@
+ # Detecting Mild Cognitive Impairment Through Digitized Trail-Making Test Interface
+
+ Raniero Lara-Garduno*, Yajun Jia${}^{ \dagger }$, Nicolaas E. Deutz${}^{ \ddagger }$, Marielle Engelen${}^{§}$, Nancy Leslie${}^{¶}$, Tracy Hammond${}^{\parallel }$
+
+ Texas A&M University
+
+ ## Abstract
+
+ With the number of Alzheimer's patients reaching 5 million in 2014 according to the U.S. Centers for Disease Control and Prevention, increasing emphasis has been placed on identifying and understanding its precursor condition, Mild Cognitive Impairment (MCI). MCI is characterized by subtle but abnormal cognitive decline and is challenging to detect without formal testing. Neuropsychologists use paper-and-pencil tests such as the Trail-Making Test (TMT) for diagnosis, and ongoing research places importance on high-granularity sketch data from digital TMTs. We present SmartStrokes, a digital TMT app designed to simulate the paper-and-pencil testing experience on a tablet with a stylus. Our contribution frames the principles of digital sketch recognition and Human-Computer Interaction (HCI) within the existing neuropsychological test, outlining the creation of a pair of classification models that identify MCI on an individual segmented-line basis. Such a per-line classification method could localize the sketching behavior indicative of MCI. We also present an interface for the digital TMT and a refinement of line segmentation algorithms from previous research to better distinguish between the actions that a participant takes when completing the exam.
+
+ Index Terms: Applied computing-Health Informatics; Human-centered computing-Human computer interaction; Human-centered computing-Tablet computers
+
+ ## 1 INTRODUCTION
+
+ The U.S. Centers for Disease Control and Prevention reported 5 million Alzheimer's patients in 2014, with that number expected to more than double to 13.9 million by 2060. Due to advancements in interventions aimed at mild-to-moderate cases of Alzheimer's disease, neuropsychologists have placed an increasing emphasis on early detection of Mild Cognitive Impairment (MCI) to better preserve quality of life $\left\lbrack {4,{20},{34}}\right\rbrack$ . A clinical neuropsychologist typically conducts paper-and-pencil cognitive examinations on a patient to help detect MCI. This process is historically laborious, requires multiple rounds of testing, and frequently requires non-standardized, subjective analysis of a patient's subtle behavioral patterns. Digitizing these clinical examinations, specifically the Trail-Making Test among them, has allowed researchers to attempt to aid the diagnosis process by employing machine learning for behavioral analysis. Existing work in this space has not yet fully leveraged recognition techniques used in digital sketch recognition, particularly research that links sketching with cognition. In particular, the application of HCI principles to detect MCI via digitized testing interfaces in the context of neuropsychology is a topic we believe has not yet been fully explored. The contribution presented in this paper integrates HCI and digital sketch recognition into the domain of neuropsychology to deliver more granular recognition on a digitized TMT. We analyze and classify individual test segments rather than following the more traditional method of one determination for an entire test. We also discuss the limitations and potential avenues for future research that surfaced during the completion of this research.
+
+ ![01963e6d-b352-7694-9f93-d88f2f170e70_0_1032_562_508_667_0.jpg](images/01963e6d-b352-7694-9f93-d88f2f170e70_0_1032_562_508_667_0.jpg)
+
+ Figure 1: A sample completed test in our SmartStrokes app. The interface is designed to be as close as possible to an actual paper-and-pencil test
+
+ ### 1.1 Mild Cognitive Impairment
+
+ The characteristics of MCI were initially established as part of the Global Deterioration Scale (GDS) [47], defining it as a syndrome where an individual's cognitive decline is greater than expected for their age $\left\lbrack {{24},{48}}\right\rbrack$ . It is considered to be a precursor to more severe cognitive decline that may advance into dementia, with Alzheimer's disease in particular being likely. The existence of MCI in itself, however, does not indicate that cognition will necessarily decline further, as many MCI patients never develop dementia. Additionally, unlike these more severe forms of cognitive decline, MCI does not severely impact one's daily quality of life [64] and can thus be challenging to diagnose: the signs are often subtle and can be easily dismissed as the expected decline in executive function for an individual's age. MCI is characterized as not having a significant impact on daily activities, and may not manifest in a noticeable way for years, making it difficult to definitively diagnose and track.
+
+ ---
+
+ *e-mail: raniero@LGinbox.com
+
+ ${}^{ \dagger }$ e-mail: jia560@tamu.edu
+
+ ${}^{ \ddagger }$ e-mail: nep.deutz@tamu.edu
+
+ §e-mail: mpkj.engelen@ctral.org
+
+ ¶e-mail: nleslie.phd@gmail.com
+
+ ${}^{\parallel }$ e-mail: hammond@tamu.edu
+
+ ---
+
+ ![01963e6d-b352-7694-9f93-d88f2f170e70_1_219_156_575_703_0.jpg](images/01963e6d-b352-7694-9f93-d88f2f170e70_1_219_156_575_703_0.jpg)
+
+ Figure 2: Sample of traditional paper-and-pencil versions of Trail-Making Test B
+
+ In cases where MCI does worsen, the characteristics of severe cognitive decline can vary depending on background and genetic conditions particular to the patient. Reisberg et al. [47] specify the emergence of "behavioral disturbances", neurological abnormalities, electrophysiological changes, motor deficits, balance and coordination deficits, and general deficits in activities of daily living. With an increase in life expectancy correlating to a rise in the prevalence of dementia and Alzheimer's disease, research attention has turned to the successful identification of MCI and how existing tools can be improved to assist.
+
+ ### 1.2 Trail-Making Test
+
+ Clinicians have historically relied on paper-and-pencil neuropsychological examinations as one of the primary methods to diagnose MCI. These typically involve a series of simple tasks for an individual to complete, and have been shown through several decades of research to be sensitive to the same cognitive functions affected by MCI [6]. We focus on the Trail-Making Test, a connect-the-dots task that tests executive function and active memory. Initially conceived as a test to assess general intelligence, the TMT is known to be sensitive to cognitive decline and possible early signs of dementia. Currently the Trail-Making Test is widely used in neuropsychologists' test batteries to assess for various signs of cognitive decline, including MCI [59]. The switching between numbers and letters found in the TMT-B relies on frontal lobe function $\left\lbrack {{12},{26},{29},{40},{53}}\right\rbrack$ , and is one of the primary reasons for the sensitivity of the test to MCI.
+
+ The test consists of two separate connect-the-dots tasks. The participant is handed a piece of paper with a series of labeled dots printed on it, along with a pen or pencil with which to connect them. The A variant of this test consists of connecting dots in ascending numerical order (1, 2, 3, and so on), while the B variant requires connecting dots alternating between numbers and letters in ascending order (1, A, 2, B, and so on). The participant is typically asked to complete the two variants as a pair, starting with variant A and immediately followed by variant B. Multiple layouts of these tests exist and are used when a clinician wishes to test the participant more than once, since a different arrangement of labeled dots is necessary to avoid the learning effect. Dot layout has been observed to directly affect time to completion in healthy populations [5]. Participants are asked not to lift their pen or pencil whenever possible, even when they connect to the wrong dot and must return to the previous dot.
+
+ ![01963e6d-b352-7694-9f93-d88f2f170e70_1_996_142_580_773_0.jpg](images/01963e6d-b352-7694-9f93-d88f2f170e70_1_996_142_580_773_0.jpg)
+
+ Figure 3: The Montreal Cognitive Assessment (MoCA). Image courtesy of mocatest.org [36]
+
+ Assessment of Trail-Making Tests is primarily done in two ways: comparing the test score with established normative data, and qualitatively observing the behavior of a participant as the test is being completed. The test score is calculated as the test's time to completion rounded to the nearest whole number. Because the score is reported as a single numerical value, qualitative observation becomes necessary, and over the decades clinicians have devised multiple methods for assessing a participant's performance as they complete the test. Colored pencils, video recordings, and observations of behaviors from sitting posture to the way the patient holds their pen are just some of the qualitative methods employed by clinicians.
+
+ These measures highlight the notion that the behavior a participant exhibits during the test is just as important, if not more so, than the single reported time-to-completion score. The subtle nature of MCI, however, has historically meant that clinicians rely on their own expertise and experience for qualitative observations. Recent advancements in digital sketching technology have made it feasible for these tests to be assessed with much higher granularity than in previous decades, but research aiming to capitalize on this feasibility is limited.
+
+ ### 1.3 The Montreal Cognitive Assessment
+
+ The Montreal Cognitive Assessment (MoCA) is among the most widely used assessment protocols for gauging an individual's cognitive function. It consists of various short tasks, both written and verbal, aimed at testing various functions of a person's cognition. It is frequently administered as a triage to help determine whether a patient requires further diagnosis and possible treatment, and is also frequently used to determine whether patients have symptoms of MCI [39]. Originally developed in 1995 by Ziad Nasreddine [23], it has since been the subject of various validation studies [26,57]. Normative data for the MoCA has been collected and analyzed for patients of various populations $\left\lbrack {{38},{54}}\right\rbrack$ , diseases $\left\lbrack {{11},{45}}\right\rbrack$ , cognitive states $\left\lbrack {{11},{25}}\right\rbrack$ , and post-trauma conditions $\left\lbrack {{21},{60}}\right\rbrack$ . The primary conditions for which it has been validated include MCI and the dementias of Alzheimer's disease and Parkinson's disease $\left\lbrack {{14},{39},{57}}\right\rbrack$ , and it has been shown to be more sensitive to MCI-related decline than other examinations such as the Mini-Mental State Examination (MMSE) [49]. Hobson describes that the MoCA can assess cognitive domains including but not limited to "Visuospatial/Executive, Naming, Memory, Attention, Language, Abstraction, Delayed Recall and Orientation (to time and place)" [23].
+
+ The MoCA is frequently used in tandem with other neuropsychological examinations to ensure that its results are consistent across various other examinations such as the Trail-Making Test. In effect, one might consider performance on the MoCA and the TMT to be correlated, such that exceptionally good or poor performance on one test is likely to be mirrored in the other. Indeed, a brief TMT-B appears in the MoCA as one of its tasks [26], and both tests draw on the same frontal lobe function.
+
+ ## 2 RELATED WORK
+
+ ### 2.1 Cognition in Digital Sketch Recognition
+
+ One of the prevalent methods of digital sketch recognition is the analysis of sketches as "gestures" characterized by the geometric properties of the sketch. These include, but are not limited to, line length, speed, acceleration, line straightness, and various trigonometric properties of line strokes. Individual features were calculated in early efforts from Rubine et al. $\left\lbrack {{50},{51}}\right\rbrack$ , and later expanded by Stahovich et al. [10, 27, 55], Long et al. [31], Paulson et al. [43, 44], and Alamudum et al. [3]. Digital sketch recognition initially leveraged machine learning to afford developers tools to recognize simple geometric shapes. Shape recognition then expanded to alphabets, to scaffolded recognition that identifies components of complex composite shapes, and to entire sketches. Machine learning algorithms have made these analyses feasible over a large corpus, resulting in models that are able to distinguish between objects depending on subtle changes in sketching behavior.
+
+ An increasingly common application of digital sketch recognition does not identify the shapes drawn, but rather characteristics of those who draw them. Kim et al. identified strong correlations between sketching behavior and early cognitive development in infants [28]. Davis et al. [63] and Muller et al. [37] have similarly focused on cognitive decline by analyzing sketches from Clock-Drawing tests [58]. Zham et al. identified the presence of Parkinson's disease through the way a participant drew spirals with a smart-pen [63]. Digital variants of existing neuropsychological tests are numerous, with various proposed systems designed for test automation, diagnosis assistance, or self-administration $\left\lbrack {7,{18},{52},{62}}\right\rbrack$ .
+
+ ### 2.2 Digitized Trail-Making Tests
+
+ Multiple computerized variations of the Trail-Making Test have been developed and studied [22]. Drapeau et al. noted the clear difference in performance between a paper-and-pencil TMT and a digitized version completed with a computer mouse [16]. Jager et al. directly studied differences in performance between paper-and-pencil and computerized neuropsychological tests [15]. Smith et al. explored the possibility of implementing several cognitive testing tools with mobile technology [56]. Prange et al. use a large set of digital sketch recognition features to classify participants as "healthy" or "suspicious" [46], but do not heavily anchor the features in neuropsychological and HCI principles, nor do they make a granular per-line analysis beyond determining whether a line connects two dots. More specifically in the tele-health space, Brehmer et al. contextualized the challenges and considerations of implementing computerized neuropsychological exams at home, where there could be interruptions [9]. More recent work in this space includes that of Lara-Garduno et al., who present a novel touch-based neuropsychological examination based on the TMT [30]. With the advancement and increasing affordability of pen-and-touch technology and mobile computing, interest has turned to digitizing these tests to simulate the original pen-and-paper experience.
+
+ ![01963e6d-b352-7694-9f93-d88f2f170e70_2_975_147_622_597_0.jpg](images/01963e6d-b352-7694-9f93-d88f2f170e70_2_975_147_622_597_0.jpg)
+
+ Figure 4: Test analysis interface of SmartStrokes, demonstrating line deviation and separation of search and travel lines on a completed test
+
+ One of the most recent attempts at digitizing the Trail-Making Test and leveraging machine learning to aid in the diagnosis process comes from the work of Dahmen et al. [13]. This work involved the use of a tablet and stylus to re-create the Trail-Making Test, and a user study consisting of $\left( {\mathrm{N} = {54}}\right)$ older adult participants. Digital sketches were used as training data for two types of prediction: the same participant's performance scores on the Telephone Interview for Cognitive Status (TICS) [8] and the Frontal Assessment Battery (FAB) [17], and the prediction of a participant's condition as "healthy" or "neurologic". Prediction of a participant's condition, using features mostly focused on dwell-time, yielded accuracies ranging from ${44}\%$ to ${67}\%$ . Predictions were done on a per-test basis using feature averages rather than localizing or segmenting lines.
+
+ ### 2.3 Proposed Contribution
+
+ Our proposed contribution presents an interface that collects digital sketch data that is then used to create a classification model to distinguish between MCI and healthy participants on a per-line basis. It builds on existing work from Dahmen et al. [13], whose conclusion states the belief that high-granularity digital sketch data from a digitized version of the TMT could support higher-granularity analysis. Per-line classification offers two advantages: 1) targeting individual lines for classification of MCI could give a more localized assessment of the individual dots that challenged the participant, and 2) a classification model that analyzes sketches on a per-line basis could generalize to more layouts, since, as previously mentioned, the TMT frequently needs a wide variety of dot layouts to avoid the learning effect.
+
+ Further, our proposed contribution uses the scores from the Montreal Cognitive Assessment (MoCA) to detect MCI, whereas the existing work from Dahmen uses the TICS and FAB to assess more advanced dementia. MCI is characterized as being much more subtle in nature, meaning milder cases of MCI frequently result in sketching behavior that deviates only slightly from that of a healthy participant. Our proposed solution achieves an accuracy similar to Dahmen's existing work, with the added advantages of offering classifications on a more granular per-line basis, and of classifying a more subtle degree of cognitive decline.
+
+ ![01963e6d-b352-7694-9f93-d88f2f170e70_3_333_148_1135_661_0.jpg](images/01963e6d-b352-7694-9f93-d88f2f170e70_3_333_148_1135_661_0.jpg)
+
+ Figure 5: Separated travel and search lines. Travel lines are rotated to always face a top-to-bottom orientation.
+
+ ## 3 INTERFACE DESIGN
+
+ SmartStrokes is a digital testing suite focused on re-creating commonly-used Trail-Making Test layouts for use on Microsoft Surface Pro 4 devices. The Universal Windows Platform (UWP) was chosen for reasons that include rapid development of a mobile-style application on Windows devices, ease of exporting pen data for analysis, and its firmware-level digital pen integration with Surface Pen devices, which allows us to easily extract pen pressure to supplement our feature set.
+
+ The system has two simultaneous end users: medical and other personnel who proctor the exam (referred to in this paper as "proctors"), and participants who complete the test (referred to as "participants"). Every proctor is associated with an individual proctor ID, and every test is directly associated with a participant. All participant data, including completed tests, can only be accessed from within the app if the test proctor's username and password are entered at the login screen. Proctors have the option to export digital images of the completed examinations at the conclusion of each test, at which point the proctor should follow proper data anonymization practices, such as ensuring the file names and location do not contain identifiable information.
+
+ A total of 8 separate Trail-Making Test layouts were converted into a digital format, comprising 4 pairs of the A and B variations. These Trail-Making Test layouts are among those generally used by neuropsychologists when conducting these tests in their practice. Dimensions of the white space were cropped to account for the different aspect ratio between the Surface Pro 4 and a regular 8 1/2" x 11" piece of paper, and the layout and size of the dots were scaled accordingly. The test interface itself resembles a paper-and-pencil test as much as possible. This includes extending the drawing canvas across the entire screen, beyond the large black rectangle where the dots are placed; on a real piece of paper some participants may draw outside of the large rectangle despite being advised not to do so. Our intention with this interface is to capture the same types of mistakes a participant might make with a traditional pencil-and-paper modality. SmartStrokes also intentionally offers no visual feedback to participants when the next dot is correctly connected in sequence. An earlier version of the test turned correctly connected dots green, but expert advisors suggested that feedback be given only in the case of a mistake, since that is the only scenario in which a clinician would intervene. Although testing protocol dictates participants should only complete each pair once to avoid the learning effect, SmartStrokes can test each participant as many times as desired, on any layout and in any order, to accommodate any testing procedure.
+
+ Completed tests can be viewed at any time when the application is signed into the proctor's profile. The time-series sketching data allows proctors to review each participant's tests at their leisure; they can also choose to replay the test in real-time to qualitatively review the participant's performance. Additionally, SmartStrokes can display color-coded visualizations of the sketch that include: separation of search and travel actions during the test, pen speed, pressure, location of "hesitation" regions, and line straightness.
+
+ SmartStrokes also assists in data analysis by performing feature calculation on individual tests and outputting the anonymized data into a local Comma-Separated Values (CSV) file. Additionally, the proctor can choose to automatically perform this calculation for every test associated with that proctor. This allows proctors to conduct data analytics by easily importing the CSV for rapid visualization and machine-learning tasks.
+
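+ A minimal sketch of such a per-line export (the column names here are illustrative placeholders, not SmartStrokes' actual schema):
+
+ ```python
+ import csv
+
+ def export_features(rows, path):
+     """Write anonymized per-line feature rows to a local CSV file.
+     `rows` is a list of dicts keyed by the (hypothetical) field names."""
+     fields = ["test_id", "line_index", "line_type",
+               "f8_length", "f13_duration_ms", "hesitation_dist", "line_ratio"]
+     with open(path, "w", newline="") as f:
+         writer = csv.DictWriter(f, fieldnames=fields)
+         writer.writeheader()
+         writer.writerows(rows)
+ ```
+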
+ ## 4 ANALYZING DIGITIZED TRAIL-MAKING TESTS
+
+ One of the significant challenges in analyzing the Trail-Making Test is the proper segmentation of the data. Although the task is designed to result in simple straight lines, the ideal resulting sketch consists of a single line making 25 stops, changing direction at each one. Analysis is further complicated by behaviors arising from cognitive decline, most commonly involving repeated mistakes and prolonged periods of searching for the next dot, hesitation, or doubt.
+
+ Complicated line drawings are frequently segmented in the digital sketch recognition domain in order to properly characterize key elements of the sketch. The most appropriate domain-specific method of line segmentation separates the lines into two categories: search lines and travel lines. Search lines are all lines drawn while the participant is looking for the next dot, and travel lines are the line segments where the participant is actively moving from one dot to the next.
+
+ ![01963e6d-b352-7694-9f93-d88f2f170e70_4_243_151_538_335_0.jpg](images/01963e6d-b352-7694-9f93-d88f2f170e70_4_243_151_538_335_0.jpg)
+
+ Figure 6: A clear example of the search-line difference between an MCI participant and a healthy one. This discrepancy is usually the result of the participant being unable to locate the next dot in the sequence for an extended period of time. Although the discrepancy is obvious in this example, not all MCI participants exhibit this behavior, making diagnosis challenging.
+
+ The following two subsections outline the differences between the two types of lines, the thresholds that delineate the segmentation, and the sketching characteristics we believed would be most relevant to identifying MCI.
+
+ ### 4.1 Search Lines
+
+ According to the protocol of Trail-Making Tests as outlined by the Compendium of Neuropsychological Examinations [59], participants are required to keep their pen on the test at all times, even when not moving between dots. This is done for two reasons. The first is that a participant is less likely to lose their place if they do not lift their pen as they search for the next dot. The second is that this maximizes the data collected, since a participant who leaves their pen on the paper while searching for the next dot almost always produces incidental pen movements as they move their hand to see the rest of the test. This kind of sketching is typically characterized by noisy, erratic movement that tends to meander around the current dot as the participant searches for the next one. This is the kind of line that we identify as a search line.
+
+ We define the beginning of a search line as the instant a participant enters the next correct dot in the TMT sequence. We define the end of the search line as the moment the participant identifies the next dot and moves out of the area of the current dot. We complicate the definition of the end of the search line beyond simply "outside of a dot", because of how participants behave when searching for a dot for a long time; participants who meander around a dot for a long time frequently move the pen inside and outside of the dot's area as they look for the next dot in the sequence. They may also stray away from the dot before identifying the next one. For these reasons we include an additional speed threshold outside of a dot's area as the end of the search line segment.
+
+ Healthy participants typically do not pause for long as they search for the next dot in the sequence, with some participants not pausing at all. Indeed, search lines from typical healthy participants are usually shorter in length and have a single curve clearly detailing the change in direction from the previous dot in the sequence to the next, with very little or no meandering behavior. MCI participants, or any other participants who find the TMT challenging, typically remain in this search state for longer, resulting in longer and more erratic search-line segments. Figure 6 shows such an example, where an MCI participant's search state results in a significantly longer and more meandering search line.
+
+ ![01963e6d-b352-7694-9f93-d88f2f170e70_4_1013_150_545_444_0.jpg](images/01963e6d-b352-7694-9f93-d88f2f170e70_4_1013_150_545_444_0.jpg)
+
+ Figure 7: Four of the color-coded features and sketch properties that SmartStrokes can display. Search and travel lines are also used to segment data for constructing the classification models
+
+ ### 4.2 Travel Lines
+
+ Our parameters for defining travel lines are more straightforward: a travel line begins the moment the participant starts to move with intent to arrive at the next dot, and ends when the next dot in the sequence is reached. When done correctly, the travel line will be a single straight line from the previous dot to the next. We implemented a pen speed threshold to identify this "intent to move", helping us clearly delineate between a search line outside of a dot's area and the moment the participant moves toward the next dot.
+
+ Every pair of consecutive dots in a Trail-Making Test can be connected with a single straight line. For that reason, participants who perform well in the TMT usually produce a series of travel lines that are drawn straighter, without turning to change direction while moving from one dot to the next. Participants who perform poorly sometimes stop in the middle of a travel line to either check their destination again or change direction as they realize they are heading to the wrong dot.
+
+ Sometimes, such participants stop entirely in the middle of travel and begin a similar search behavior to find the next dot. We refer to these mid-travel stops or significant reductions in speed as "hesitation". While not every participant with MCI enters this state, several instances of these hesitation states in one test are likely to point to a poorly-performing examination.
+
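+ Putting the definitions of Sections 4.1 and 4.2 together, a minimal sketch of the two-state segmentation follows (the dot-radius and speed thresholds here are illustrative placeholders, not the tuned values used in SmartStrokes):
+
+ ```python
+ def segment_lines(samples, dots, dot_radius=20.0, speed_threshold=1.0):
+     """Split a pen trace into alternating search/travel segments.
+
+     A search line starts when the pen enters the next correct dot and ends
+     once the pen is outside that dot's area AND moving faster than
+     `speed_threshold` (the "intent to move"); the travel line then runs
+     until the pen reaches the next dot in the sequence. `samples` are
+     (x, y, speed) tuples and `dots` are the dot centers in test order."""
+     def inside(p, c):
+         return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 <= dot_radius ** 2
+
+     segments, current, state, target = [], [], "search", 1
+     for p in samples:
+         current.append(p)
+         if state == "search":
+             if not inside(p, dots[target - 1]) and p[2] > speed_threshold:
+                 segments.append(("search", current))
+                 current, state = [p], "travel"
+         elif inside(p, dots[target]):
+             segments.append(("travel", current))
+             current, state, target = [p], "search", target + 1
+             if target == len(dots):
+                 break
+     return segments
+ ```
+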
+ ## 5 DATA COLLECTION AND ANALYSIS
+
+ This section details the process by which TMT data collection was conducted, and the sketch recognition features that were selected and applied to a machine-learning classification model to detect MCI.
+
+ ### 5.1 Data Collection
+
+ 37 participants were recruited for data collection and classification purposes. Participants were screened and classified as MCI or healthy based on scores from the Montreal Cognitive Assessment (MoCA) [39], which range from 0 to 30. The inclusion criteria for participants were the following:
+
+ - Healthy subjects without MCI: Healthy older adults, normal cognition. MoCA score is 26 or above. Subject group labeled as "Healthy" for model classification purposes.
+
+ - Healthy subjects with MCI: Healthy older adults. MoCA score is between 19 and 26. Subject group labeled as "MCI" for model classification purposes.
+
+ Table 1: Participant demographics for user study. 95% confidence interval for participant age is ${71.43} \pm {2.41}$ , for MoCA scores ${24.54} \pm {0.91}$
+
+ <table><tr><td>Age Range</td><td>Male</td><td>Female</td><td>MCI</td><td>Non-MCI</td><td>Avg. Age</td><td>Avg. MoCA</td></tr><tr><td>55-59</td><td>1</td><td>1</td><td>0</td><td>2</td><td>57</td><td>26.5</td></tr><tr><td>60-64</td><td>1</td><td>5</td><td>1</td><td>5</td><td>63</td><td>26.5</td></tr><tr><td>65-69</td><td>5</td><td>2</td><td>5</td><td>2</td><td>67.7</td><td>23.9</td></tr><tr><td>70-74</td><td>3</td><td>5</td><td>4</td><td>4</td><td>71.5</td><td>25.5</td></tr><tr><td>75-79</td><td>4</td><td>5</td><td>7</td><td>2</td><td>76</td><td>23.3</td></tr><tr><td>80-84</td><td>2</td><td>0</td><td>1</td><td>1</td><td>82</td><td>25.5</td></tr><tr><td>85-89</td><td>2</td><td>1</td><td>3</td><td>0</td><td>85.7</td><td>21.3</td></tr><tr><td>Totals</td><td>18</td><td>19</td><td>21</td><td>16</td><td>71.43</td><td>24.54</td></tr></table>
+
+ - Exclusion criteria: Older adult subjects with a MoCA score below 19; any history of severe medical/neurological/psychiatric disease, including diabetes/hypertension; taking medication primarily targeting the central nervous system; any other condition that, in the investigator's judgment, clearly demonstrates severe cognitive decline. Additional demographic information is available in Table 1.
+
+ All participants were recruited from a known pool of potential candidates, doctor referrals to this study, as well as open calls for participants via email. At the time of recruitment, a pre-screening was conducted to ensure that the participants did not fall outside of the inclusion criteria. We administered a MoCA test to each potential candidate, which was graded afterwards. If the candidate satisfied the inclusion criteria for one of the two possible categories of "with MCI" or "without MCI", a secondary visit was scheduled to start at $8\mathrm{{AM}}$ , and participants were asked to return as well-rested as possible.
+
+ At the time of the data collection procedure, all participants were given two sets of Trail-Making Tests. Each test set consisted of a Trails A variant (numbers) and its accompanying Trails B variant (alternating numbers and letters). Each of the two sets used different standard dot layouts to eliminate a learning effect [35]. All participants were given the same Microsoft Surface Pro 3 device with an accompanying Surface Pen to complete the digital tests. Participants were asked to connect the dots in ascending order as per the instructions detailed in the Compendium for Neuropsychological Examinations [59].
+
+ As SmartStrokes provides minimal feedback on mistakes so as to simulate the paper-and-pencil test-taking experience, the test proctors similarly followed paper-and-pencil procedures, which include notifying the participant whenever a mistake is made; the participant is otherwise left to analyze the layout and correct their mistakes. Participants were instructed to place their pen down on the last correct labeled dot and try again. While we save the lines that were drawn to connect incorrect dots, those lines are made invisible in real-time while taking the test.
+
+ For the purposes of classification we refer to our subjects without MCI as "healthy", meaning subjects in the first category of participants. However, we must highlight that our MCI participants are not considered "unhealthy" by contrast. Indeed, MCI is considered a precursor to severe conditions such as Alzheimer's and dementia, and people with this condition are still considered "healthy" by every metric (see Section 1.1). In order to study the effects of a possible change of sketching behaviors, however, we elected to consider the two possible conditions as "healthy, without MCI", and "healthy, with MCI".
+
+ ### 5.2 Preprocessing
+
+ Several pre-processing steps are conducted on individual completed examinations. Each test's sketch data is separated into travel and search lines according to the description in Section 4. Sketch data is then resampled to a uniform point spacing $S$ using the formula:
+
+ $$
+ S = \frac{\sqrt{{\left( {x}_{m} - {x}_{n}\right) }^{2} + {\left( {y}_{m} - {y}_{n}\right) }^{2}}}{c} \tag{1}
+ $$
+
+ where $S$ is the new spacing between each sample, $\left( {{x}_{m},{y}_{m}}\right)$ is the lower-right corner of the sketch, $\left( {{x}_{n},{y}_{n}}\right)$ is the upper-left corner of the sketch, and $c = {40}$ is an empirically derived constant frequently used in the domain of digital sketch recognition: it gives a distance between samples that balances granularity high enough for feature calculation with few enough samples for computational efficiency.
+
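+ A sketch of this resampling step (our own implementation of equation 1's spacing, not SmartStrokes' code):
+
+ ```python
+ import math
+
+ def resample(points, s):
+     """Resample a polyline to uniformly spaced samples at spacing `s`,
+     where s = (bounding-box diagonal) / c with c = 40 (equation 1)."""
+     out, carry = [points[0]], 0.0
+     for (x0, y0), (x1, y1) in zip(points, points[1:]):
+         seg = math.hypot(x1 - x0, y1 - y0)
+         if seg == 0.0:
+             continue
+         d = s - carry                 # arc length to the next sample
+         while d <= seg:
+             t = d / seg
+             out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
+             d += s
+         carry = (carry + seg) % s     # distance since the last sample
+     return out
+ ```
+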
+ Lastly, we implemented an additional key step in this process by normalizing the rotation of individual travel lines. The chosen features explained later in this section make significant use of sketch direction, either on a per-sample basis or for the entire line. In more typical digital sketch recognition problems, features relating to direction inform a participant's style of drawing, or are directly related to the type of shape that the participant intends to draw. The Trail-Making Test, however, places all dots in pre-arranged locations that strongly influence the direction of a correct line. This would introduce a confounder, since differences in angles or sketch direction would be attributable not to MCI but rather to the layout of the test's dots. We normalize travel lines by rotating every line such that the endpoint of the line is directly underneath the start point, as in the sketch below. This allows us to still leverage direction-related sketch features to calculate characteristics like tremor, changes in direction due to mistakes made, and other types of directionality affected by the participant's performance rather than by the layout of the Trail-Making Test. We are not aware of similar work in constructing a Trail-Making Test classification model that employs this segmented-line direction normalization technique. To account for a physical range-of-motion confounder, participants were observed to ensure that they did not have physical difficulty moving in a particular direction. We observed no such difficulties, nor did any participant report one.
+
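+ A sketch of this rotation normalization (assuming screen coordinates with y increasing downward, so "underneath" means larger y; the code is our illustration):
+
+ ```python
+ import math
+
+ def normalize_rotation(line):
+     """Rotate a travel line about its start point so its endpoint lies
+     directly below the start; direction features then reflect the
+     stroke's shape rather than the dot layout."""
+     (x0, y0), (xn, yn) = line[0], line[-1]
+     # Angle that maps the start-to-end direction onto straight down (+y).
+     theta = math.pi / 2.0 - math.atan2(yn - y0, xn - x0)
+     c, s = math.cos(theta), math.sin(theta)
+     return [(x0 + c * (x - x0) - s * (y - y0),
+              y0 + s * (x - x0) + c * (y - y0)) for x, y in line]
+ ```
+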
+ ### 5.3 Feature Calculation
+
+ #### 5.3.1 Rubine Features
+
+ We implemented a combination of digital sketch recognition features known to yield accurate models in similar research projects. The first set consists of the 13 features introduced by Rubine et al., abbreviated as "Rubine features" [50]. The 13 features were first introduced alongside a recognition technology named GRANDMA (Gesture Recognizers Automated in a Novel Direct Manipulation Architecture), a toolkit that sought to provide end users with the ability to train any gesture for recognition using a click-and-drag interface. The Rubine features have since been implemented in various sketch recognition projects that can gauge not only the type of shape that is drawn, but also the cognitive state of the participant who drew it. Rubine features ${f}_{1}$ and ${f}_{2}$ specify the cosine and sine of the initial direction over the first few samples, usually limited to the first two samples as was done in our implementation. The length and angle of the bounding box diagonal of the entire gesture are features ${f}_{3}$ and ${f}_{4}$ . The distance in pixels between the first and last points is feature ${f}_{5}$ , and the difference between the first and last points is further captured by the cosine ${f}_{6}$ and sine ${f}_{7}$ between the start and end points. The total length of the gesture is ${f}_{8}$ . Three summations over the per-point angles are calculated: ${f}_{9}$ is the total angle traversed over the course of the gesture, ${f}_{10}$ is the sum of the absolute values of the per-point angles, which does not take direction into account, and ${f}_{11}$ is the sum of the squared per-point angles. The square of the maximum speed achieved in the gesture is ${f}_{12}$ , and the last feature, ${f}_{13}$ , is the total duration of the gesture, measured in milliseconds. The calculations for the Rubine features are provided in Table 2.
+
+ Table 2: Rubine features ${f}_{1}$ through ${f}_{13}$ . Let $\Delta {x}_{p} = {x}_{p + 1} - {x}_{p}$ , and $\Delta {y}_{p} = {y}_{p + 1} - {y}_{p}$ , and $\Delta {t}_{p} = {t}_{p + 1} - {t}_{p}$
+
+ <table><tr><td>Rubine Features</td><td/></tr><tr><td>${f}_{1} = \frac{{x}_{2} - {x}_{0}}{\sqrt{{\left( {x}_{2} - {x}_{0}\right) }^{2} + {\left( {y}_{2} - {y}_{0}\right) }^{2}}}$ ${f}_{2} = \frac{{y}_{2} - {y}_{0}}{\sqrt{{\left( {x}_{2} - {x}_{0}\right) }^{2} + {\left( {y}_{2} - {y}_{0}\right) }^{2}}}$ ${f}_{3} = \sqrt{{\left( {x}_{\max } - {x}_{\min }\right) }^{2} + {\left( {y}_{\max } - {y}_{\min }\right) }^{2}}$ ${f}_{4} = \arctan \frac{{y}_{\max } - {y}_{\min }}{{x}_{\max } - {x}_{\min }}$ ${f}_{5} = \sqrt{{\left( {x}_{P - 1} - {x}_{0}\right) }^{2} + {\left( {y}_{P - 1} - {y}_{0}\right) }^{2}}$ ${f}_{6} = \frac{{x}_{P - 1} - {x}_{0}}{{f}_{5}}$ ${f}_{7} = \frac{{y}_{P - 1} - {y}_{0}}{{f}_{5}}$</td><td>${f}_{8} = \mathop{\sum }\limits_{{p = 1}}^{{P - 2}}\sqrt{\Delta {x}_{p}^{2} + \Delta {y}_{p}^{2}}$ ${f}_{9} = \mathop{\sum }\limits_{{p = 1}}^{{P - 2}}{\theta }_{p}$ ${f}_{10} = \mathop{\sum }\limits_{{p = 1}}^{{P - 2}}\left| {\theta }_{p}\right|$ ${f}_{11} = \mathop{\sum }\limits_{{p = 1}}^{{P - 2}}{\theta }_{p}^{2}$ ${f}_{12} = \mathop{\max }\limits_{{p = 0}}^{{P - 2}}\frac{\Delta {x}_{p}^{2} + \Delta {y}_{p}^{2}}{\Delta {t}_{p}^{2}}$ ${f}_{13} = {t}_{P - 1} - {t}_{0}$</td></tr><tr><td colspan="2">${\theta }_{p} = \arctan \frac{\Delta {x}_{p}\Delta {y}_{p - 1} - \Delta {x}_{p - 1}\Delta {y}_{p}}{\Delta {x}_{p}\Delta {x}_{p - 1} + \Delta {y}_{p}\Delta {y}_{p - 1}}$</td></tr></table>
+
+ The Rubine features represent various geometric properties of any given gesture. They can measure speed, curvature, direction at the start and end of the gesture, total time taken, and the properties of the total area (referred to by Rubine as the "bounding box") of any particular gesture. These features offer an alternative to template-matching recognition in that they do not require a point-for-point comparison, but rather are geometric calculations on the gestures themselves. Although they have been used mostly for recognizing gestures, their frequent use in recognizing shapes provides us with an opportunity for analysis of cognitive impairment.
+
+ #### 5.3.2 Fitts' and Steering Law Features
196
+
197
+ We leverage principles from Fitts' Law by calculating that law's Index of Difficulty [32]:
198
+
199
+ $$
200
+ I{D}_{F} = {\log }_{2}\frac{2D}{W} \tag{2}
201
+ $$
202
+
203
+ Fitts' Law was originally conceived as a method to quantify the difficulty of rapid aimed movement [19] and has been widely used in HCI research, particularly for UI navigation tasks [33]. Fitts' Law is rooted in tracing lines across distances between targets, and it translates a task's difficulty into measures of performance, which we believe can be leveraged to help characterize task performance here.
204
+
205
+ A related feature we use is a more recent variant, the Steering Law, which assesses the difficulty of navigating a pointer through a path of a set width [1, 2]. For a generic tunnel $C$ and a width $W\left( s\right)$ along the path, the Steering Law's Index of Difficulty $I{D}_{S}$ is:
206
+
207
+ $$
208
+ I{D}_{S} = {\int }_{C}\frac{ds}{W\left( s\right) } \tag{3}
209
+ $$
210
+
211
+ For our purposes, we use a straight path of length $L$ and a constant width $W$ as defined by Pastel [41], which reduces $I{D}_{S}$ to:
212
+
213
+ $$
214
+ I{D}_{S} = \frac{L}{W} \tag{4}
215
+ $$
216
+
217
+ By using the participant's input lines as the basis for calculating $W$, we essentially create a form of performance index based on the Steering Law. For the Trail-Making Test, a narrower line width $W$ corresponds to a straighter drawn line, which is effectively more difficult to recreate. We integrated this metric as a feature for the classification model to test whether a participant with MCI would create lines with a generally lower $I{D}_{S}$. We also scaled and averaged $I{D}_{F}$ and $I{D}_{S}$ as a separate feature to explore a possible combination of the two; it is reported in Table 3 as fittsSteering.
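+
+ A minimal sketch of these calculations (for illustration; the width estimate shown is one possible choice, since $W$ is defined only as the total "width" of the pen trajectory, and the helper names are our own):
+
+ ```python
+ import math
+
+ def fitts_id(distance, width):
+     # Eq. 2: ID_F = log2(2D / W)
+     return math.log2(2 * distance / width)
+
+ def steering_id(length, width):
+     # Eq. 4: ID_S = L / W for a straight tunnel of constant width
+     return length / width
+
+ def stroke_width(points):
+     # Hypothetical estimate of W: twice the maximum perpendicular
+     # deviation of the trajectory from the chord joining its endpoints.
+     # points: list of (x, y) pairs.
+     (x0, y0), (x1, y1) = points[0], points[-1]
+     chord = math.hypot(x1 - x0, y1 - y0)
+     dev = max(abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / chord
+               for x, y in points)
+     return 2 * dev
+ ```
+
+ The fittsSteering feature can then be formed by rescaling both indices over the dataset and averaging them; the exact scaling is an implementation choice.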
218
+
219
+ ![01963e6d-b352-7694-9f93-d88f2f170e70_6_1015_153_542_514_0.jpg](images/01963e6d-b352-7694-9f93-d88f2f170e70_6_1015_153_542_514_0.jpg)
220
+
221
+ Figure 8: The traditional application of the Steering Law is shown on top, with $W$ and $L$ predetermined. Our use of the Steering Law, on the bottom, creates a simple tunnel with $W$ based on the total "width" of the pen trajectory.
222
+
223
+ #### 5.3.3 Additional Behavioral Features
224
+
225
+ Hesitation is a feature, unique to travel lines, that we briefly discussed earlier. It characterizes the prevalence of stop-and-go motion for participants who start connecting a dot but stop or slow down significantly while inside a travel state. Hesitation begins when the pen slows below an empirically derived speed of 0.4 over five consecutive sampled points, and the calculated feature is the distance the pen travels while it remains in this state. The pen exits this state when at least five consecutive sampled points have a speed above 0.4. This threshold was determined by observing participants during pilot studies, where we sought to capture the most accurate subset of drawn lines during the moments when participants hesitated upon noticing the need to change direction, and it was refined over a series of iterations to most accurately capture the hesitation state. If the pen enters the Hesitation state multiple times within a single travel line, the total distance across all of these states is reported for that travel line.
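+
+ The state bookkeeping can be sketched as follows (for illustration; exactly which samples count toward the distance once the state is entered is an implementation choice):
+
+ ```python
+ import math
+
+ def hesitation_distance(points, threshold=0.4, run=5):
+     # points: list of (x, y, t) samples for one travel line.
+     # Enter Hesitation after `run` consecutive samples below `threshold`;
+     # exit after `run` consecutive samples above it. Returns the total
+     # distance drawn while hesitating, summed over all entries.
+     hesitating, slow, fast, total = False, 0, 0, 0.0
+     for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:]):
+         step = math.hypot(x1 - x0, y1 - y0)
+         speed = step / (t1 - t0) if t1 > t0 else 0.0
+         slow, fast = (slow + 1, 0) if speed < threshold else (0, fast + 1)
+         if not hesitating and slow >= run:
+             hesitating = True
+         elif hesitating and fast >= run:
+             hesitating = False
+         if hesitating:
+             total += step
+     return total
+ ```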
226
+
227
+ Line Ratio is a feature meant to normalize the length of a participant's drawn line. We believe line length is important for understanding how confidently and accurately the dots were connected, since meandering behavior and course correction naturally result in a longer line than one drawn directly from dot to dot. However, a drawn line will also be longer if the correct dots are placed further apart, and the Trail-Making Test is explicitly designed to place dots at a variety of distances from each other to measure a participant's ability to identify dots that might be further away from their immediate location. To take relative line length into account, we divide the length of the theoretical "perfect" straight line between the two dots by the total distance actually drawn from one dot to the next. The closer this ratio is to 1, the closer to "perfect" the drawn line is and the better the participant performed. The formula for Line Ratio ${R}_{ln}$ is found below, where $\left( {{x}_{n},{y}_{n}}\right)$ is the final sampled point of the input line:
228
+
229
+ $$
230
+ {R}_{ln} = \frac{\sqrt{{\left( {x}_{n} - {x}_{0}\right) }^{2} + {\left( {y}_{n} - {y}_{0}\right) }^{2}}}{\mathop{\sum }\limits_{{i = 1}}^{n}\sqrt{{\left( {x}_{i} - {x}_{i - 1}\right) }^{2} + {\left( {y}_{i} - {y}_{i - 1}\right) }^{2}}} \tag{5}
231
+ $$
232
+
233
+ Pen Lift Time is the amount of time during each segment that the participant lifts their pen. Although participants are required to leave their pen on the tablet at all times as per the instructions of the Trail-Making Test, some participants still absent-mindedly lift the pen when searching for a dot or when correcting a mistake. This feature is intended to capture the behavior of both of these scenarios to explore a possible correlation with MCI.
234
+
235
+ Pen Pressure Average and Pen Pressure Standard Deviation are features pertaining to the pressure that a participant places on the pen as they complete the test. We wanted to explore the possibility that a participant places more pressure on the tablet if they are unsure of their trajectory or if the test is difficult for them to complete.
236
+
237
+ We complete the feature set by adding a few sets from existing sketch and gesture recognition literature. We implemented 11 features from Long et al. [31] as a supplement to the Rubine features for general-purpose sketch recognition. Alamudun et al. [3] applied the Rubine and Long features and added two direction-based features to support saccade detection in an eye-tracking task; we believe these can serve general-purpose sketch recognition as well. Finally, Paulson et al. introduced two features, normalized distance between direction extremes (NDDE) and direction change ratio (DCR), as general-purpose sketch recognition features that we also included for this study [42, 43].
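+
+ For reference, NDDE and DCR can be sketched as follows (a paraphrase of Paulson and Hammond's published definitions; the edge-case handling is an assumption): NDDE normalizes the stroke length between the points of highest and lowest direction value by the total length, and DCR divides the maximum direction change by the average direction change.
+
+ ```python
+ import math
+
+ def ndde_dcr(points):
+     # points: list of (x, y) samples for one stroke.
+     dirs = [math.atan2(y1 - y0, x1 - x0)
+             for (x0, y0), (x1, y1) in zip(points, points[1:])]
+     seg = [math.hypot(x1 - x0, y1 - y0)
+            for (x0, y0), (x1, y1) in zip(points, points[1:])]
+     total = sum(seg)
+     a, b = sorted((dirs.index(max(dirs)), dirs.index(min(dirs))))
+     ndde = sum(seg[a:b]) / total if total > 0 else 0.0
+     changes = [abs(d1 - d0) for d0, d1 in zip(dirs, dirs[1:])]
+     dcr = max(changes) / (sum(changes) / len(changes)) if changes else 0.0
+     return ndde, dcr
+ ```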
238
+
239
+ ### 5.4 Model Construction
240
+
241
+ Because Trail-Making Test behavior is characterized by the two distinct actions of travelling to the next dot and searching for the one after it, we decided to produce two separate classification models, both to explore whether either action is more indicative of MCI and to compare their performance. Additionally, because the actions yield different behaviors, not all features were applicable to both types of action. For example, line direction is important for travel lines to identify incorrect line deviation after we normalize travel lines as shown in Fig. 5. However, search lines cannot be normalized, since the direction at entry to and exit from a dot depends heavily on the test layout itself, even for healthy participants. Table 3 lists every feature initially integrated into the feature set, and the model labels next to the feature names indicate which were chosen for each model.
242
+
243
+ Some features were also removed from the search and travel classification models due to a high collinearity value (> 0.90). Fig. 11 shows the collinearity heatmap of the remaining features that were used for both search and travel lines. Further, all feature values were normalized between 0 and 1.
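+
+ A sketch of this filtering step (illustrative, using pandas; the greedy choice of which feature in a correlated pair to drop is an assumption):
+
+ ```python
+ import pandas as pd
+
+ def drop_collinear(df, threshold=0.90):
+     # Drop one feature from every pair whose absolute pairwise
+     # correlation exceeds the threshold (our cutoff is 0.90).
+     corr = df.corr().abs()
+     cols = list(corr.columns)
+     drop = set()
+     for i in range(len(cols)):
+         for j in range(i + 1, len(cols)):
+             if corr.iloc[i, j] > threshold and cols[i] not in drop:
+                 drop.add(cols[j])
+     return df.drop(columns=sorted(drop))
+
+ def min_max_normalize(df):
+     # Scale every remaining feature to [0, 1].
+     return (df - df.min()) / (df.max() - df.min())
+ ```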
244
+
245
+ Every segmented line from the 149 tests is included and is given a label according to the participant's cognitive state (MCI or healthy). 3,490 search lines and an equal number of travel lines were used for their respective classification models.
246
+
247
+ Models were constructed using a 90/10 split for 10-fold cross-validation. The models were trained and evaluated according to the two labels, MCI or healthy, assigned during the screening phase of the study. We used 7 binary classification models to compare performance.
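+
+ The evaluation loop can be sketched with scikit-learn (illustrative; `X` and `y` stand for the per-line feature matrix and binary MCI/healthy labels, and the classifier shown is only one of the models compared):
+
+ ```python
+ from sklearn.ensemble import RandomForestClassifier
+ from sklearn.model_selection import cross_validate
+
+ def evaluate(clf, X, y):
+     # 10-fold cross-validation (each fold is a 90/10 split) reporting
+     # accuracy, F1-score, precision, and recall; y is assumed to be 0/1.
+     scores = cross_validate(clf, X, y, cv=10,
+                             scoring=("accuracy", "f1", "precision", "recall"))
+     return {m: scores["test_" + m].mean()
+             for m in ("accuracy", "f1", "precision", "recall")}
+
+ # e.g., evaluate(RandomForestClassifier(), X_travel, y_travel)
+ ```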
248
+
249
+ ### 5.5 Prediction of MoCA Scores
250
+
251
+ The participant labels of "healthy, without MCI" and "healthy, with MCI" essentially divide participants between two broad categories of MoCA scores. Section 5.1 specifies these categories as a MoCA score of 26 and above for "healthy, without MCI" and between 19 and 26 for "healthy, with MCI". In effect, our classification attempts to predict a wide range of the MoCA scores of the participants. However, we also sought to predict the MoCA score directly, at a more granular level, as part of the data analysis of this study.
252
+
253
+ Our approach to MoCA score prediction is similar to the prediction of the broad categories in that we use the same training and classification features and the same 90/10 split for 10-fold cross-validation. The similarity also extends to the training and classification being performed on individual lines, with the F1-score and accuracy calculated on how closely each line's prediction matches the actual MoCA score associated with that line's entire test. This is distinct from other classification methods that analyze the entire page and create a single prediction. We believe participants of the Trail-Making Test do not perform evenly through the entirety of the test; while they might perform well for a few dots, a single dot might prove difficult to find. Indeed, in our empirical observations, several participants who performed poorly found certain dots easy while finding others significantly more difficult. Our wish to capture this particular type of behavior is the reason behind our use of segmented lines. We believe per-line sketch analysis and prediction might yield novel insight into participants' behavior. Because the original scoring system was conceived at a time when granular sketch analysis was not possible, we believe per-line analysis can provide a more granular and complete picture of a participant's behavior during the test. Prediction was trained and tested on both the Travel and the Search models.
254
+
255
+ ![01963e6d-b352-7694-9f93-d88f2f170e70_7_936_200_639_370_0.jpg](images/01963e6d-b352-7694-9f93-d88f2f170e70_7_936_200_639_370_0.jpg)
256
+
257
+ Figure 9: Box plot of the number of features chosen by Recursive Feature Elimination vs. accuracy for MoCA prediction (search lines).
258
+
259
260
+
261
+ ![01963e6d-b352-7694-9f93-d88f2f170e70_7_944_728_631_367_0.jpg](images/01963e6d-b352-7694-9f93-d88f2f170e70_7_944_728_631_367_0.jpg)
262
+
263
+ Figure 10: Box plot of the number of features chosen by Recursive Feature Elimination vs. accuracy for MoCA prediction (travel lines).
264
+
265
+ We performed recursive feature elimination (RFE) on the Travel and Search models to determine the top-ranking features to include in a logistic regression that predicts the MoCA scores. The ideal number of features for both models was determined to be 6, since that is where accuracy plateaus for both the Search and Travel models. The features selected for each model are listed in Table 4, and the box plots comparing the number of selected features to accuracy are shown in Figures 9 and 10.
266
+
267
+ A logistic regressor with an iteration limit of $n = 100000$ was employed for both models to predict the MoCA scores based on the features selected by the RFE. We then used a repeated K-Fold cross-validator, with a 90/10 split repeated 3 times for a total of 30 comparisons in the calculation of the predictors. To gauge the performance of the predictions, we calculated the average Mean Absolute Error (MAE) and the Root Mean Squared
268
+
269
+ Table 3: Classification features. Model indicates whether the feature was used in the classification model for travel (T) or search (S) lines. Some features were excluded due to high collinearity and/or were inappropriate for a specific model. fittsSteering is a scaled and averaged combination of the features fitts and steering.
270
+
271
+ <table><tr><td>Name</td><td>Model</td><td>Name</td><td>Model</td><td>Name</td><td>Model</td><td>Name</td><td>Model</td></tr><tr><td>rubine1</td><td>T</td><td>rubine10</td><td/><td>avgPressure</td><td>T+S</td><td>openness</td><td>T+S</td></tr><tr><td>rubine2</td><td>T</td><td>rubine11</td><td/><td>stdevPressure</td><td>T+S</td><td>boundBoxArea</td><td/></tr><tr><td>rubine3</td><td>T+S</td><td>rubine12</td><td>T+S</td><td>avgSpeed</td><td>T+S</td><td>logArea</td><td>T+S</td></tr><tr><td>rubine4</td><td>T+S</td><td>rubine13</td><td>T+S</td><td>stdevSpeed</td><td>T+S</td><td>rotRatio</td><td>T+S</td></tr><tr><td>rubine5</td><td>S</td><td>fitts</td><td/><td>aspect</td><td/><td>lengthLog</td><td>T+S</td></tr><tr><td>rubine6</td><td/><td>steering</td><td>T+S</td><td>curviness</td><td>T+S</td><td>aspectLog</td><td>T+S</td></tr><tr><td>rubine7</td><td/><td>lineRatio</td><td>T</td><td>relativeRot</td><td>T+S</td><td>fittsSteering</td><td/></tr><tr><td>rubine8</td><td>S</td><td>hesitation</td><td>T</td><td>densityMetric1</td><td/><td>ndde</td><td>T</td></tr><tr><td>rubine9</td><td>T+S</td><td>penLiftTime</td><td>T+S</td><td>densityMetric2</td><td>T+S</td><td>dcr</td><td/></tr></table>
272
+
273
+ Table 4: Features chosen by Recursive Feature Elimination to directly predict MoCA scores.
274
+
275
+ <table><tr><td>Search Model RFE Features</td><td>Travel Model RFE Features</td></tr><tr><td>aspectLog</td><td>avgPressure</td></tr><tr><td>avgPressure</td><td>avgSpeed</td></tr><tr><td>avgSpeed</td><td>rubine12</td></tr><tr><td>logArea</td><td>rubine13</td></tr><tr><td>rubine11</td><td>stdDevPressure</td></tr><tr><td>stdDevPressure</td><td>steering</td></tr></table>
276
+
277
+ Error (RMSE) of the predicted vs. the actual MoCA test scores for all of the lines. In this prediction algorithm, all of the segmented lines of the test and training participants are labeled with their respective MoCA scores. Although the MoCA is not typically scored on a per-line basis, our experiment is to determine whether such a prediction can be made accurately at per-line granularity. MAE and RMSE were both used to help determine the mean error of the logistic regressions' predictions. The RMSE and MAE of both the Travel and Search prediction algorithms, together with their standard deviations, are shown in Table 5.
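+
+ The scoring pipeline can be sketched as follows (illustrative, using scikit-learn; `X` and `y` are assumed NumPy arrays of per-line features and MoCA labels, and the expected-value step is one way to obtain the fractional predictions discussed in section 6.2):
+
+ ```python
+ import numpy as np
+ from sklearn.feature_selection import RFE
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.metrics import mean_absolute_error, mean_squared_error
+ from sklearn.model_selection import RepeatedKFold
+
+ # Keep the six top-ranked features, as determined by RFE.
+ rfe = RFE(LogisticRegression(max_iter=100000), n_features_to_select=6)
+ X_sel = rfe.fit_transform(X, y)
+
+ # 90/10 splits (10 folds) repeated 3 times = 30 comparisons.
+ maes, rmses = [], []
+ for train, test in RepeatedKFold(n_splits=10, n_repeats=3).split(X_sel):
+     model = LogisticRegression(max_iter=100000).fit(X_sel[train], y[train])
+     # Expected score over class probabilities, allowing fractional values.
+     pred = model.predict_proba(X_sel[test]) @ model.classes_
+     maes.append(mean_absolute_error(y[test], pred))
+     rmses.append(np.sqrt(mean_squared_error(y[test], pred)))
+ print(np.mean(maes), np.mean(rmses))
+ ```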
278
+
279
+ ## 6 RESULTS
280
+
281
+ ### 6.1 Accuracy Metrics of MCI Prediction
282
+
283
+ The main results of model performance are reported in Table 6, which shows how well classification models trained and tested independently on travel lines and search lines are able to identify whether the author of those lines had MCI or was a healthy participant. A total of eight different classification models, listed in the Classifier column of the table, were trained with the features listed in section 5.3. Results are reported for both the search line model and the travel line model, and we report each model's accuracy, F1-score, precision, and recall. For both travel and search lines, Table 6 shows that the best performing models were created using a Random Forest classifier. Additionally, pressure-related features had among the highest importances when analyzing drop-column importances for the Random Forest classifiers.
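+
+ Drop-column importance retrains the model without each feature in turn and records the drop in cross-validated accuracy; a minimal sketch (illustrative; `X` is assumed to be a pandas DataFrame of features):
+
+ ```python
+ from sklearn.base import clone
+ from sklearn.model_selection import cross_val_score
+
+ def drop_column_importance(model, X, y, cv=10):
+     baseline = cross_val_score(clone(model), X, y, cv=cv).mean()
+     return {col: baseline - cross_val_score(clone(model),
+                                             X.drop(columns=[col]),
+                                             y, cv=cv).mean()
+             for col in X.columns}
+ ```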
284
+
285
+ ### 6.2 Accuracy Metrics of MoCA Score Prediction
286
+
287
+ Two sets of metrics can be reported for the MoCA score prediction: the results of Recursive Feature Elimination and how the number of features affects prediction accuracy, and the average MAE and RMSE of the predictions made on the test data. Prediction of the MoCA score, as opposed to the prediction of MCI, is non-binary and closer to a continuous prediction task. For this exercise we allowed fractional values to be predicted, since our chief method of comparison is the calculation of RMSE and MAE; small discrepancies in MoCA scoring due to non-whole predictions would be minor if that were the chief difference between predicted and actual scores. The accuracy metrics are reported in Table 5.
288
+
289
+ Table 5: Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) of predicted MoCA scores.
290
+
291
+ <table><tr><td/><td colspan="2">Travel Lines</td><td colspan="2">Search Lines</td></tr><tr><td>Error Metric</td><td>Average</td><td>Std. Dev</td><td>Average</td><td>Std. Dev</td></tr><tr><td>RMSE</td><td>3.325</td><td>0.132</td><td>3.315</td><td>0.134</td></tr><tr><td>MAE</td><td>2.415</td><td>0.110</td><td>2.406</td><td>0.105</td></tr></table>
292
+
293
+ ### 6.3 Discussion
294
+
295
+ #### 6.3.1 Mild Cognitive Impairment
296
+
297
+ One of the primary challenges in detecting MCI is the inherently subtle nature of its changes. Research such as that of Zhang et al. [64] outlines the difficulty of formalizing behaviors that correlate significantly with the manifestation of MCI in the Trail-Making Test. Depending on the severity of cognitive decline and the many ways MCI affects each participant, some participants may not find the TMT particularly challenging. For that reason, it is generally believed that the TMT, while proven sensitive to MCI in many cases, is not sufficient on its own to reliably detect MCI.
298
+
299
+ The results from the accuracy metrics of the travel and search lines support the notion that detecting subtle levels of MCI is inherently challenging when analyzing only one test. In several of our observed cases, participants whom we classified as just under our MCI threshold based on their MoCA score completed the test in a manner similar to a typical healthy participant. In these cases, a clinical neuropsychologist would continue testing the patient with several other kinds of exams, or use the Trail-Making Test primarily to identify other conditions of cognitive decline. This differs from other digital sketch recognition problems where the exhibited behaviors are not subtle by nature, or where the goal is to differentiate between discrete shapes. Models for those problems typically achieve much higher accuracy and F1-scores (often above 0.9), since the labels are more cleanly delineated.
300
+
301
+ Overall, we believe the results present a meaningful contribution to the analysis of MCI through the TMT, largely due to the analysis and model construction on a per-line basis. Our implementation refined the steps to segment the sketches by integrating speed thresholds to identify when the participant has found the next dot. Whereas previous work in analyzing digitized TMT sketch data tends to average behaviors over an entire test, we sought to leverage the high-granularity nature of sketch data to provide analysis of individual lines. Our contribution also extends to the normalization of line direction and total length to avoid differences between lines that are due to the TMT's individual dot locations. The key is to eliminate potential confounders introduced by the fact that the TMT requires all participants to change line direction and total line length. We chose not to map a "perfect" line for each of the different segments to gauge performance, since Trail-Making Test layouts are numerous and clinicians frequently use modified versions for their own purposes; we sought to create a classification model that works regardless of the dot layout rather than one tied to a specific layout. Ultimately, we sought to explore whether segmented lines could individually be labeled as MCI or healthy with at least similar performance to existing work.
302
+
303
+ ![01963e6d-b352-7694-9f93-d88f2f170e70_9_257_155_1276_642_0.jpg](images/01963e6d-b352-7694-9f93-d88f2f170e70_9_257_155_1276_642_0.jpg)
304
+
305
+ Figure 11: Feature collinearity for both search and travel lines. Features with collinearity above 0.9 were removed from the model.
306
+
307
+ A popular method for creating behavioral models is to leverage deep learning techniques such as neural networks. These techniques are becoming more prevalent due to their ease of deployment on large datasets and higher efficacy in classification. However, we did not believe these techniques to be appropriate for this experiment, for two primary reasons. The first is the necessity of collecting a considerably larger dataset to create a classification algorithm using deep learning techniques; challenges related to the proper collection of data for this experiment are explained in the following section. The second is the lack of explainability in deep learning techniques. While it might be possible to produce a more accurate behavioral model given a considerably larger dataset, we would be unable to explain to a clinician which of a participant's behaviors are responsible for the conclusion that they are likely to have MCI. We believe that behavioral analysis in these types of domains should be usable by domain experts, thus motivating the manual creation of features to explain behavior.
308
+
309
+ We believe these results are of interest to the HCI community, primarily due to the inherent nature of linking a cognitive examination with the analysis afforded by a high-granularity data collection protocol. In particular, the creation of an Index of Performance of sorts for the Steering Law (see Fig. 8) proved useful in both the search and travel line prediction models. For this particular project, this calculation was different enough from Fitts' existing Index of Performance to warrant inclusion as its own feature, and it could potentially be applied to UX research. Indeed, we hope the results and explanations of the TMT allow HCI researchers to see the TMT as a decades-old UI navigation task, and that the same principles and techniques that led to the creation of Fitts' Law and the Steering Law can be applied to a digital TMT.
310
+
311
+ #### 6.3.2 Montreal Cognitive Assessment Score Prediction
312
+
313
+ As previously mentioned, the MAE and RMSE outlined in Table 5 show our predictions for MoCA scores, with both search and travel lines yielding a Mean Absolute Error of around 2.4 on average and a Root Mean Squared Error of around 3.3. Essentially, regardless of whether travel line or search line data and features were used to predict the MoCA score of an individual user, the resulting error remained consistent. Although MoCA scores range from 0 to 30, our study ethics protocol prevented us from conducting research on participants with scores below 19 as previously mentioned, reducing the range of scores available for training and testing to between 19 and 30. The reported error rates implicitly become proportionally larger because of this reduced range, but we believe the reported MAE and RMSE values are still small enough to be worth reporting. Overall, the scores suggest that the feature set presented in this paper can be used to predict MoCA scores based on a participant's digitized TMT sketch data.
314
+
315
+ The challenges of MoCA score prediction were similar to those of predicting MCI, but were exacerbated by the labeling of a single score point to every line. Per-dot line segmentation likely resulted in an unbalanced training set, since a small subset of participants who performed fairly well on the MoCA could skew the training and test sets considerably. This unevenness in the MoCA distribution suggests that a much larger and wider range of MoCA scores is needed for accurate score prediction. As it stands, the Recursive Feature Elimination results for both models, shown in Figures 9 and 10, suggest that even the optimal number of chosen features yields an accuracy of only about 0.35 for the Search model and up to 0.30 for the Travel model. For the current version of the calculated features and those chosen by the RFE, we believe additional features and changes to the existing ones would be necessary to increase the prediction accuracy.
316
+
317
+ At present the results for predicting MoCA scores are inconclusive. The errors reported in Table 5 might suggest an average error of about 10% given that MoCA scores range from 0 to 30 points. The demographic data shown in Table 1 and discussed in section 5.1 show an average MoCA score of 24.54 across all participants, as well as the overall inclusion criterion of MoCA scores of 19 and above. Due to protocol safety limitations, we are at present unable to recruit and test participants with more severe cognitive impairment, i.e., those who score below 19.
318
+
319
+ This is largely due to safety protocols requiring such participants to be accompanied by a guardian or healthcare official, since institutional review boards consider severely cognitively impaired individuals unable to provide informed consent of their own volition. Fortunately, following the safety protocols does not significantly impair the prediction of MCI vs. non-MCI populations, since MCI participants are still able to provide informed consent, but it does reduce the efficacy of predicting MoCA performance as a continuous score. In order to create a more accurate MoCA predictor, we will require a larger corpus of data with a more even distribution of MoCA scores, in line with established normative data. At present, the constraints of a collection protocol designed around MCI prediction somewhat limited the performance of a MoCA score predictor.
+
+ Table 6: Classification metrics. Acc is accuracy, F1 is F1-score, Prec is precision. For both the travel lines and search lines models, n=3,490.
320
+
321
+ <table><tr><td/><td colspan="3">Travel Lines</td><td colspan="5">Search Lines</td></tr><tr><td>Classifier</td><td>Acc</td><td>F1</td><td>Prec</td><td>Recall</td><td>Acc</td><td>F1</td><td>Prec</td><td>Recall</td></tr><tr><td>Majority</td><td>0.51</td><td>0.51</td><td>0.50</td><td>0.50</td><td>0.53</td><td>0.53</td><td>0.52</td><td>0.52</td></tr><tr><td>Gaussian Naive-Bayes</td><td>0.47</td><td>0.36</td><td>0.60</td><td>0.53</td><td>0.47</td><td>0.38</td><td>0.58</td><td>0.53</td></tr><tr><td>Decision Tree</td><td>0.59</td><td>0.59</td><td>0.58</td><td>0.58</td><td>0.60</td><td>0.60</td><td>0.59</td><td>0.59</td></tr><tr><td>K-Nearest Neighbor</td><td>0.60</td><td>0.60</td><td>0.59</td><td>0.59</td><td>0.58</td><td>0.58</td><td>0.57</td><td>0.57</td></tr><tr><td>Linear Regression</td><td>0.65</td><td>0.64</td><td>0.65</td><td>0.63</td><td>0.62</td><td>0.59</td><td>0.61</td><td>0.58</td></tr><tr><td>SVM</td><td>0.65</td><td>0.63</td><td>0.66</td><td>0.62</td><td>0.63</td><td>0.61</td><td>0.64</td><td>0.60</td></tr><tr><td>LDA</td><td>0.65</td><td>0.63</td><td>0.65</td><td>0.62</td><td>0.62</td><td>0.60</td><td>0.61</td><td>0.59</td></tr><tr><td>Random Forest*</td><td>0.67</td><td>0.73</td><td>0.67</td><td>0.80</td><td>0.66</td><td>0.72</td><td>0.68</td><td>0.77</td></tr></table>
322
+
323
+ ## 7 LIMITATIONS AND FUTURE WORK
324
+
325
+ One of the main challenges in building an accurate predictive behavioral model is the creation of a new dataset for that specific purpose. Despite the fact that the Trail-Making Test has been in use for several decades, the granularity of digital data and the requirement of a digital pen necessitated the creation of a new dataset. The prevalence of different Trail-Making Test layouts, and the small differences in protocol that vary from clinician to clinician, also necessitated a unified testing protocol. Accompanying this challenge is the laborious recruitment process: although the task is simple, the administration of the MoCA and the proper administration of the Trail-Making Test resulted in a slower rate of data collection than is typical of sketch recognition tasks.
326
+
327
+ Currently, the normative data for the Trail-Making Test, as found in Tombaugh's stratified normative data for paper-and-pencil Trail-Making Tests [61], divides the age range into 11 distinct categories. Our data covers the latter 7 bins, with participants ranging from 57 to 86 years of age. Since the focus of this experiment is identifying MCI among middle-aged and older individuals, the study focused on that age range. Future studies will continue the data collection process to build a more complete normative body of data across all age ranges. These might reveal differing behaviors between patients with MCI from different age ranges, but a solid body of data from those age ranges is necessary for verification. We also aim to further expand on localizing areas that were difficult for participants with MCI, reporting these lines at the UI level in real time and evaluating a clinician's diagnostic experience with such an automated tool.
328
+
329
+ Although the system has two primary end users, the scope of this paper focused on the participant. We aim to investigate the user experience of proctors deploying the system and using its predictions in their diagnoses. Specifically, we aim to gather feedback on the experience of reporting the system's findings, since proctors have access to a wide variety of sketch visualizations as mentioned in Section 3. Reporting predictions of MCI and non-MCI participants, in addition to highlighting hesitation and line deviation and visually color-coding search and travel lines, offers proctors a large range of information, and future work will investigate its usefulness and overall user experience. Additionally, we would like to use other peripherals, such as heart rate sensors and integrated eye-tracking solutions, to create an even more feature-rich dataset that enhances participant behavior analysis.
330
+
331
+ Also of note is the fact that a protocol for collecting data on an MCI population inherently removes a full range of ages and conditions from the normative data. This impacts the ability of a predictive system to make an ML-based prediction of the actual MoCA score. The prediction of the MoCA scores yielded relatively small error percentages, but when taking into account the reduced range of MoCA scores available for testing and training, we conclude that the results for direct prediction of exact MoCA scores are inconclusive despite being somewhat promising. We are considerably more confident about the binary classification between non-MCI and MCI populations, precisely because the range of data and the collection protocol yielded the most appropriate data for that kind of classification. Future work will require a wider range of participants, with normative data closer to that of Tombaugh [61]. We are confident that digital sketch data from digital TMTs can be used to make much more accurate predictions of participants' MoCA scores if such data were available.
332
+
333
+ Subtle changes in behavior due to Mild Cognitive Impairment continue to present significant challenges in identifying the earliest possible signs of conditions that may lead to dementia and Alzheimer's disease. Existing efforts highlight the difficulty of finding the nuances of behavioral change present in a Trail-Making Test. However, with significant improvements over previous efforts, we present a solution suggesting that individual lines, regardless of their direction, can distinguish between MCI and healthy participants with noticeably higher levels of accuracy. We look forward to employing additional preprocessing methods, features, and a larger digital sketch dataset to further improve on this effort. We believe sketch data from the Trail-Making Test still has the potential to yield insights into behavioral changes that are yet to be discovered.
334
+
335
+ ## REFERENCES
336
+
337
+ [1] J. Accot and S. Zhai. Performance evaluation of input devices in trajectory-based tasks: an application of the steering law. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems, pp. 466-472, 1999.
338
+
339
+ [2] J. Accot and S. Zhai. Scale effects in steering law tasks. In Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 1-8, 2001.
340
+
341
+ [3] F. T. Alamudun, T. Hammond, H.-J. Yoon, and G. D. Tourassi. Geometry and gesture-based features from saccadic eye-movement as a
342
+
343
+ biometric in radiology. In International Conference on Augmented Cognition, pp. 123-138. Springer, 2017.
344
+
345
+ [4] M. S. Albert, S. T. DeKosky, D. Dickson, B. Dubois, H. H. Feldman,
346
+
347
+ N. C. Fox, A. Gamst, D. M. Holtzman, W. J. Jagust, R. C. Petersen, et al. The diagnosis of mild cognitive impairment due to alzheimer's disease: recommendations from the national institute on aging-alzheimer's association workgroups on diagnostic guidelines for alzheimer's disease. Alzheimer's & dementia, 7(3):270-279, 2011.
348
+
349
+ [5] J. A. Arnett and S. S. Labovitz. Effect of physical layout in performance of the trail making test. Psychological Assessment, 7(2):220, 1995.
350
+
351
+ [6] L. Ashendorf, A. L. Jefferson, M. K. O'Connor, C. Chaisson, R. C. Green, and R. A. Stern. Trail making test errors in normal aging, mild cognitive impairment, and dementia. Archives of Clinical Neuropsychology, 23(2):129-137, 2008.
352
+
353
+ [7] R. M. Bauer, G. L. Iverson, A. N. Cernich, L. M. Binder, R. M. Ruff, and R. I. Naugle. Computerized neuropsychological assessment devices: joint position paper of the american academy of clinical neuropsychology and the national academy of neuropsychology. Archives of Clinical Neuropsychology, 27(3):362-373, 2012.
354
+
355
+ [8] J. Brandt, M. Spencer, M. Folstein, et al. The telephone interview for cognitive status. Neuropsychiatry Neuropsychol Behav Neurol, 1(2):111-117, 1988.
356
+
357
+ [9] M. Brehmer, J. McGrenere, C. Tang, and C. Jacova. Investigating interruptions in the context of computerised cognitive testing for older adults. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 2649-2658, 2012.
358
+
359
+ [10] C. Calhoun, T. F. Stahovich, T. Kurtoglu, and L. B. Kara. Recognizing multi-stroke symbols. In AAAI Spring Symposium on Sketch Understanding, pp. 15-23, 2002.
360
+
361
+ [11] G. Chiti and L. Pantoni. Use of montreal cognitive assessment in patients with stroke. Stroke, 45(10):3135-3140, 2014.
362
+
363
+ [12] S. F. Crowe. The differential contribution of mental tracking, cognitive flexibility, visual search, and motor speed to performance on parts a and b of the trail making test. Journal of clinical psychology, 54(5):585- 591, 1998.
364
+
365
+ [13] J. Dahmen, D. Cook, R. Fellows, and M. Schmitter-Edgecombe. An analysis of a digital variant of the trail making test using machine learning techniques. Technology and Health Care, 25(2):251-264, 2017.
366
+
367
+ [14] J. Dalrymple-Alford, M. MacAskill, C. Nakas, L. Livingston, C. Graham, G. Crucian, T. Melzer, J. Kirwan, R. Keenan, S. Wells, et al. The moca: well-suited screen for cognitive impairment in parkinson disease. Neurology, 75(19):1717-1725, 2010.
368
+
369
+ [15] C. A. de Jager, A.-C. M. Schrijnemaekers, T. E. Honey, and M. M. Budge. Detection of mci in the clinic: evaluation of the sensitivity and specificity of a computerised test battery, the hopkins verbal learning test and the mmse. Age and ageing, 38(4):455-460, 2009.
370
+
371
+ [16] C. E. Drapeau, M. Bastien-Toniazzo, C. Rous, and M. Carlier. Nonequivalence of computerized and paper-and-pencil versions of trail making test. Perceptual and motor skills, 104(3):785-791, 2007.
372
+
373
+ [17] B. Dubois, A. Slachevsky, I. Litvan, and B. Pillon. The fab: a frontal assessment battery at bedside. Neurology, 55(11):1621-1626, 2000.
374
+
375
+ [18] R. P. Fellows, J. Dahmen, D. Cook, and M. Schmitter-Edgecombe. Multicomponent analysis of a digital trail making test. The Clinical Neuropsychologist, 31(1):154-167, 2017.
376
+
377
+ [19] P. M. Fitts. The information capacity of the human motor system in controlling the amplitude of movement. Journal of experimental psychology, 47(6):381, 1954.
378
+
379
+ [20] C. Flicker, S. H. Ferris, and B. Reisberg. A two-year longitudinal study of cognitive function in normal aging and alzheimer's disease. Journal of Geriatric Psychiatry and Neurology, 6(2):84-96, 1993.
380
+
381
+ [21] O. Godefroy, A. Fickl, M. Roussel, C. Auribault, J. M. Bugnicourt, C. Lamy, S. Canaple, and G. Petitnicolas. Is the montreal cognitive assessment superior to the mini-mental state examination to detect poststroke cognitive impairment? a study with neuropsychological evaluation. Stroke, 42(6):1712-1716, 2011.
382
+
383
+ [22] C. T. Gualtieri. Dementia screening using computerized tests. Journal of Insurance Medicine, 36:213-227, 2004.
384
+
385
+ [23] J. Hobson. The montreal cognitive assessment (moca). Occupational
386
+
387
+ Medicine, 65(9):764-765, 2015.
388
+
389
+ [24] C. P. Hughes, L. Berg, W. Danziger, L. A. Coben, and R. L. Martin. A new clinical scale for the staging of dementia. The British journal of psychiatry, 140(6):566-572, 1982.
390
+
391
+ [25] M. Janssen, M. Bosch, P. Koopmans, and R. Kessels. Validity of the montreal cognitive assessment and the hiv dementia scale in the assessment of cognitive impairment in hiv-1 infected patients. Journal of neurovirology, 21(4):383-390, 2015.
392
+
393
+ [26] P. Julayanont and Z. S. Nasreddine. Montreal cognitive assessment (moca): concept and clinical review. In Cognitive screening instruments, pp. 139-195. Springer, 2017.
394
+
395
+ [27] L. B. Kara and T. F. Stahovich. An image-based, trainable symbol recognizer for hand-drawn sketches. Computers & Graphics, 29(4):501- 517, 2005.
396
+
397
+ [28] H.-h. Kim, P. Taele, J. Seo, J. Liew, and T. Hammond. Easysketch2: A novel sketch-based interface for improving children's fine motor skills and school readiness. In Proceedings of the Joint Symposium on Computational Aesthetics and Sketch Based Interfaces and Modeling and Non-Photorealistic Animation and Rendering, pp. 69-78. Eurographics Association, 2016.
398
+
399
+ [29] K. B. Kortte, M. D. Horner, and W. K. Windham. The trail making test, part b: cognitive flexibility or ability to maintain set? Applied neuropsychology, 9(2):106-109, 2002.
400
+
401
+ [30] R. Lara-Garduno, T. Igarashi, and T. Hammond. 3d-trail-making test: A touch-tablet cognitive test to support intelligent behavioral recognition. In Graphics Interface, pp. 10-1, 2019.
402
+
403
+ [31] A. C. Long Jr, J. A. Landay, L. A. Rowe, and J. Michiels. Visual similarity of pen gestures. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems, pp. 360-367, 2000.
404
+
405
+ [32] I. S. MacKenzie. Fitts' law as a research and design tool in human-computer interaction. Human-computer interaction, 7(1):91-139, 1992.
406
+
407
+ [33] I. S. MacKenzie and W. Buxton. Extending fitts' law to two-dimensional tasks. In Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 219-226, 1992.
408
+
409
+ [34] D. M. Masur, M. Sliwinski, R. Lipton, A. Blau, and H. Crystal. Neuropsychological prediction of dementia and the absence of dementia in healthy elderly persons. Neurology, 44(8):1427-1427, 1994.
410
+
411
+ [35] T. Miner and F. R. Ferraro. The role of speed of processing, inhibitory mechanisms, and presentation order in trail-making test performance. Brain and cognition, 38(2):246-253, 1998.
412
+
413
+ [36] MoCA Montreal Cognitive Assessment. Montreal cognitive assessment, 2019. [Online; accessed May 18, 2019].
414
+
415
+ [37] S. Müller, O. Preische, P. Heymann, U. Elbing, and C. Laske. Increased diagnostic accuracy of digital vs. conventional clock drawing test for discrimination of patients in the early course of alzheimer's disease from cognitively healthy individuals. Frontiers in aging neuroscience, 9:101, 2017.
416
+
417
+ [38] Z. S. Nasreddine and B. B. Patel. Validation of montreal cognitive assessment, moca, alternate french versions. Canadian Journal of Neurological Sciences, 43(5):665-671, 2016.
418
+
419
+ [39] Z. S. Nasreddine, N. A. Phillips, V. Bédirian, S. Charbonneau, V. Whitehead, I. Collin, J. L. Cummings, and H. Chertkow. The montreal cognitive assessment, moca: a brief screening tool for mild cognitive impairment. Journal of the American Geriatrics Society, 53(4):695- 699, 2005.
420
+
421
+ [40] J. J. O'Rourke, L. J. Beglinger, M. M. Smith, J. Mills, D. J. Moser, K. C. Rowe, D. R. Langbehn, K. Duff, J. C. Stout, D. L. Harrington, et al. The trail making test in prodromal huntington disease: contributions of disease progression to test performance. Journal of clinical and experimental neuropsychology, 33(5):567-579, 2011.
422
+
423
+ [41] R. Pastel. Measuring the difficulty of steering through corners. In Proceedings of the SIGCHI conference on Human Factors in computing systems, pp. 1087-1096, 2006.
424
+
425
+ [42] B. Paulson and T. Hammond. A system for recognizing and beautifying low-level sketch shapes using ndde and dcr. In 20th Annual ACM Symposium on User Interface Software and Technology Posters, 2007.
426
+
427
+ [43] B. Paulson and T. Hammond. Paleosketch: accurate primitive sketch recognition and beautification. In Proceedings of the 13th international conference on Intelligent user interfaces, pp. 1-10, 2008.
428
+
429
+ [44] B. Paulson, P. Rajan, P. Davalos, R. Gutierrez-Osuna, and T. Hammond. What!?! no rubine features?: Using geometric-based features to produce normalized confidence values for sketch recognition. In HCC
430
+
431
+ Workshop: Sketch Tools for Diagramming, pp. 57-63, 2008.
432
+
433
+ [45] S. T. Pendlebury, F. C. Cuthbertson, S. J. Welch, Z. Mehta, and P. M. Rothwell. Underestimation of cognitive impairment by mini-mental state examination versus the montreal cognitive assessment in patients with transient ischemic attack and stroke: a population-based study. Stroke, 41(6):1290-1293, 2010.
434
+
435
+ [46] A. Prange and D. Sonntag. Assessing cognitive test performance using automatic digital pen features analysis. In Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization, pp. 33-43, 2021.
436
+
437
+ [47] B. Reisberg, S. H. Ferris, A. Kluger, E. Franssen, J. Wegiel, and M. J. De Leon. Mild cognitive impairment (mci): a historical perspective. International Psychogeriatrics, 20(1):18-31, 2008.
438
+
439
+ [48] B. Reisberg and S. Gauthier. Current evidence for subjective cognitive impairment (sci) as the pre-mild cognitive impairment (mci) stage of subsequently manifest alzheimer's disease. International psychogeriatrics, 20(1):1-16, 2008.
440
+
441
+ [49] D. R. Roalf, P. J. Moberg, S. X. Xie, D. A. Wolk, S. T. Moelter, and S. E. Arnold. Comparative accuracies of two common screening instruments for classification of alzheimer's disease, mild cognitive impairment, and healthy aging. Alzheimer's & Dementia, 9(5):529-537, 2013.
442
+
443
+ [50] D. Rubine. Specifying gestures by example. ACM SIGGRAPH computer graphics, 25(4):329-337, 1991.
444
+
445
+ [51] D. Rubine. Combining gestures and direct manipulation. In Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 659-660, 1992.
446
+
447
+ [52] M. N. Sabbagh, M. Boada, S. Borson, M. Chilukuri, P. Doraiswamy, B. Dubois, J. Ingram, A. Iwata, A. Porsteinsson, K. Possin, et al. Rationale for early diagnosis of mild cognitive impairment (mci) supported by emerging digital technologies. The Journal of Prevention of Alzheimer's Disease, pp. 1-7, 2020.
448
+
449
+ [53] I. Sánchez-Cubillo, J. Periáñez, D. Adrover-Roig, J. Rodríguez-Sánchez, M. Rios-Lago, J. Tirapu, and F. Barcelo. Construct validity of the trail making test: role of task-switching, working memory, inhibition/interference control, and visuomotor abilities. Journal of the International Neuropsychological Society: JINS, 15(3):438, 2009.
450
+
451
+ [54] G. Santangelo, M. Siciliano, R. Pedone, C. Vitale, F. Falco, R. Bisogno, P. Siano, P. Barone, D. Grossi, F. Santangelo, et al. Normative data for the montreal cognitive assessment in an italian population sample. Neurological Sciences, 36(4):585-591, 2015.
452
+
453
+ [55] T. M. Sezgin, T. Stahovich, and R. Davis. Sketch based interfaces: early processing for sketch understanding. In ACM SIGGRAPH 2007 courses, pp. 37-es. ACM, 2007.
454
+
455
+ [56] S.-R. Smith. Mobile context-aware cognitive testing system. In Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services, pp. 1-4, 2017.
456
+
457
+ [57] T. Smith, N. Gildeh, and C. Holmes. The montreal cognitive assessment: validity and utility in a memory clinic setting. The Canadian Journal of Psychiatry, 52(5):329-332, 2007.
458
+
459
+ [58] W. Souillard-Mandar, R. Davis, C. Rudin, R. Au, D. J. Libon, R. Swenson, C. C. Price, M. Lamar, and D. L. Penney. Learning classification models of cognitive conditions from subtle behaviors in the digital clock drawing test. Machine learning, 102(3):393-441, 2016.
460
+
461
+ [59] E. Strauss, E. M. Sherman, O. Spreen, et al. A compendium of neuropsychological tests: Administration, norms, and commentary. Oxford University Press, 2006.
462
+
463
+ [60] J. Toglia, K. A. Fitzgerald, M. W. O'Dell, A. R. Mastrogiovanni, and C. D. Lin. The mini-mental state examination and montreal cognitive assessment in persons with mild subacute stroke: relationship to functional outcome. Archives of physical medicine and rehabilitation, 92(5):792-798, 2011.
464
+
465
+ [61] T. N. Tombaugh. Trail making test a and b: normative data stratified by age and education. Archives of clinical neuropsychology, 19(2):203- 214, 2004.
466
+
467
+ [62] J. B. Tornatore, E. Hill, J. A. Laboff, and M. E. McGann. Self-administered screening for mild cognitive impairment: initial validation of a computerized test battery. The Journal of neuropsychiatry and
468
+
469
+ clinical neurosciences, 17(1):98-105, 2005.
470
+
471
+ [63] P. Zham, D. K. Kumar, P. Dabnichki, S. Poosapadi Arjunan, and S. Raghav. Distinguishing different stages of parkinson's disease using composite index of speed and pen-pressure of sketching a spiral. Frontiers in neurology, 8:435, 2017.
472
+
473
+ [64] Y. Zhang, B. Han, P. Verhaeghen, and L.-G. Nilsson. Executive functioning in older adults with mild cognitive impairment: Mci has effects on planning, but not on inhibition. Aging, Neuropsychology, and Cognition, 14(6):557-570, 2007.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/D9M6uwZGC5y/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,464 @@
1
+ Raniero Lara-Garduno*
2
+
3
+ Texas A&M University
4
+
5
+ § DETECTING MILD COGNITIVE IMPAIRMENT THROUGH DIGITIZED TRAIL-MAKING TEST INTERFACE
6
+
7
+ Yajun Jia ${}^{ \dagger }$ Nicolaas E. Deutz ${}^{ \ddagger }$ Marielle Engelen ${}^{§}$ Nancy Leslie ${}^{¶}$ Texas A&M University Texas A&M University Texas A&M University
8
+
9
+ Tracy Hammond ${}^{\parallel }$
10
+
11
+ Texas A&M University
12
+
13
+ § ABSTRACT
14
+
15
+ With the number of Alzheimer's patients reaching 5 million in 2014 according to the U.S. Center for Disease Control and Prevention, increasing emphasis has been placed on identifying and understanding its precursor condition, Mild Cognitive Impairment (MCI). MCI is characterized by subtle but abnormal cognitive decline and is challenging to detect without formal testing. Neuropsychologists use paper-and-pencil tests such as the Trail-Making Test (TMT) for diagnosis, and ongoing research places importance on high-granularity sketch data from digital TMTs. We present SmartStrokes, a digital TMT app designed to simulate the paper-and-pencil testing experience on a tablet with a stylus. Our contribution brings the principles of digital sketch recognition and Human-Computer Interaction (HCI) to an existing neuropsychological test, outlining the creation of a pair of classification models that identify MCI on an individual segmented line basis. Such a per-line classification method could provide localized sketching behavior indicative of MCI. We also present an interface for the digital TMT and a refinement of line segmentation algorithms from previous research to better distinguish between the actions that a participant takes when completing the exam.
16
+
17
+ Index Terms: Applied computing-Health Informatics; Human-centered computing-Human computer interaction; Human-centered computing-Tablet computers
18
+
19
+ § 1 INTRODUCTION
20
+
21
+ The U.S. Center for Disease Control and Prevention reported 5 million Alzheimer's patients in 2014, and expects that number to more than double to 13.9 million by 2060. Due to advancements in interventions aimed at mild-to-moderate cases of Alzheimer's disease, neuropsychologists have placed an increasing emphasis on early detection of Mild Cognitive Impairment (MCI) to better preserve quality of life [4, 20, 34]. A clinical neuropsychologist typically conducts paper-and-pencil cognitive examinations on a patient to help detect MCI. This process is historically laborious, requires multiple rounds of testing, and frequently requires non-standardized subjective analysis of a patient's subtle behavioral patterns. Digitizing these clinical examinations, the Trail-Making Test among them, has allowed researchers to attempt to aid the diagnosis process by employing machine learning for behavioral analysis. Existing work in this space has not yet fully leveraged recognition techniques used in digital sketch recognition, particularly research that links sketching with cognition. In particular, the application of HCI principles to detect MCI via digitized testing interfaces in the context of neuropsychology is a topic we believe has not yet been fully explored. The contribution presented in this paper is to integrate HCI and digital sketch recognition into the domain of neuropsychology to deliver more granular recognition on a digitized TMT. We analyze and classify individual test segments rather than following the more traditional method of one determination for an entire test. We also discuss the limitations and potential avenues for future research that surfaced during the completion of this research.
22
+
23
24
+
25
+ Figure 1: A sample completed test in our SmartStrokes app. The interface is designed to be as close as possible to an actual paper-and-pencil test.
26
+
27
+ § 1.1 MILD COGNITIVE IMPAIRMENT
28
+
29
+ The characteristics of MCI were initially established as part of the Global Deterioration Scale (GDS) [47], defining it as a syndrome in which an individual's cognitive decline is greater than expected for their age [24, 48]. It is considered a precursor to more severe cognitive decline that may advance into dementia, with Alzheimer's in particular being likely. The existence of MCI in itself, however, is not indicative that cognition will necessarily decline further, as the cognition of many MCI patients never develops into dementia. Additionally, unlike these more severe forms of cognitive decline, MCI does not severely impact one's daily quality of life [64] and can thus be challenging to diagnose. This means that the signs are often subtle and can be easily dismissed as expected decline in executive function for an individual's age. MCI itself is characterized as not having a significant impact on daily activities, and may not manifest in a noticeable way for years, making it difficult to definitively diagnose and track.
30
+
31
+ *e-mail: raniero@LGinbox.com
32
+
33
+ ${}^{ \dagger }$ e-mail: jia560@tamu.edu
34
+
35
+ ${}^{ \ddagger }$ e-mail: nep.deutz@tamu.edu
36
+
37
+ ${}^{§}$ e-mail: mpkj.engelen@ctral.org
38
+
39
+ ${}^{¶}$ e-mail: nleslie.phd@gmail.com
40
+
41
+ ${}^{\parallel }$ e-mail: hammond@tamu.edu
42
+
43
44
+
45
+ Figure 2: Sample of traditional paper-and-pencil versions of Trail-Making Test B.
46
+
47
+ In cases where MCI does worsen, the characteristics of severe cognitive decline can vary depending on the patient's background and genetic conditions. Reisberg et al. [47] specify the emergence of "behavioral disturbances", neurological abnormalities, electrophysiological changes, motor deficits, balance and coordination deficits, and general deficits in activities of daily living. With an increase in life expectancy correlating with a rise in the prevalence of dementia and Alzheimer's disease, research attention has turned to the successful identification of MCI and how existing tools can be improved to assist.
48
+
49
+ § 1.2 TRAIL-MAKING TEST
50
+
51
+ Clinicians have historically relied on paper-and-pencil neuropsychological examinations as one of the primary methods to diagnose MCI. These typically involve a series of simple tasks for an individual to complete, and several decades of research have shown them to be sensitive to the same cognitive functions affected by MCI [6]. We focus on the Trail-Making Test, a connect-the-dots task that tests executive function and active memory. Initially conceived as a test of general intelligence, the TMT is known to be sensitive to cognitive decline and possible early signs of dementia. Currently, the Trail-Making Test is widely used in neuropsychologists' test batteries to assess various signs of cognitive decline, including MCI [59]. The switching between numbers and letters found in the TMT-B relies on frontal lobe function [12, 26, 29, 40, 53], and is one of the primary reasons for the test's sensitivity to MCI.
52
+
53
+ The test consists of two separate connect-the-dots tasks. The participant is handed a piece of paper with a series of labeled dots printed on it, along with a pen or pencil with which to connect them. The A variant of the test consists of connecting dots in ascending numerical order (1, 2, 3, and so on), while the B variant requires connecting dots alternating between numbers and letters in ascending order (1, A, 2, B, and so on). The participant is typically asked to complete the tests as a pair, starting with variant A and immediately followed by variant B. Multiple layouts of these tests exist and are used when a clinician wishes to test the participant more than once, since a different arrangement of labeled dots is necessary to avoid the learning effect. Dot layout has been observed to directly affect time to completion in healthy populations [5]. Participants are asked not to lift their pen or pencil whenever possible, except when they connect to the wrong dot and must return to the previous dot.
54
+
55
56
+
57
+ Figure 3: The Montreal Cognitive Assessment (MoCA). Image courtesy of mocatest.org [36]
58
+
59
+ Assessment of Trail-Making Tests is primarily done in two ways: comparing the test score with established normative data, and qualitatively observing a participant's behavior as the test is completed. The test score is calculated as the test's time to completion, rounded to the nearest whole number. The fact that the score is reported as a single numerical value necessitated the qualitative observation, and over the decades clinicians have devised multiple methods for assessing a participant's performance as they complete the test. Colored pencils, video recordings, and observing behaviors from sitting posture to the way the patient holds their pen are just some of the qualitative measures used by clinicians.
60
+
61
+ These measures highlight the notion that the behavior a participant exhibits during the test is just as important, if not more so, than the single time-to-completion score. The subtle nature of MCI, however, has historically meant that clinicians rely on their own expertise and experience for qualitative observations. Recent advancements in digital sketching technology have made it feasible for these tests to be assessed with much higher granularity than in previous decades, but research aiming to capitalize on this feasibility is limited.
62
+
63
+ § 1.3 THE MONTREAL COGNITIVE ASSESSMENT
64
+
65
+ The Montreal Cognitive Assessment (MoCA) is among the most widely used assessment protocols for gauging an individual's cognitive function. It consists of various short tasks, both written and verbal, aimed at testing different functions of a person's cognition. It is frequently administered as a triage tool to help determine whether a patient requires further diagnosis and possible treatment, and is also frequently used to determine whether patients have symptoms of MCI [39]. Originally developed in 1995 by Ziad Nasreddine [23], it has since been the subject of various validation studies [26, 57]. Normative data for the MoCA has been collected and analyzed for patients of various populations [38, 54], diseases [11, 45], cognitive states [11, 25], and post-trauma conditions [21, 60]. The primary conditions it has been validated for include MCI and the dementias of Alzheimer's disease and Parkinson's disease [14, 39, 57], and it has been shown to be more sensitive to MCI-related decline than other examinations such as the Mini-Mental State Examination (MMSE) [49]. Hobson describes that the MoCA can assess cognitive domains including but not limited to "Visuospatial/Executive, Naming, Memory, Attention, Language, Abstraction, Delayed Recall and Orientation (to time and place)" [23].
66
+
67
+ The MoCA is frequently used in tandem with other neuropsychological examinations, such as the Trail-Making Test, to ensure that its results are consistent across instruments. In effect, one might expect performance on the MoCA and the TMT to be correlated, such that exceptionally good or poor performance on one test is likely to accompany similar performance on the other. Indeed, a brief TMT-B appears in the MoCA as one of its tasks [26], and both tests draw on the same frontal lobe function.
68
+
69
+ § 2 RELATED WORK
70
+
71
+ § 2.1 COGNITION IN DIGITAL SKETCH RECOGNITION
72
+
73
+ One of the prevalent methods of digital sketch recognition is the analysis of sketches as "gestures" comprising geometric properties, including but not limited to line length, speed, acceleration, line straightness, and various trigonometric properties of line strokes. Individual features were calculated in early efforts by Rubine [50, 51], and later expanded by Stahovich et al. [10, 27, 55], Long et al. [31], Paulson et al. [43, 44], and Alamudun et al. [3]. Digital sketch recognition initially leveraged machine learning to give developers tools to recognize simple geometric shapes; shape recognition then expanded to alphabets, to scaffolded recognition that identifies components of complex composite shapes, and to entire sketches. Machine learning algorithms have made these analyses feasible over large corpora, resulting in models that are able to distinguish between objects depending on subtle changes in sketching behavior.
74
+
75
+ An increasingly common application of digital sketch recognition identifies not the shapes drawn, but characteristics of those who draw them. Kim et al. identified strong correlations between sketching behavior and early cognitive development in infants [28]. Davis et al. [63] and Muller et al. [37] similarly focused on cognitive decline by analyzing sketches from Clock-Drawing Tests [58]. Zham et al. identified the presence of Parkinson's disease through the way a participant drew spirals with a smart-pen [63]. Digital variants of existing neuropsychological tests are numerous, with various proposed systems designed for test automation, diagnosis assistance, or self-administration [7, 18, 52, 62].
76
+
77
+ § 2.2 DIGITIZED TRAIL-MAKING TESTS
78
+
79
+ Multiple computerized variations of the Trail-Making Test have been developed and studied [22]; with the advancement and increasing affordability of pen and touch technology and mobile computing, interest has turned to digitizing these tests to simulate the original pen-and-paper experience. Drapeau et al. noted the clear difference in performance between a paper-and-pencil TMT and a digitized version completed with a computer mouse [16]. Jager et al. directly studied differences in performance between paper-and-pencil and computerized neuropsychological tests [15]. Smith et al. explored the possibility of implementing several cognitive testing tools with mobile technology [56]. Prange et al. use a large number of digital sketch recognition features to classify participants as "healthy" or "suspicious" [46], but their features are not heavily anchored in neuropsychological and HCI principles, nor do they make a granular per-line analysis beyond determining whether a line connects two dots. More specifically in the tele-health space, Brehmer et al. contextualized the challenges and considerations of administering computerized neuropsychological exams at home, where there may be interruptions [9]. More novel research in this area includes the work of Lara-Garduno et al., who present a novel touch-based neuropsychological examination based on the TMT [30].
80
+
81
82
+
83
+ Figure 4: Test analysis interface of SmartStrokes, demonstrating line deviation and separation of search and travel lines on a completed test.
84
+
85
+ One of the most recent attempts at digitizing the Trail-Making Test and leveraging machine learning to aid in the diagnosis process comes from the work of Dahmen et al. [13]. This work used a tablet and stylus to re-create the Trail-Making Test in a user study of N = 54 older adult participants. Digital sketches were used as training data for two types of prediction: predicting the same participant's scores on the Telephone Interview for Cognitive Status (TICS) [8] and the Frontal Assessment Battery (FAB) [17], and predicting a participant's condition as "healthy" or "neurologic". Prediction of a participant's condition using features mostly focused on dwell-time yielded accuracies ranging from 44% to 67%. Predictions were made on a per-test basis using feature averages rather than localizing or segmenting lines.
86
+
87
+ § 2.3 PROPOSED CONTRIBUTION
88
+
89
+ Our proposed contribution presents an interface that collects digital sketch data that is then used to create a classification model distinguishing between MCI and healthy participants on a per-line basis. It builds on existing work from Dahmen et al. [13], whose conclusion states the belief that high-granularity digital sketch data from a digitized version of the TMT could support higher-granularity analysis. Per-line classification offers two advantages: 1) targeting individual lines for MCI classification can give a more localized assessment of the individual dots that challenged the participant, and 2) a classification model that analyzes sketches on a per-line basis can generalize to more layouts, since, as previously mentioned, the TMT requires a wide variety of dot layouts to avoid the learning effect.
90
+
91
+ Further, our proposed contribution uses scores from the Montreal Cognitive Assessment (MoCA) to detect MCI, whereas the existing work from Dahmen uses the TICS and FAB to assess more advanced dementia. MCI is characterized as much more subtle in nature, meaning milder cases of MCI frequently result in sketching behavior that only slightly deviates from that of a healthy participant. Our proposed solution achieves an accuracy similar to Dahmen's existing work, with the added advantages of offering classifications on a more granular per-line basis and classifying for a more subtle degree of cognitive decline.
92
+
93
94
+
95
+ Figure 5: Separated travel and search lines. Travel lines are rotated to always face a top-to-bottom orientation.
96
+
97
+ § 3 INTERFACE DESIGN
98
+
99
+ SmartStrokes is a digital testing suite focused on re-creating commonly used Trail-Making Test layouts for Microsoft Surface Pro 4 devices. The Universal Windows Platform (UWP) was chosen for reasons that include rapid development of a mobile-style application on Windows devices, ease of exporting pen data for analysis, and its firmware-level integration with the Surface Pen, which allows us to easily extract pen pressure to supplement our feature set.
100
+
101
+ The system has two simultaneous end users: medical and other personnel who proctor the exam (referred to in this paper as "proctors"), and participants who complete the test (referred to as "participants"). Each proctor is assigned an individual proctor ID, and every test is directly associated with a participant. All participant data, including completed tests, can only be accessed from within the app after the test proctor's username and password are entered at the login screen. Proctors have the option to export digital images of the completed examinations at the conclusion of each test, at which point the proctor should follow proper data anonymization practices, such as ensuring the file names and location do not contain identifiable information.
102
+
103
+ A total of 8 separate Trail-Making Test layouts were converted into a digital format, comprising 4 pairs of the A and B variations. These layouts are among those generally used by neuropsychologists when conducting these tests in their practice. The white space was cropped to account for the different aspect ratio between the Surface Pro 4 and a regular 8 1/2" x 11" piece of paper, and the layout and size of the dots were scaled accordingly. The test interface itself resembles a paper-and-pen test as much as possible. This includes extending the drawing canvas across the entire screen, beyond the large black rectangle where the dots are placed; on a real piece of paper some participants may draw outside of the large rectangle despite being advised not to do so. Our intention with this interface is to capture the same types of mistakes a participant might make with a traditional pencil-and-paper modality. SmartStrokes also intentionally offers no visual feedback to participants when the next dot is connected in sequence. An earlier version of the test turned correctly connected dots green, but expert advisors suggested that feedback be given only in the case of a mistake, since that is the only scenario in which a clinician would intervene. Although testing protocol dictates participants should complete each pair only once to avoid the learning effect, SmartStrokes can test each participant as many times as desired, on any layout and in any order, to accommodate any testing procedure.
104
+
105
+ Completed tests can be viewed at any time when the application is signed into the proctor's profile. The time-series sketching data allows proctors to review each participant's tests at their leisure; they can also choose to replay the test in real time to qualitatively review the participant's performance. Additionally, SmartStrokes can display color-coded visualizations of the sketch that include: separation of travel and search actions during the test, pen speed, pressure, location of "hesitation" regions, and line straightness.
106
+
107
+ SmartStrokes also assists in data analysis by performing feature calculation on individual tests and outputting the anonymized data into a local comma-separated values (CSV) file. Additionally, the proctor can choose to automatically perform this calculation for every test associated with that proctor. This allows proctors to conduct data analytics by easily importing the CSV for rapid visualization and machine-learning tasks.
108
+
109
+ § 4 ANALYZING DIGITIZED TRAIL-MAKING TESTS
110
+
111
+ One of the significant challenges in analyzing the Trail-Making Test is the proper segmentation of the data. Although the task is designed to result in simple straight lines, the ideal resulting sketch consists of a single line making 25 stops and changing direction at each one. Analysis is further complicated by behaviors arising from cognitive decline, most commonly involving repeated mistakes and prolonged periods of searching for the next dot, hesitation, or doubt.
112
+
113
+ Complicated line drawings are frequently segmented in the digital sketch recognition domain in order to properly characterize key elements of the sketch. The most appropriate domain-specific method of line segmentation separates the lines into two categories: search lines and travel lines. Search lines are all lines drawn while the participant is looking for the next dot, and travel lines are the line segments where the participant is actively moving from one dot to the next.
114
+
115
116
+
117
+ Figure 6: A clear example of the search line difference between an MCI participant and a healthy one. This discrepancy is usually the result of the participant being unable to locate the next dot in the sequence for an extended period of time. Although the discrepancy is obvious in this example, not all MCI participants exhibit this behavior, making diagnosis challenging.
118
+
119
+ The following two subsections outline the differences between the two types of lines, the thresholds that delimit the segmentation, and the sketching characteristics we believed would be most relevant to identifying MCI.
120
+
121
+ § 4.1 SEARCH LINES
122
+
123
+ According to the protocol of Trail-Making Tests as outlined by the Compendium of Neuropsychological Examinations [59], participants are required to keep their pen on the test at all times, even when not moving between dots. This is done for two reasons. The first is that a participant is less likely to lose their place if they do not lift their pen as they search for their next dot. The second is that this maximizes the data collected, since a participant who leaves their pen on the paper while searching almost always produces incidental pen movements as they move their hand to see the rest of the test. This kind of sketching is typically characterized by noisy, erratic movement that tends to meander around the current dot as the participant searches for the next one. This is the kind of line that we identify as a search line.
124
+
125
+ We define the beginning of a search line as the instant a participant enters the next correct dot in the TMT sequence. We define the end of the search line as the moment the participant identifies the next dot and moves out of the area of the current dot. We complicate the definition of the end of the search line beyond simply "outside of a dot" because of how participants behave when searching for a dot for a long time: participants who meander around a dot frequently move the pen inside and outside of the dot's area as they look for the next dot in the sequence, and may also stray away from the dot before identifying the next one. For these reasons we additionally require a speed threshold to be exceeded outside of a dot's area before ending the search line segment.
126
+
127
+ Healthy participants typically do not pause for long as they search for the next dot in the sequence, with some participants not pausing at all. Indeed, search lines from typical healthy participants are usually shorter in length and have a single curve clearly detailing the change in direction from the previous dot in the sequence to the next, with little or no meandering behavior. MCI participants, or any other participants who find the TMT challenging, typically remain in this search state for longer, resulting in longer and more erratic search line segments. Figure 6 shows such an example, where an MCI participant's search state results in a significantly longer and more meandering search line.
128
+
129
130
+
131
+ Figure 7: Four of the color-coded features and sketch properties that SmartStrokes can display. Search and travel lines are also used to segment data for constructing the classification models.
132
+
133
+ § 4.2 TRAVEL LINES
134
+
135
+ Our parameters for defining travel lines are more straightforward: a travel line begins the moment the participant starts to move with intent to arrive at the next dot, and ends when the next dot in the sequence is reached. When done correctly, the travel line is a single straight line from the previous dot to the next. We implemented a pen speed threshold to identify this "intent to move" and clearly delineate between a search line outside of a dot's area and the moment the participant moves toward the next dot.
136
+
137
+ Every pair of consecutive dots in a Trail-Making Test can be connected with a single straight line. For that reason, participants who perform well in the TMT usually draw a series of travel lines that are straight and do not change direction while moving from one dot to the next. Participants who perform poorly sometimes stop in the middle of a travel line to either check their destination again or change direction as they realize they are heading to the wrong dot.
138
+
139
+ Sometimes, such participants stop entirely in the middle of travel and begin a search behavior similar to that described above to find the next dot. We refer to these mid-travel stops or significant reductions in speed as "hesitation". While not every participant with MCI enters this state, several instances of hesitation in one test are likely to point to a poorly-performing examination. A simplified sketch of the search/travel segmentation follows.
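+
+ Putting the two definitions together, the following simplified Python sketch illustrates how per-sample search/travel labeling could work; the dot-radius test and the speed threshold are illustrative placeholders rather than SmartStrokes' exact values:
+
+ ```python
+ import math
+
+ SPEED_THRESHOLD = 0.4  # hypothetical "intent to move" threshold (cf. Section 5.3.3)
+
+ def inside_dot(pt, dots, radius):
+     """True if the sample point lies within any dot's area."""
+     return any(math.hypot(pt[0] - dx, pt[1] - dy) <= radius for dx, dy in dots)
+
+ def label_samples(samples, dots, radius):
+     """Label each (x, y, t) sample as 'search' or 'travel'."""
+     labels, state = ["search"], "search"
+     for prev, cur in zip(samples, samples[1:]):
+         dt = max(cur[2] - prev[2], 1e-6)
+         speed = math.hypot(cur[0] - prev[0], cur[1] - prev[1]) / dt
+         if state == "search" and not inside_dot(cur, dots, radius) \
+                 and speed > SPEED_THRESHOLD:
+             state = "travel"   # pen left the dot area moving with intent
+         elif state == "travel" and inside_dot(cur, dots, radius):
+             state = "search"   # reached the next dot; searching resumes
+         labels.append(state)
+     return labels
+ ```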
140
+
141
+ § 5 DATA COLLECTION AND ANALYSIS
142
+
143
+ This section details the process by which TMT data collection was conducted, as well as the sketch recognition features that were selected and applied to a machine-learning classification model to detect MCI.
144
+
145
+ § 5.1 DATA COLLECTION
146
+
147
+ 37 participants were recruited for data collection and classification purposes. Participants were screened and classified as MCI or healthy based on scores from the Montreal Cognitive Assessment (MoCA) [39], with MoCA scores ranging from 0 to 30. The inclusion criteria for participants were the following:
148
+
149
+ * Healthy subjects without MCI: Healthy older adults, normal cognition. MoCA score is 26 or above. Subject group labeled as "Healthy" for model classification purposes.
150
+
151
+ * Healthy subjects with MCI: Healthy older adults. MoCA score is between 19 and 26. Subject group labeled as "MCI" for model classification purposes.
152
+
153
+ Table 1: Participant demographics for the user study. The 95% confidence interval for participant age is $71.43 \pm 2.41$; for MoCA scores, $24.54 \pm 0.91$.
154
+
155
+ | Age Range | Male | Female | MCI | Non-MCI | Avg. Age | Avg. MoCA |
+ |-----------|------|--------|-----|---------|----------|-----------|
+ | 55-59 | 1 | 1 | 0 | 2 | 57 | 26.5 |
+ | 60-64 | 1 | 5 | 1 | 5 | 63 | 26.5 |
+ | 65-69 | 5 | 2 | 5 | 2 | 67.7 | 23.9 |
+ | 70-74 | 3 | 5 | 4 | 4 | 71.5 | 25.5 |
+ | 75-79 | 4 | 5 | 7 | 2 | 76 | 23.3 |
+ | 80-84 | 2 | 0 | 1 | 1 | 82 | 25.5 |
+ | 85-89 | 2 | 1 | 3 | 0 | 85.7 | 21.3 |
+ | Totals | 18 | 19 | 21 | 16 | 71.43 | 24.54 |
184
+
185
+ * Exclusion criteria: Older adult subjects with a MoCA score below 19; any history of severe medical/neurological/psychiatric disease, including diabetes/hypertension; taking medication primarily targeting the central nervous system; any other condition that, in the investigator's judgment, clearly demonstrates severe cognitive decline. Additional demographic information is available in Table 1.
186
+
187
+ All participants were recruited from a known pool of potential candidates, doctor referrals to this study, and open calls for participants via email. At the time of recruitment a pre-screening was conducted to ensure that participants fell within the inclusion criteria. We administered a MoCA test to each potential candidate and graded it afterwards. If the candidate satisfied the inclusion criteria for one of the two possible categories of "with MCI" or "without MCI", a secondary visit was scheduled to start at 8 AM, and participants were asked to return as well-rested as possible.
188
+
189
+ At the time of the data collection procedure, all participants were given two sets of Trail-Making Tests. Each test set consisted of a Trails A variant (numbers) and its accompanying Trails B variant (alternating numbers and letters). Each of the two sets used a different standard dot layout to eliminate a learning effect [35]. All participants were given the same Microsoft Surface Pro 3 device with accompanying Surface Pen to complete the digital tests. Participants were asked to connect the dots in ascending order as per the instructions detailed in the Compendium of Neuropsychological Examinations [59].
190
+
191
+ As SmartStrokes provides minimal feedback on mistakes so as to simulate the paper-and-pencil test-taking experience, the test proctors similarly followed paper-and-pencil procedures, which include notifying the participant whenever a mistake is made; the participant is otherwise left to analyze the layout and correct mistakes themselves. Participants were instructed to place their pen down on the last correct labeled dot and try again. While we save the lines that were drawn to connect incorrect dots, those lines are made invisible in real time while taking the test.
192
+
193
+ For the purposes of classification we refer to our subjects without MCI as "healthy", meaning subjects in the first category of participants. However, we must highlight that our MCI participants are not considered "unhealthy" by contrast. Indeed, MCI is considered a precursor to severe conditions such as Alzheimer's disease and dementia, and people with this condition are still considered "healthy" by every other metric (see Section 1.1). In order to study the effects of possible changes in sketching behavior, however, we elected to consider the two possible conditions as "healthy, without MCI" and "healthy, with MCI".
194
+
195
+ § 5.2 PREPROCESSING
196
+
197
+ Several pre-processing steps are conducted on each completed examination. Each test's sketch data is separated into travel and search lines according to the description in Section 4. Sketch data is then resampled to a uniform spacing $S$ using the formula:
198
+
199
+ $$
200
+ S = \frac{\sqrt{(x_m - x_n)^2 + (y_m - y_n)^2}}{c} \tag{1}
201
+ $$
202
+
203
+ where $S$ is the new spacing between each sample, $(x_m, y_m)$ is the lower-right corner of the sketch, $(x_n, y_n)$ is the upper-left corner of the sketch, and $c = 40$ is an empirically derived constant frequently used in the domain of digital sketch recognition; it yields a distance between samples that balances granularity high enough for feature calculation with few enough samples for computational efficiency.
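+
+ As an illustration, Equation 1 reduces to a few lines of Python; extracting the sketch corners via min/max over the samples is our assumption:
+
+ ```python
+ import math
+
+ def resample_spacing(points, c=40):
+     """Compute the uniform resampling spacing S of Equation 1."""
+     xs = [p[0] for p in points]
+     ys = [p[1] for p in points]
+     # Diagonal of the sketch's bounding box, divided by the constant c
+     diagonal = math.hypot(max(xs) - min(xs), max(ys) - min(ys))
+     return diagonal / c
+ ```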
204
+
205
+ Lastly, we implemented an additional key step in this process by normalizing individual line rotation for travel lines. The chosen features explained later in this section make significant use of sketch direction, either on a per-sample basis or over the entire line. In more typical digital sketch recognition problems, features relating to direction inform a participant's style of drawing, or are directly related to the type of shape that the participant intends to draw. The Trail-Making Test, however, places all dots in pre-arranged locations that strongly influence the direction of a correct line. This would introduce a confounder, since differences between angles or sketch direction would be attributed not to MCI but rather to the layout of the test's dots. We normalize travel lines by rotating every line such that the endpoint of the line is directly underneath the start point. This allows us to still leverage direction-related sketch features to calculate characteristics like tremor, changes in direction due to mistakes made, and other types of directionality affected by the participant's performance rather than by the layout of the Trail-Making Test. We are not aware of similar work in constructing a Trail-Making Test classification model that employs this segmented line direction normalization technique. To account for a physical range-of-motion confounder, participants were observed to ensure that they did not have physical difficulty moving in a particular direction; we observed no such difficulties, nor did any participant report one.
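+
+ A minimal sketch of this normalization, assuming screen coordinates where +y points down and a line is a list of (x, y) points:
+
+ ```python
+ import math
+
+ def normalize_rotation(line):
+     """Rotate a travel line about its start point so the endpoint lies
+     directly below the start (top-to-bottom orientation)."""
+     (x0, y0), (xn, yn) = line[0], line[-1]
+     # Angle of the start->end vector, and the rotation mapping it onto +y
+     theta = math.atan2(yn - y0, xn - x0)
+     rot = math.pi / 2 - theta
+     cos_r, sin_r = math.cos(rot), math.sin(rot)
+     out = []
+     for x, y in line:
+         dx, dy = x - x0, y - y0
+         out.append((x0 + dx * cos_r - dy * sin_r,
+                     y0 + dx * sin_r + dy * cos_r))
+     return out
+ ```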
206
+
207
+ § 5.3 FEATURE CALCULATION
208
+
209
+ § 5.3.1 RUBINE FEATURES
210
+
211
+ We implemented a combination of digital sketch recognition features known to yield accurate models in similar research projects. The first set consists of the 13 features introduced by Rubine, abbreviated as "Rubine features" [50]. The 13 features were first introduced alongside a recognition technology named GRANDMA (Gesture Recognizers Automated in a Novel Direct Manipulation Architecture), a toolkit that sought to provide end users with the ability to train any gesture for recognition using a click-and-drag interface. The Rubine features have since been implemented in various sketch recognition projects that can gauge not only the type of shape that is drawn, but also the cognitive state of the participant who drew it. Rubine features $f_1$ and $f_2$ are the cosine and sine of the initial direction, computed over the first few samples (limited to the first two samples in our implementation). Features $f_3$ and $f_4$ are the length and angle of the diagonal of the gesture's bounding box. Feature $f_5$ is the distance in pixels between the first and the last point, and features $f_6$ and $f_7$ are the cosine and sine of the angle between the start and end points. The total length of the gesture is $f_8$. Three summations over the per-point turning angle $\theta_p$ are calculated: $f_9$ is the total angle traversed over the course of the gesture, $f_{10}$ is the sum of the absolute values of the angles, which does not take direction into account, and $f_{11}$ is the sum of the squared angles. The square of the maximum speed achieved in the gesture is $f_{12}$, and the last feature, $f_{13}$, is the total duration of the gesture, measured in milliseconds. The calculations for the Rubine features are provided in Table 2.
212
+
213
+ Table 2: Rubine features $f_1$ through $f_{13}$. Let $\Delta x_p = x_{p+1} - x_p$, $\Delta y_p = y_{p+1} - y_p$, and $\Delta t_p = t_{p+1} - t_p$.
214
+
215
+ $$ f_1 = \frac{x_2 - x_0}{\sqrt{(x_2 - x_0)^2 + (y_2 - y_0)^2}} \qquad f_2 = \frac{y_2 - y_0}{\sqrt{(x_2 - x_0)^2 + (y_2 - y_0)^2}} $$
+
+ $$ f_3 = \sqrt{(x_{max} - x_{min})^2 + (y_{max} - y_{min})^2} \qquad f_4 = \arctan\frac{y_{max} - y_{min}}{x_{max} - x_{min}} $$
+
+ $$ f_5 = \sqrt{(x_{P-1} - x_0)^2 + (y_{P-1} - y_0)^2} \qquad f_6 = \frac{x_{P-1} - x_0}{f_5} \qquad f_7 = \frac{y_{P-1} - y_0}{f_5} $$
+
+ $$ f_8 = \sum_{p=0}^{P-2}\sqrt{\Delta x_p^2 + \Delta y_p^2} \qquad f_9 = \sum_{p=1}^{P-2}\theta_p \qquad f_{10} = \sum_{p=1}^{P-2}\left|\theta_p\right| \qquad f_{11} = \sum_{p=1}^{P-2}\theta_p^2 $$
+
+ $$ f_{12} = \max_{p=0}^{P-2}\frac{\Delta x_p^2 + \Delta y_p^2}{\Delta t_p^2} \qquad f_{13} = t_{P-1} - t_0 $$
+
+ $$ \text{where}\quad \theta_p = \arctan\frac{\Delta x_p \Delta y_{p-1} - \Delta x_{p-1}\Delta y_p}{\Delta x_p \Delta x_{p-1} + \Delta y_p \Delta y_{p-1}} $$
226
+
227
+ The Rubine features represent various geometric properties of any given gesture. They can measure speed, curvature, direction at the start and end of the gesture, total time taken, and the properties of the total area (referred to by Rubine as the "bounding box") of any particular gesture. These features offer an alternative to template-matching recognition in that they do not require a point-for-point comparison, but rather are geometric calculations over the gestures themselves. Although these have been used mostly for recognizing gestures, their frequent use in recognizing shapes provides us with an opportunity for analysis of cognitive impairment.
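+
+ For illustration, a sketch of three representative Rubine features over samples of the form (x, y, t); atan2 is used for $\theta_p$ for numerical robustness:
+
+ ```python
+ import math
+
+ def rubine_subset(samples):
+     """Compute f8 (total length), f9 (total angle traversed),
+     and f13 (duration) for a list of (x, y, t) samples."""
+     dx = [b[0] - a[0] for a, b in zip(samples, samples[1:])]
+     dy = [b[1] - a[1] for a, b in zip(samples, samples[1:])]
+     f8 = sum(math.hypot(x, y) for x, y in zip(dx, dy))
+     f9 = sum(
+         math.atan2(dx[p] * dy[p - 1] - dx[p - 1] * dy[p],
+                    dx[p] * dx[p - 1] + dy[p] * dy[p - 1])
+         for p in range(1, len(dx))
+     )
+     f13 = samples[-1][2] - samples[0][2]
+     return f8, f9, f13
+ ```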
228
+
229
+ § 5.3.2 FITTS' AND STEERING LAW FEATURES
230
+
231
+ We leverage principles from Fitts' Law by calculating that law's Index of Difficulty [32]:
232
+
233
+ $$
234
+ ID_F = \log_2\frac{2D}{W} \tag{2}
235
+ $$
236
+
237
+ Fitts' Law was originally conceived as a method to quantify the complexity of a pointing task [19] and has been widely used in HCI research, particularly for UI navigation tasks [33]. Fitts' Law is rooted in moving across distances between targets and translates a task's parameters into measures of performance, which we believe can be leveraged to help characterize task performance here.
238
+
239
+ A related feature we use is the more recent variant, the Steering Law. The Steering Law assesses the difficulty of navigating a pointer through a path of a set width [1, 2]. For a generic tunnel $C$ with width $W(s)$ along the path, the Steering Law's Index of Difficulty $ID_S$ is:
240
+
241
+ $$
242
+ ID_S = \int_C \frac{ds}{W(s)} \tag{3}
243
+ $$
244
+
245
+ For our purposes, we use a straight path of length $L$ and a constant width $W$ as defined by Pastel et al. [41], which reduces $ID_S$ to:
246
+
247
+ $$
248
+ ID_S = \frac{L}{W} \tag{4}
249
+ $$
250
+
251
+ By using the participant's input lines as the basis for calculating $W$, we essentially create a form of performance index from the Steering Law. For the Trail-Making Test, a narrower width $W$ corresponds to a straighter drawn line, which is effectively more difficult to recreate. We integrated this metric as a feature in the classification model to test whether a participant with MCI would create lines with a generally lower $ID_S$. We also scaled and averaged $ID_F$ and $ID_S$ as a separate feature to explore a possible combination of the two; it is reported in Table 3 as fittsSteering.
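+
+ A sketch of this Steering-Law-based index under our assumptions: the tunnel width $W$ is estimated as the span of perpendicular deviations of the drawn line from the ideal straight line, and $ID_S = L/W$ as in Equation 4. The width estimate is illustrative, not necessarily SmartStrokes' exact computation:
+
+ ```python
+ import math
+
+ def steering_id(line):
+     """Estimate ID_S = L / W from a drawn line (list of (x, y) points)."""
+     (x0, y0), (xn, yn) = line[0], line[-1]
+     L = math.hypot(xn - x0, yn - y0)
+     if L == 0:
+         return 0.0
+     # Signed perpendicular distance of each point from the ideal line
+     devs = [((xn - x0) * (y - y0) - (yn - y0) * (x - x0)) / L
+             for x, y in line]
+     W = max(devs) - min(devs)
+     return L / max(W, 1e-6)  # guard against perfectly straight (zero-width) lines
+ ```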
252
+
253
254
+
255
+ Figure 8: The traditional application of the Steering Law is on top, with $W$ and $L$ being predetermined. Our use of Steering Law, on bottom, creates a simple tunnel with $W$ based on the total "width" of the pen trajectory.
256
+
257
+ § 5.3.3 ADDITIONAL BEHAVIORAL FEATURES
258
+
259
+ Hesitation is a feature that we briefly discussed in Section 4.2 as unique to travel lines. It characterizes the prevalence of stop-and-go motion for participants who start connecting a dot but stop or slow down significantly while inside a travel state. Hesitation begins when the pen slows below an empirically derived speed of 0.4 over five consecutive sampled points, and the calculated feature is the distance the pen traveled while it remained in this state. The pen exits this state when at least five consecutive sampled points have a speed above 0.4. This threshold was determined by observing participants during pilot studies, where we sought to capture the most accurate subset of drawn lines during the time that participants hesitated upon noticing the need to change direction, and it was refined over a series of iterations. If the pen enters the hesitation state multiple times within a single travel line, the total distance across all of these states is reported for that travel line.
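+
+ A simplified sketch of the hesitation feature: drawn distance is accumulated while the pen is in the hesitation state, and the state is entered or exited after five consecutive samples below or above the 0.4 speed threshold. The bookkeeping here is simplified relative to the full implementation:
+
+ ```python
+ import math
+
+ def hesitation_distance(samples, thresh=0.4, run_len=5):
+     """Total distance drawn while hesitating, over (x, y, t) samples."""
+     total, hesitating, run = 0.0, False, 0
+     prev = samples[0]
+     for cur in samples[1:]:
+         d = math.hypot(cur[0] - prev[0], cur[1] - prev[1])
+         dt = max(cur[2] - prev[2], 1e-6)
+         slow = (d / dt) < thresh
+         # Count consecutive samples that disagree with the current state
+         run = run + 1 if slow != hesitating else 0
+         if run >= run_len:            # five in a row flip the state
+             hesitating, run = not hesitating, 0
+         if hesitating:
+             total += d                # accumulate distance while hesitating
+         prev = cur
+     return total
+ ```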
260
+
261
+ Line Ratio is a feature meant to normalize the length of a participant's drawn line. We believe the length of the line is important for understanding how confidently and accurately the dots were connected, since meandering behavior and course correction naturally result in a longer line than one drawn directly from dot to dot. However, a drawn line will also be longer if the correct dots are placed further apart; the Trail-Making Test is explicitly designed to place dots at a variety of distances from each other to measure a participant's ability to identify dots that might be further away from their immediate location. To take relative line length into account we divide the length of the theoretical "perfect" straight line between the two dots by the total distance actually drawn. The closer the ratio is to 1, the closer to "perfect" the drawn line is and the better the participant performed. The formula for Line Ratio $R_{ln}$ is found below, where $(x_n, y_n)$ is the final sampled point of the input line:
262
+
263
+ $$
264
+ R_{ln} = \frac{\sqrt{(x_n - x_0)^2 + (y_n - y_0)^2}}{\sum_{i=1}^{n}\sqrt{(x_i - x_{i-1})^2 + (y_i - y_{i-1})^2}} \tag{5}
265
+ $$
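+
+ Equation 5 translates directly to code; a short sketch:
+
+ ```python
+ import math
+
+ def line_ratio(line):
+     """Ideal straight-line distance divided by total drawn path length;
+     1.0 indicates a perfectly straight connection."""
+     straight = math.hypot(line[-1][0] - line[0][0], line[-1][1] - line[0][1])
+     drawn = sum(math.hypot(b[0] - a[0], b[1] - a[1])
+                 for a, b in zip(line, line[1:]))
+     return straight / drawn if drawn > 0 else 1.0
+ ```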
266
+
267
+ Pen Lift Time is the amount of time during each segment that the participant lifts their pen. Although participants are required to leave their pen on the tablet at all times as per the instructions of the Trail-Making Test, some participants still absent-mindedly lift the pen when searching for a dot or when correcting a mistake. This feature is intended to capture the behavior of both of these scenarios to explore a possible correlation with MCI.
268
+
269
+ Pen Pressure Average and Pen Pressure Standard Deviation are features pertaining to the pressure that a participant places on the pen as they complete the test. We wanted to explore the possibility that a participant places more pressure on the tablet if they are unsure of their trajectory or if the test is difficult for them to complete.
270
+
271
+ We complete the feature set by adding a few sets from existing sketch and gesture recognition literature. We implemented 11 features from Long et al. [31] as a supplement to the Rubine features for general-purpose sketch recognition. Alamudun et al. [3] applied the Rubine and Long features and added two direction-based features to help with saccade detection in an eye-tracking task, which we believe can also serve general-purpose sketch recognition. Finally, Paulson et al. introduced two features, normalized distance between direction extremes (NDDE) and direction change ratio (DCR), as general-purpose sketch recognition features that we also included in this study [42, 43].
272
+
273
+ § 5.4 MODEL CONSTRUCTION
274
+
275
+ Because Trail-Making Test behavior is characterized by the distinct actions of traveling to the next dot and searching for it, we decided to produce two separate classification models, to explore whether either is more indicative of MCI and to compare their performance. Additionally, because the actions yield different behaviors, not all features were applicable to both types of action. For example, line direction is important for travel lines to identify incorrect line deviation after we normalize travel lines as shown in Fig. 5. However, search lines cannot be normalized, since direction at entry and exit of a dot, even for healthy participants, depends heavily on the test layout itself. Table 3 lists every feature initially integrated into the feature set, and the model labels next to the feature names indicate which were chosen for each model.
276
+
277
+ Some features were also removed from the search and travel classification models due to high collinearity (> 0.90). Fig. 11 shows the collinearity heatmap of the remaining features that were used for both search and travel lines. Further, all feature values were normalized between 0 and 1.
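+
+ A sketch of the collinearity filter and min-max normalization, assuming the per-line features are collected in a pandas DataFrame; the greedy drop order is our assumption:
+
+ ```python
+ import pandas as pd
+
+ def drop_collinear(features: pd.DataFrame, cutoff: float = 0.90) -> pd.DataFrame:
+     """Keep a feature only if its |Pearson correlation| with every
+     already-kept feature is at or below the cutoff."""
+     corr = features.corr().abs()
+     keep = []
+     for col in corr.columns:
+         if all(corr.loc[col, k] <= cutoff for k in keep):
+             keep.append(col)
+     return features[keep]
+
+ def normalize(features: pd.DataFrame) -> pd.DataFrame:
+     """Min-max normalize each remaining feature to [0, 1]."""
+     return (features - features.min()) / (features.max() - features.min())
+ ```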
278
+
279
+ Every segmented line from the 149 tests is included and is given a label according to the participant's cognitive state (MCI or healthy). 3,490 search lines and an equal number of travel lines were used for their respective classification models.
280
+
281
+ Models were constructed according to a 90/10 split for 10-fold cross-validation. The models were trained and evaluated according to the two labels of MCI or healthy assigned during the screening phase of the study. We compared the performance of seven binary classifiers plus a majority-class baseline.
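+
+ A sketch of the evaluation loop for one of the classifiers, assuming X holds the per-line feature matrix (search or travel) and y the MCI/healthy labels; hyperparameters are scikit-learn defaults since they are not specified here:
+
+ ```python
+ from sklearn.ensemble import RandomForestClassifier
+ from sklearn.model_selection import cross_validate
+
+ def evaluate(X, y):
+     """10-fold cross-validation (90/10 splits) with the metrics of Table 6."""
+     clf = RandomForestClassifier(random_state=0)
+     scores = cross_validate(clf, X, y, cv=10,
+                             scoring=["accuracy", "f1", "precision", "recall"])
+     return {m: scores[f"test_{m}"].mean()
+             for m in ["accuracy", "f1", "precision", "recall"]}
+ ```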
282
+
283
+ § 5.5 PREDICTION OF MOCA SCORES
284
+
285
+ The participant labels of "healthy, without MCI" and "healthy, with MCI" essentially divide participants between two broad categories of MoCA scores. Section 5.1 specifies these categories as a MoCA score of 26 and above for "healthy, without MCI" and between 19 and 26 for "healthy, with MCI". In effect, our classification attempts to predict a wide range of the participants' MoCA scores. However, we also sought to predict the MoCA score more directly and at finer granularity as part of the data analysis of this study.
286
+
287
+ Our approach to MoCA score prediction is similar to the prediction of broad categories in that we use the same training and classification features and the same 90/10 split for 10-fold cross-validation. The similarity also extends to the training and classification being performed on individual lines, and to the F1-score and accuracy being calculated on how closely each line's prediction matches the actual MoCA score associated with that line's entire test. This is distinct from other methods of classification that analyze the entire page and create a single prediction. We believe that participants in the Trail-Making Test do not perform evenly through the entirety of the test; while they might perform well for a few dots, a single dot might prove difficult to find. Indeed, in our empirical observations several participants who performed poorly found certain dots easy while finding others significantly more difficult. Our wish to capture this particular type of behavior is the reason behind our use of segmented lines. We believe per-line sketch analysis and prediction might yield novel insights into participants' behavior: because the original scoring system was conceived at a time when granular sketch analysis was not possible, per-line analysis can provide a more granular and complete picture of a participant's behavior during the test. Prediction was trained and tested on both the Travel and the Search models.
288
+
289
290
+
291
+ Figure 9: Box plot of the number of features chosen for Recursive Feature Elimination vs. accuracy for MoCA prediction (search lines).
292
+
293
294
+
295
296
+
297
+ Figure 10: Box plot of the number of features chosen for Recursive Feature Elimination vs. accuracy for MoCA prediction (travel lines).
298
+
299
+ We performed recursive feature elimination (RFE) on the Travel and Search models to determine the top-ranking features to include in a logistic regression predicting the MoCA scores. The ideal number of features for both was determined to be 6, since that is where accuracy plateaus for both the Search and Travel models. The features selected for the Search and Travel models are listed in Table 4, and the box plots comparing feature count to accuracy are shown in Figures 9 and 10.
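+
+ A sketch of the RFE step using scikit-learn, assuming X, y, and feature_names as inputs; the estimator mirrors the logistic regressor described below:
+
+ ```python
+ from sklearn.feature_selection import RFE
+ from sklearn.linear_model import LogisticRegression
+
+ def select_features(X, y, feature_names, n=6):
+     """Recursively eliminate features down to the top n."""
+     rfe = RFE(LogisticRegression(max_iter=100000), n_features_to_select=n)
+     rfe.fit(X, y)
+     return [name for name, keep in zip(feature_names, rfe.support_) if keep]
+ ```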
300
+
301
+ A logistic regressor with an iteration limit of $n = 100000$ was employed for both models to predict the MoCA scores based on the features selected by the RFE. We then used a repeated K-fold cross-validator, with a 90/10 split repeated 3 times, for a total of 30 folds in the calculation of the predictors. To gauge the performance of the predictions being made, we calculated the average Mean Absolute Error (MAE) and the Root Mean Squared Error (RMSE) of the predicted vs. the actual MoCA test scores for all of the lines.
302
+
303
+ Table 3: Classification features. Model describes whether the feature was used in the classification model for travel (T) lines, search (S) lines, or both (T+S); features marked X were excluded due to high collinearity and/or because they were inappropriate for a specific model. fittsSteering is a scaled and averaged combination of the features fitts and steering.
304
+
305
+ | Name | Model | Name | Model | Name | Model | Name | Model |
+ |------|-------|------|-------|------|-------|------|-------|
+ | rubine1 | T | rubine10 | X | avgPressure | T+S | openness | T+S |
+ | rubine2 | T | rubine11 | X | stdevPressure | T+S | boundBoxArea | X |
+ | rubine3 | T+S | rubine12 | T+S | avgSpeed | T+S | logArea | T+S |
+ | rubine4 | T+S | rubine13 | T+S | stdevSpeed | T+S | rotRatio | T+S |
+ | rubine5 | S | fitts | X | aspect | X | lengthLog | T+S |
+ | rubine6 | X | steering | T+S | curviness | T+S | aspectLog | T+S |
+ | rubine7 | X | lineRatio | T | relativeRot | T+S | fittsSteering | X |
+ | rubine8 | S | hesitation | T | densityMetric1 | X | ndde | T |
+ | rubine9 | T+S | penLiftTime | T+S | densityMetric2 | T+S | dcr | X |
337
+
338
+ Table 4: Features chosen by Recursive Feature Elimination to directly predict MoCA scores.
339
+
340
+ | Search Model RFE Features | Travel Model RFE Features |
+ |---------------------------|---------------------------|
+ | aspectLog | avgPressure |
+ | avgPressure | avgSpeed |
+ | avgSpeed | rubine12 |
+ | logArea | rubine13 |
+ | rubine11 | stdDevPressure |
+ | stdDevPressure | steering |
363
+
364
+ In this prediction algorithm, all segmented lines of test and training participants are labeled with their respective MoCA scores. Although the MoCA is not typically scored on a per-line basis, our experiment is to determine whether such a prediction can be made accurately at per-line granularity. MAE and RMSE were both used to determine the mean error of the logistic regressions' predictions. The RMSE and MAE, along with their standard deviations, were calculated for both the Travel and Search prediction algorithms and are shown in Table 5.
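+
+ A sketch of this scoring loop, assuming numpy arrays X (per-line features) and y (each line's MoCA score label):
+
+ ```python
+ import numpy as np
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.metrics import mean_absolute_error, mean_squared_error
+ from sklearn.model_selection import RepeatedKFold
+
+ def score_moca_predictor(X, y):
+     """10-fold CV repeated 3 times (30 folds total), reporting
+     average MAE and RMSE of per-line MoCA predictions."""
+     maes, rmses = [], []
+     cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=0)
+     for train, test in cv.split(X):
+         model = LogisticRegression(max_iter=100000).fit(X[train], y[train])
+         pred = model.predict(X[test])
+         maes.append(mean_absolute_error(y[test], pred))
+         rmses.append(np.sqrt(mean_squared_error(y[test], pred)))
+     return np.mean(maes), np.mean(rmses)
+ ```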
365
+
366
+ § 6 RESULTS
367
+
368
+ § 6.1 ACCURACY METRICS OF MCI PREDICTION
369
+
370
+ The main results of model performance are reported in Table 6, which shows how well classification models trained and tested on travel lines and search lines independently are able to identify whether the author of those lines had MCI or was a healthy participant. A total of eight different classification models, listed in the Classifier column of the table, were trained with the features listed in Section 5.3. Results are reported for both the search line model and the travel line model, and we report each model's accuracy, F1-score, precision, and recall. For both travel and search lines, Table 6 shows that the best performing models were created using a Random Forest classifier. Additionally, pressure-related features had among the highest importances when analyzing drop-column importances for the Random Forest classifiers.
371
+
372
+ § 6.2 ACCURACY METRICS OF MOCA SCORE PREDICTION
373
+
374
+ Two sets of metrics can be reported for the MoCA score prediction: the results of Recursive Feature Elimination and how the number of features affects prediction accuracy, and the average MAE and RMSE of the predictions made on the test data. Prediction of the MoCA score, as opposed to the prediction of MCI, is non-binary and closer to continuous in nature. For this exercise we allowed fractional scores to be predicted, since our chief method of comparison is the calculation of RMSE and MAE; small discrepancies in MoCA scoring due to the inclusion of non-whole fractions would be minor if that were the chief difference between predicted and actual scores. The accuracy metrics are reported in Table 5.
375
+
376
+ Table 5: Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) of predicted MoCA scores.
377
+
378
+ | Error Metric | Travel Lines Average | Travel Lines Std. Dev | Search Lines Average | Search Lines Std. Dev |
+ |--------------|----------------------|-----------------------|----------------------|-----------------------|
+ | RMSE | 3.325 | 0.132 | 3.315 | 0.134 |
+ | MAE | 2.415 | 0.110 | 2.406 | 0.105 |
392
+
393
+ § 6.3 DISCUSSION
394
+
395
+ § 6.3.1 MILD COGNITIVE IMPAIRMENT
396
+
397
+ One of the primary challenges in detecting MCI is the inherently subtle nature of the changes involved. Research such as that of Zhang et al. [64] outlines the difficulty of formalizing behaviors that correlate significantly with the manifestation of MCI in the Trail-Making Test. Depending on the severity of cognitive decline and the multiple factors in how MCI affects each participant, a participant may not find the TMT especially challenging. For that reason, it is generally believed that the TMT, while proven sensitive to MCI in many cases, is not by itself sufficient to reliably detect MCI.
398
+
399
+ The accuracy metrics for the travel and search lines support the notion that detecting subtle levels of MCI is inherently challenging when analyzing only one test. In several of our observed cases, participants whom we classified as just under our MCI threshold based on their MoCA score completed the test in a manner similar to a typical healthy participant. In these cases a clinical neuropsychologist would continue testing the patient with several other kinds of exams, or use the Trail-Making Test primarily to identify other conditions of cognitive decline. This differs from other digital sketch recognition problems where the exhibited behaviors are not subtle by nature, or where the goal is to differentiate between discrete shapes; models for those problems typically achieve much higher accuracy and F1-scores (above 0.9) since the labels are more cleanly delineated.
400
+
401
+ Overall, we believe the results present a meaningful contribution to the analysis of MCI through the TMT, largely due to the analysis and model construction on a per-line basis. Our implementation refined the steps to segment the sketches by integrating speed thresholds that identify when the participant has found the next dot. Whereas previous work analyzing digitized TMT sketch data tends to average behaviors over an entire test, we sought to leverage the high-granularity nature of sketch data to provide analysis of individual lines. Our contribution also extends to the normalization of line direction and total length to avoid differences between lines that are due to the TMT's individual dot locations; the key is to eliminate potential confounders introduced by the fact that the TMT layout itself dictates line directions and total line lengths for all participants. We chose not to map a "perfect" line for each of the different segments to gauge performance, since Trail-Making Test layouts are numerous and clinicians frequently use modified versions for their own purposes; we sought to create a classification model that works regardless of the dot layout, rather than one that only works on a specific layout. Ultimately we sought to explore whether segmented lines could individually be labeled as MCI or healthy with at least similar performance to existing work.
402
+
403
404
+
405
+ Figure 11: Feature collinearity for both search and travel lines. Features with collinearity above 0.9 were removed from the model.
406
+
407
+ A popular method for creating behavioral models is leveraging deep learning techniques such as neural networks. These techniques are becoming more prevalent due to their ease of deployment on large datasets and higher efficacy in classification. However, we did not believe these techniques to be appropriate for this experiment, for two primary reasons. The first is the necessity of collecting a considerably larger dataset for the creation of a deep learning classifier; challenges related to the proper collection of data for this experiment are explained in the following section. The second is the lack of explainability in deep learning techniques. While it might be possible to produce a more accurate behavioral model provided we acquired a considerably larger dataset, we would be unable to explain to a clinician which behaviors of the participant are responsible for the conclusion that they are likely to have MCI. We believe that behavioral analysis in these types of domains should be usable by domain experts, thus motivating the manual creation of features to explain behavior.
408
+
409
+ We believe these results to be of interest to the HCI community, primarily due to the inherent nature of linking a cognitive examination with the analysis afforded by a high-granularity data collection protocol. In particular, the creation of an index of performance of sorts for the Steering Law (see Fig. 8) proved useful in both the search and travel line prediction models. For this particular project this calculation was different enough from Fitts' existing Index of Performance to warrant its inclusion as its own feature, and it is potentially something that could be applied to UX research. Indeed, we hope the results and explanations of the TMT allow HCI researchers to see the TMT as a decades-old UI navigation task, and that the same principles and techniques that led to the creation of Fitts' Law and the Steering Law can be applied to a digital TMT.
410
+
411
+ § 6.3.2 MONTREAL COGNITIVE ASSESSMENT SCORE PREDICTION
412
+
413
+ As previously mentioned, the MAE and RMSE outlined in Table 5 summarize our predictions for MoCA scores, with both search and travel lines yielding a Mean Absolute Error of around 2.4 on average and a Root Mean Squared Error of around 3.3. Essentially, regardless of whether travel line or search line data and features were used to predict the MoCA score of an individual user, the resulting error remained consistent. Although MoCA scores range from 0 to 30, our study ethics protocol prevented us from conducting research on participants with scores below 19 as previously mentioned, reducing the range of scores available for training and testing to between 19 and 30. The reported error rates are implicitly larger relative to this reduced range of scores, but we believe the reported MAE and RMSE values are still small enough to be of interest. Overall, the scores suggest that the feature set presented in this paper can be used to predict MoCA scores based on a participant's digitized TMT sketch data.
414
+
415
+ The challenges of MoCA score prediction were similar to those of predicting MCI, but were exacerbated by the labeling of a single score point onto every line. Per-dot line segmentation likely resulted in an unbalanced training set, since a small subset of participants who performed fairly well on the MoCA could skew the training and test sets considerably. This unevenness in the MoCA distribution suggests that a much larger and wider range of MoCA scores is needed for accurate score prediction. As it stands, the Recursive Feature Elimination for both models, as shown in Figures 9 and 10, suggests that even the optimal number of chosen features yields an accuracy of only just above 0.35 for the Search model and up to 0.30 for the Travel model. For the current version of the calculated features and those chosen by the RFE, we believe that additional features and changes to the existing ones would be necessary to increase prediction accuracy.
416
+
417
+ At present the results for predicting MoCA scores are inconclusive. The errors reported in Table 5 might suggest an average error of about 10%, given that MoCA scores range from 0 to 30 points. The demographic data shown in Table 1 and discussed in Section 5.1 show an average MoCA score of 24.54 across all participants, as well as the overall criterion for inclusion of participants with scores of 19 and above. Due to limitations of protocol safety, we are at present unable to recruit and test participants with more severe cognitive impairment who score below 19. This is largely because safety protocols require such participants to be accompanied by a guardian or healthcare official, since institutional review boards consider severely cognitively impaired individuals unable to provide informed consent of their own volition. Following the safety protocols fortunately does not significantly impair the prediction of MCI vs. non-MCI populations, since MCI participants are still able to provide informed consent, but it does reduce the efficacy of predicting MoCA performance as a continuous score. In order to create a more accurate MoCA predictor, we will require a larger corpus of data with a more even distribution of MoCA scores, matching established normative data. At present the restricted inclusion criteria did somewhat limit the performance of a MoCA score predictor.
420
+
421
+ Table 6: Classification metrics. Acc is accuracy, F1 is F1-score, Prec is precision. For both the travel lines and search lines models, n = 3,490.
+
+ | Classifier | Travel Acc | Travel F1 | Travel Prec | Travel Recall | Search Acc | Search F1 | Search Prec | Search Recall |
+ |------------|------------|-----------|-------------|---------------|------------|-----------|-------------|---------------|
+ | Majority | 0.51 | 0.51 | 0.50 | 0.50 | 0.53 | 0.53 | 0.52 | 0.52 |
+ | Gaussian Naive-Bayes | 0.47 | 0.36 | 0.60 | 0.53 | 0.47 | 0.38 | 0.58 | 0.53 |
+ | Decision Tree | 0.59 | 0.59 | 0.58 | 0.58 | 0.60 | 0.60 | 0.59 | 0.59 |
+ | K-Nearest Neighbor | 0.60 | 0.60 | 0.59 | 0.59 | 0.58 | 0.58 | 0.57 | 0.57 |
+ | Linear Regression | 0.65 | 0.64 | 0.65 | 0.63 | 0.62 | 0.59 | 0.61 | 0.58 |
+ | SVM | 0.65 | 0.63 | 0.66 | 0.62 | 0.63 | 0.61 | 0.64 | 0.60 |
+ | LDA | 0.65 | 0.63 | 0.65 | 0.62 | 0.62 | 0.60 | 0.61 | 0.59 |
+ | Random Forest* | 0.67 | 0.73 | 0.67 | 0.80 | 0.66 | 0.72 | 0.68 | 0.77 |
453
+
454
+ § 7 LIMITATIONS AND FUTURE WORK
455
+
456
+ One of the main challenges in building an accurate predictive behavioral model is the creation of a new dataset for that specific purpose. Although the Trail-Making Test has been in use for several decades, the granularity of digital data and the requirement of a digital pen necessitated the creation of a new dataset. The prevalence of different Trails Test layouts, and the small differences in protocol that vary from clinician to clinician, also necessitated a unified testing protocol. Accompanying this challenge is the laborious recruitment process: although the task is simple, administering the MoCA and properly administering the Trail-Making Test resulted in a slower rate of data collection than is typical of sketch recognition tasks.
457
+
458
+ Currently, the age ranges of the Trail-Making Test's normative data, as found in Tombaugh's stratified normative data for paper-and-pencil Trail-Making Tests [61], divide the age range into 11 distinct categories. Our normative data covers the latter 7 bins, with our participants ranging from 57 to 86 years of age. Since the focus of this experiment is identifying MCI among middle-aged and older individuals, the study focused on that age range. Future studies will continue the data collection process to build a more complete normative body of data across all age ranges. These might reveal differing behaviors between patients with MCI from different age ranges, but a solid body of data from those age ranges is necessary for verification. We also aim to further expand on localizing areas that were difficult for participants with MCI, reporting these lines at the UI level in real time and evaluating a clinician's diagnostic experience with such an automated tool.
459
+
460
+ Although the system has two primary end users, the scope of this paper focused on the participant. We aim to investigate the user experience of proctors as they deploy the system and use the predictions in their diagnoses. Specifically, we aim to gather feedback on the experience of reporting the system's findings, since proctors have access to a wide variety of sketch visualizations, as mentioned in Section 3. Reporting predictions of MCI and non-MCI participants, in addition to highlighting hesitation and line deviation and visually color-coding search and travel lines, offers proctors a large range of information, and future work will investigate its usefulness and overall user experience. Additionally, we would like to use other peripherals, such as heart rate sensors and integrated eye-tracking solutions, to create an even more feature-rich dataset that enhances participant behavior analysis.
461
+
462
+ Also of note is that a protocol for collecting data on an MCI population inherently removes a full range of ages and conditions from the normative data. This limits a predictive system's ability to make an ML-based prediction of the actual MoCA score. The prediction of MoCA scores yielded relatively small error percentages, but given the reduced range of MoCA scores available for training and testing, we consider the results for direct prediction of exact MoCA scores inconclusive, if somewhat promising. We are considerably more confident about the binary classification between non-MCI and MCI populations, precisely because the range of data and the collection protocol yielded the most appropriate data for that kind of classification. Future work will require a wider range of participants, with normative data closer to that of Tombaugh et al. [61]. We are confident that digital sketch data from digital TMTs could be used to make much more accurate predictions of participants' MoCA scores if such data were available.
463
+
464
+ Subtle changes in behavior due to Mild Cognitive Impairment continue to present significant challenges in identifying the earliest possible signs of conditions that may lead to dementia and Alzheimer's disease. Existing efforts highlight the difficulty of finding the nuanced behavioral changes present in a Trail-Making Test. With significant improvements over previous efforts, we present a solution demonstrating that individual lines, regardless of their direction, can distinguish between MCI and healthy participants with noticeably higher accuracy. We look forward to employing additional preprocessing methods, features, and a larger digital sketch dataset to further improve on this effort. We believe sketch data from the Trail-Making Test still has the potential to yield insights into behavioral changes that are yet to be discovered.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/E-PcUeaDbzv/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,515 @@
 
1
+ # Automatic Asymmetric Weight Distribution Detection and Correction Utilizing Electrical Muscle Stimulation
2
+
3
+ Kattoju Ravi Kiran* Eugene Taranta† Ryan Ghamandi‡ Joseph J. LaViola Jr.§
4
+
5
+ Interactive Systems and User Experience Lab
6
+
7
+ University of Central Florida, USA
8
+
9
+ ![01963e70-cfea-78a9-980d-701049e5a4eb_0_395_485_1006_421_0.jpg](images/01963e70-cfea-78a9-980d-701049e5a4eb_0_395_485_1006_421_0.jpg)
10
+
11
+ Figure 1: Impaired balance can have long-term health ramifications. Presented here are images of asymmetric weight distribution (AWD) due to prolonged standing and restored balance conditions using electrical muscle stimulation (EMS): (A) AWD right, (C) AWD left, (B) & (D) EMS feedback based stabilization and restoration of balanced posture. The red arrows indicate direction of progressive AWD and green arrows indicate a counter-weight shift balance stabilization due to EMS feedback correction to the tibialis muscle.
12
+
13
+ ## Abstract
14
+
15
+ Postural control is a constant re-establishment process for the maintenance of balance and stability. Asymmetric weight distribution (AWD), characterized by uneven leg loading, leads to increased instability, injury, and progressive deterioration of posture and gait. Postural self-correction is performed automatically by the human body in response to visual, vestibular, and proprioceptive sensory information. However, simultaneous cognitive loads can increase the demand for extra resources and require balance monitoring and correction techniques. We address these issues with a novel physiological feedback system that utilizes load sensors for AWD detection and electrical muscle stimulation (EMS) for automatic correction and restoration of balance by effecting a counter-weight shift. In a user study involving 36 participants, we compare our automatic approach against two alternative feedback systems (audio and vibro-tactile). We find that our automatic approach delivered faster corrections, outperformed the alternative feedback mechanisms, and was perceived as interesting, comfortable, and a potential commercial product.
16
+
17
+ Index Terms: Human-centered computing; Human-computer interaction (HCI); Wearable computing; Preventive healthcare; Posture correction; Asymmetric weight distribution; Electrical muscle stimulation
18
+
19
+ *e-mail:Kattoju.ravikiran@knights.ucf.edu
20
+
21
+ †e-mail: etaranta@gmail.com
22
+
23
+ ‡e-mail: ryanghamandi1@gmail.com
24
+
25
+ §e-mail:jjl@cs.ucf.edu
26
+
27
+ ## 1 INTRODUCTION
28
+
29
+ The maintenance of stable posture is important, as two-thirds of our body mass and our delicate organs are supported by our legs, which form a narrow base of support. Asymmetric weight distribution (AWD), characterized by postural sway and impaired standing balance, is known to be responsible for multiple health conditions resulting in reduced functional ability [109]. Numerous posture-related health issues such as lower back pain [81], anterior cruciate ligament ruptures [46, 78, 86], and knee and ankle injuries [32, 64] are associated with an increase in postural sway and AWD. Postural control is a constant re-establishment process of balance and is integral to the safe execution of most movements in our daily life. Posture adjustment relies primarily on the integration of different sensory feedback from the visual, vestibular, and proprioceptive control systems. Subconscious proprioception, in the form of awareness from muscle receptors and joints, also plays an important role in the control of posture and balance. However, the effectiveness of our body's postural control system decreases with cognitive demand, age, and injury, especially while one is engaged in additional cognitive tasks during standing activities. Although conscious proprioception plays a crucial role in gross muscular and full-body posture adjustments, poor postural habits and impaired proprioception may lead to increased postural sway, AWD, and even loss of balance [3]. AWD may lead to increasing instability, subsequent injury, and progressive deterioration of posture and gait [107]. Investigation of AWD has provided valuable information in an array of situations such as fall detection and prediction in the elderly [48], evaluation of balance-related disabilities (Parkinson's disease, stroke, and concussions), and lower body post-surgery rehabilitation [1, 2, 30, 79, 99].
30
+
31
+ Nearly \$90 billion is spent annually in the USA on treating repetitive strain injuries (RSI) and lower body injuries arising from poor workplace postures and prolonged standing [20, 22]. Prolonged standing causes dysfunction or dystrophy of the muscles of the leg and often leads to unequal load distribution on the hips, knees, ankles, and feet, which are responsible for stabilizing the torso in an upright position; this is directly associated with lower back pain [94]. Lower body injuries are one of the noted root causes of disability in the world and affect approximately 80% of the world population at some point in their lives [54, 98]. As existing intervention technology attempts only postural sway detection and necessitates the participants' attention and effort to self-correct imbalance, there is a need for an automatic wearable intervention technology capable of AWD detection and subsequent correction to facilitate proper posture maintenance during tasks involving prolonged standing, such as work, recreational, and gaming activities.
32
+
33
+ As EMS has been shown to induce involuntary muscular contractions for generating physiological responses [21, 91, 102], we integrated EMS with an AWD detection system to automatically detect and correct habitual AWD posture and restore balanced posture through involuntary contractions of the muscles in the legs. Our work aims to explore and provide insights into the differences between our approach of automatic posture correction and self-correction driven by traditional feedback techniques. We evaluated the performance of our automatic approach across two different applications with varying levels of engagement and posture awareness in a novel between-subjects study. The performance of our automatic approach was measured by the correction response times to the EMS feedback. Qualitative data in the form of user perception rankings for different usability parameters were recorded and analyzed. In comparison to previous research, the main contributions of this work include:
34
+
35
+ 1. The development of a novel intervention prototype that autonomously detects and corrects AWD posture through a physiological feedback loop utilizing EMS.
36
+
37
+ 2. A user study for quantitative and qualitative evaluation of the performance and usability of our automatic AWD detection and correction utilizing EMS feedback against two traditional feedback techniques (audio and vibro-tactile), under two different conditions of posture awareness and engagement, toward breaking the habit of AWD and training and developing good postural habits.
38
+
39
+ ## 2 RELATED WORK
40
+
41
+ Owing to the increasing awareness of workplace injuries, health, and wellness, there has been a renewed interest in the relationship between postural control and cognitive load in recent times [3]. Self-correction of posture is performed automatically by the human body to a certain extent in response to sensory information such as visual, vestibular, and proprioceptive information. However, any additional loads due to simultaneous cognitive tasks demand extra resources and necessitate balance monitoring and correction techniques [4, 55, 62, 80, 105]. Previous research on AWD monitoring and detection can be classified into two main categories: balance and stability monitoring, and asymmetric weight distribution detection with real-time feedback solutions.
42
+
43
+ ### 2.1 Balance and Stability Monitoring
44
+
45
+ Balance and stability monitoring has primarily been an area of research for detecting neurological disorders, gait imbalance, lower-body injury, and post-surgery rehabilitation. Traditionally, the measurement of impaired balance and AWD employed highly specialized equipment such as force plates [6, 43], electrogoniometers [87], video motion analysis [23], electromyography [82], and magnetic tracking systems [101]. Balance and stability monitoring techniques using force plates often measured the center of pressure/gravity and balance ratios [35], while inertial measurement units (IMUs) [7, 11, 36, 90] and video analysis techniques [45, 47, 50, 113] relied on computed angular changes. However, expensive equipment developed for medical rehabilitation and clinical research was found to be cumbersome due to the attachment of markers and sensors to the skin/clothing. This resulted in difficulties in conducting easy, non-invasive data collection concerning AWD. As a cost-effective alternative, standing balance has also been evaluated using a Wii Balance Board (WBB) in different clinical settings [16, 17, 34, 111]. The WBB was utilized in clinical trials with brain injury patients to determine the effectiveness of balance rehabilitation [34] and in predicting fall risks in older adults [111]. Additionally, other researchers investigated postural sway and standing balance in quiet standing conditions among young adults [5, 66, 89], the elderly [8, 25, 26], athletes [61, 65], and brain injury and Parkinson's disease (PD) patients [31, 97, 106].
46
+
47
+ Research on postural sway has investigated steadiness in different stances [35] and different postural control tasks [17], determined the influence of standing duration on sway [63], and exposed impairments leading to disequilibrium while evaluating compensatory strategies in quiet standing positions in patients [9]. Postural sway was also investigated to determine the effect of dual tasks on standing balance [92]. Researchers also developed postural control strategies for clinical rehabilitation of patients suffering from Parkinson's disease (PD) and for diagnosing sports-based impairments by investigating the effects of altered postural control and balance on the ankle and hip in PD patients during quiet standing [7]. Further, researchers investigated the effects of anticipatory postural adjustments in patients with PD [11], the detection of balance irregularities in athletes at risk of AWD [90], the incidence of head impacts due to imbalance [36], and sway assessment for detecting balance impairments in athletic populations. Finally, postural sway and balance impairment studies have been conducted for postural control in concussion patients [24], neurological disorders [112], and injury prevention [95]. However, the above-mentioned studies focus only on the assessment and monitoring of balance and postural sway for diagnosing balance impairments and developing rehabilitation protocols for balance training. These techniques do not provide any posture correction feedback to the participants. To address this, our research focuses on both detecting AWD conditions and subsequently providing real-time automatic correction feedback to restore balance using EMS.
48
+
49
+ ### 2.2 Asymmetric Weight Distribution Detection with Feedback
50
+
51
+ Maintaining balance and stability is a complex activity that is accomplished by a synergy between the brain and different sensory information from the vestibular, somatosensory, and visual systems. Postural instability or abnormal postural sway coincides with asymmetric weight distribution or weight-bearing asymmetry when feedback from sensory systems is inaccurate. However, this loss or absence of sensory information can be compensated for by providing additional external sensory feedback to the brain for effecting posture correction and maintaining balance [44, 83]. Due to advancements in sensor technology and smarter algorithms, the past decade has seen an increased interest in the design and development of biofeedback-based postural control devices for maintaining balance.
52
+
53
+ Audio feedback systems were developed for improving balance in patients suffering from bilateral vestibular loss [27], for comparing the effect of visual senses and environmental conditions on postural control [28], and for improving balance in comparison to absent or unreliable sensory feedback [14]. Alternatively, visual feedback was utilized to evaluate the effectiveness of human balance improvement in quiet standing tasks [39], to explore the effect of interactive balance training on postural stability in daily physical activities [37], and to develop balance rehabilitation strategies based on ankle movement to compensate for impaired joint proprioception in patients [38]. Augmented sensory feedback through visual and auditory channels was further explored to investigate the relative effectiveness of the two types of feedback on improving postural control [41], and it was concluded that audio feedback was more effective for motor learning and maintaining balance. Virtual reality integrated with visual feedback has also been employed in developing balance training rehabilitation protocols and biofeedback for minimizing fall risks [108], investigating the influence of moving visual immersive environments on postural control [56], and improving standing balance in patients suffering from hemiplegia [10] and PD [33]. All the above-mentioned balance and stability detection techniques focused on alerting the user through traditional audio, visual, or vibro-tactile feedback and relied entirely on the participants' ability to process the feedback and their willingness to self-correct their AWD. Although these AWD detection techniques enabled minimization of postural sway and restoration of balance using different types of feedback, they still required the user's willingness to self-correct posture when AWD or postural sway was detected. Additionally, no posture correction feedback response times or user perception parameters have been reported. The traditional feedback types are also known to place a cognitive load on the user by relying solely on the user's intent and desire to self-correct their posture based on the received feedback, especially when engaged in a cognitively demanding task [44, 83].
54
+
55
+ Additionally, clinical research on gait rehabilitation for stroke patients has been conducted through the application of electrical stimulation to the gluteus medius and tibialis anterior [58], and to the hip abductor and ankle dorsiflexor [15]. However, these systems are not automatic and utilize a manual trigger mechanism for providing the correction feedback that improves spatio-temporal parameters during dynamic activities like walking by controlling pronation and foot placement. Further, the tactile component of EMS was utilized on the thigh to provide notifications for improving walking gait in post-stroke survivors [57]. That system utilized force-sensitive resistors capable of detecting heel and foot strikes to detect improper gait during walking activities and utilized EMS to provide only a vibro-tactile sensory stimulation, without invoking any involuntary muscular activity that could alter the patient's gait. The technique also relied on the user making a conscious effort to correct their pronation and foot strikes to improve gait. Although the above techniques utilized EMS for gait rehabilitation in clinical settings to address pronation, foot placement, and foot striking in dynamic activities such as walking in stroke patients, AWD detection and subsequent automatic balance stabilization during prolonged standing in everyday activities is not fully explored. This presents a gap in the research for the design and development of autonomous AWD detection and correction systems for preventing fall risks and gait imbalance, and for proper rehabilitation after injury and surgery. Our automatic AWD detection and correction prototype addresses this research gap by employing EMS to automatically generate a physiological counter-weight shift response through involuntary contractions of the tibialis muscle, restoring balanced posture when AWD conditions are detected during two prolonged standing conditions (quiet standing and mobile gaming) under different levels of posture awareness, thereby reducing the additional cognitive load required for self-correcting posture.
56
+
57
+ ### 2.3 Electrical Muscle Stimulation (EMS)
58
+
59
+ Primarily, EMS has been utilized in pain management therapy to deliver electrical impulses to the muscles, nerves, and joints in a noninvasive manner via surface electrodes placed on the skin. Besides being used for alleviating chronic conditions of muscle strains and spasms, EMS has also been employed in post-surgery rehabilitation to regain normal function [102] and in post-injury recovery for rebuilding muscle strength [12, 21]. EMS has also been applied in clinical research to generate involuntary muscle contractions for restoring normal function to muscles impaired by injury, surgery, or disuse, and to restore normal functional actions such as hand grasping in hemiplegic patients [21], generating reflex actions for disorders involving swallowing [96], and enabling control of neuro-prosthetic implants [91].
60
+
61
+ ### 2.4 EMS in Human Computer Interaction
62
+
63
+ The capability of EMS to deliver haptic and somatosensory feedback has led to a newfound interest in the human-computer interaction (HCI) domain for the development of immersive training and gaming in virtual, augmented, and mixed reality applications [59, 67-70, 100]. Due to its adaptability, EMS has enabled the development of new interactive approaches for dynamic activity training, delivering more immersive experiences through somatosensory feedback, and developing spatial interfaces for user interaction. Dynamic activity training using EMS has been explored to enable users to acquire and develop new motor skills, such as learning to play a musical instrument [104], learning the offered affordances of different objects [74], and developing fast reflexes for preemptive actions [51, 52, 84]. Additionally, with its ability to generate physiological responses through invoked involuntary muscular contractions, EMS has been utilized to develop force feedback applications to emulate impact [29, 71], increase dexterity by flexing individual fingers [103], and apply physical forces to gaming devices [77], objects [76], and walls and barriers in virtual environments [72, 75]. EMS has also permitted researchers to increase immersion in virtual reality applications by sharing kinesthetic experiences from tremors in patients with Parkinson's disease [85], arousing fear and pain in In-pulse [60], and transmitting emotions between individuals in Emotion Actuator [42]. Further, integration of EMS with input/output devices has enabled the development of physiological feedback loops in Pose-IO for proprioceptive interaction [73], induced navigation [88], biometric user authentication [13], influenced sketching [77], a running assistant [19], discrete notification systems [40], and involuntary motor learning [18].
64
+
65
+ The current literature suggests that traditional feedback-based posture alert systems rely entirely on the user's intent and willingness to correct improper posture and that EMS feedback-based posture correction has not been fully explored. Although the above-mentioned interactive and adaptive features of EMS-based technologies have validated its ability to deliver latent, distinct, and more distinguished feedback for immersive experiences, dynamic activity training, and input/output interfaces, our work investigates the feasibility of automatic posture correction for restoring balance and stabilization through a counter-weight shift strategy utilizing EMS.
66
+
67
+ ## 3 AUTOMATIC DETECTION AND CORRECTION OF AWD
68
+
69
+ For automatic detection and correction of AWD, we developed an intervention prototype based on a physiological feedback loop that relied on load sensors and EMS (illustrated in Figure 2). Our prototype employed a wireless Wii Balance Board (WBB) for measuring changes in weight distribution across the two legs using the balance ratio of the weights displaced by the two legs separately, and the openEMSstim package [69] for presenting the EMS correction feedback. A C#-based user interface using a Wii-mote library was developed to integrate the WBB with the EMS hardware to complete the physiological feedback loop. As AWD is mainly characterized by progressive and/or unusual leaning to either side [49], our system was designed to detect these changes in weight distribution across the two legs using the shift in balance ratio representing the AWD conditions.
70
+
71
+ ![01963e70-cfea-78a9-980d-701049e5a4eb_3_222_150_581_528_0.jpg](images/01963e70-cfea-78a9-980d-701049e5a4eb_3_222_150_581_528_0.jpg)
72
+
73
+ Figure 2: Physiological feedback loop: Automatic asymmetric weight distribution detection and correction system. Asymmetric weight distribution posture (top) illustrates leaning to either side and the auto-corrected posture (bottom) illustrates the restored balanced posture through counter weight shift using EMS.
74
+
75
+ ### 3.1 Time and Balance Thresholds
76
+
77
+ Asymmetrical leg loading can be detected from the shift in balance ratio calculated from the weight displacement information obtained from the load sensors in the WBB. Our proposed system detected AWD when the user's balance ratio approached and crossed preset balance ratio and time thresholds. To improve robustness and tune the system for optimal performance, we collected ecologically valid balance ratio data from 10 participants performing 10 typical actions one performs consciously or unconsciously when standing idly (illustrated in Figure 3). These 10 unique actions were identified based on general movement observations of employees taking breaks from standing. The actions were interleaved with moderate and extreme leaning actions to ensure AWD conditions were embedded in each session. The balance ratio patterns of the 10 actions are shown in Figure 4. A grid search was then employed to find the balance ratio and time thresholds that optimized the accuracy of AWD detection. Since our primary concern was the impact of false positives on user perception and the prevention of unwarranted correction feedback, we selected thresholds that minimized false positives first, maximized true positives second, and maximized the per-frame Jaccard index of similarity [93] with the manually marked per-frame ground truth third. With valid data collected from 10 participants, using a leave-one-subject-out protocol, we found that at a time threshold of 2.9 seconds and a balance ratio threshold of 1.25, our system achieved a high true-positive AWD detection rate of 96%, a false-positive AWD detection rate of 0.1%, and a false-negative rate of 0.3%. The balance ratio of 1.25 translates to a left-to-right or right-to-left AWD balance ratio of 55.5:44.5.
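A minimal sketch of this tuning procedure follows, assuming a 30 Hz frame rate, synthetic balance-ratio traces, and illustrative grid resolutions; the leave-one-subject-out folds are omitted for brevity, and none of the names below come from the actual implementation.

```python
# Sketch of the threshold grid search: minimize false positives first,
# then maximize true positives, then maximize per-frame Jaccard overlap
# with manually marked ground truth. Traces and grids are placeholders.
import itertools
import numpy as np

def awd_frames(ratio, fps, ratio_thresh, time_thresh):
    """Flag frames where the left/right balance ratio stays past the
    threshold (in either direction) for at least time_thresh seconds."""
    above = (ratio >= ratio_thresh) | (ratio <= 1.0 / ratio_thresh)
    min_run, run = int(time_thresh * fps), 0
    flags = np.zeros(len(ratio), dtype=bool)
    for i, a in enumerate(above):
        run = run + 1 if a else 0
        flags[i] = run >= min_run
    return flags

def jaccard(pred, truth):
    """Per-frame Jaccard index of similarity between two flag arrays."""
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

rng = np.random.default_rng(0)
sessions = [(1.0 + 0.4 * rng.random(1800), rng.random(1800) > 0.9)
            for _ in range(10)]        # placeholder: 60 s traces at 30 fps

best = None
for rt, tt in itertools.product(np.arange(1.05, 1.55, 0.05),
                                np.arange(0.5, 5.0, 0.1)):
    fp = tp = 0
    ious = []
    for ratio, truth in sessions:      # leave-one-subject-out omitted
        pred = awd_frames(ratio, fps=30, ratio_thresh=rt, time_thresh=tt)
        tp += np.logical_and(pred, truth).sum()
        fp += np.logical_and(pred, ~truth).sum()
        ious.append(jaccard(pred, truth))
    key = (-fp, tp, np.mean(ious))     # lexicographic selection criteria
    if best is None or key > best[0]:
        best = (key, (rt, tt))
print("selected (ratio, time) thresholds:", best[1])
```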
78
+
79
+ The preset time and balance ratio thresholds obtained through our tuning process allowed the AWD detection system to overcome measurement errors, mitigate false positives, and ensured that typical movements such as actions illustrated in Figure 3 did not lead to false-positive AWD detection or activate unwarranted correction feedback. When the user's balance ratio approached and crossed the preset balance ratio threshold of 1.25, a countdown timer set to the preset time threshold value of 2.9 seconds was initiated to provide correction feedback after the time threshold had elapsed. The purpose of the timer is to ensure that false positives due to participant behavior do not trigger a correction feedback response.
80
+
81
+ ![01963e70-cfea-78a9-980d-701049e5a4eb_3_934_149_704_534_0.jpg](images/01963e70-cfea-78a9-980d-701049e5a4eb_3_934_149_704_534_0.jpg)
82
+
83
+ Figure 3: Some examples of typical actions performed during standing activities based on movement observations of employees taking breaks after standing. (A) Lean slight left, (B) Lean slight right, (C) Balanced, (D) Calf raise and reset, (E) Lift left leg and reset, (F) Scratch leg and reset, (G) Sway and reset, (H) Lean extreme right, (I) Lift right leg and reset, (J) Lean extreme left.
84
+
85
+ ### 3.2 Correction Feedback
86
+
87
+ The Wii Balance Board contains load sensors at each corner (top left, bottom left, top right, and bottom right), allowing measurement of the weight distributed across each leg and calculation of the weight balance ratio for AWD detection. When AWD is detected, automatic correction feedback is presented to the user by applying electrical stimulus to the tibialis muscle to generate a counter-weight shift force in the leg opposite to the direction of the AWD lean, thereby generating a physiological response that stabilizes the user back to a 50:50 balanced weight distribution. A pair of electrodes on each leg (illustrated in Figure 5) is utilized for contraction of the tibialis muscle, which causes the foot to roll outward, thus generating the physiological counter-weight shift. This counter-weight shift redistributes the weight more evenly across the two legs, stabilizing the user back to the balanced 50:50 weight distribution. Calibration of the WBB and of the EMS intensity plays a crucial role in the effectiveness of the system. The calibration process includes correcting offset values of the load sensors in the WBB prior to the start of the study session. The user's balance ratio in the balanced position, and in emulated AWD leaning positions relative to the balanced position, are monitored to ensure the WBB is calibrated. For the EMS calibration, the EMS intensity is manually incremented to find an intensity that is optimal for generating an involuntary muscular contraction, is comfortable, and avoids any discomfort or pain to the user. This EMS intensity, which provides the force necessary for correcting AWD posture and restoring the balanced position, is recorded and utilized during the experiment. The transcutaneous electrical nerve stimulation (TENS) device can deliver intensities between 0 and 70 mA. A continuous square wave with a pulse width of 100 µs at a frequency of 75 Hz, at the recorded EMS intensity, is presented as EMS feedback to the participants. The EMS calibration procedure is described in detail in Section 4.5.
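To make the correction mapping concrete, the sketch below derives the left/right balance ratio from the four corner load cells and picks the leg to stimulate. The function names are hypothetical stand-ins; the actual stimulation I/O is handled by the openEMSstim hardware and is not reproduced here.

```python
# Sketch: balance ratio from the four WBB corner load cells and the
# stimulation side chosen to counter the detected lean. Names here are
# illustrative stand-ins, not the prototype's actual interface.
RATIO_THRESH = 1.25  # tuned balance-ratio threshold from Section 3.1

def balance_ratio(top_left, bottom_left, top_right, bottom_right):
    """Left/right weight ratio from the four corner load-cell readings."""
    left = top_left + bottom_left
    right = top_right + bottom_right
    return left / right if right > 0 else float("inf")

def stimulation_side(ratio, thresh=RATIO_THRESH):
    """Choose the tibialis opposite to the lean to shift weight back."""
    if ratio >= thresh:          # leaning left -> contract right tibialis
        return "right"
    if ratio <= 1.0 / thresh:    # leaning right -> contract left tibialis
        return "left"
    return None                  # balanced: no stimulation needed

# Reported waveform: continuous square wave, 100 us pulses at 75 Hz,
# delivered at the per-user calibrated intensity within 0-70 mA.
print(stimulation_side(balance_ratio(22.0, 21.0, 14.0, 15.0)))  # "right"
```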
88
+
89
+ ### 3.3 Operation
90
+
91
+ Our physiological feedback loop for detecting and correcting AWD relied on the changes in balance ratio along with the total weight distributed on each leg. This allowed our system to detect AWD left/right conditions when the balance and time thresholds had been crossed. AWD occurs when a user unevenly distributes body weight across the two legs, placing additional stress on the ankle, knee, hip, and lower back. To detect these AWD conditions, our system utilized the balance and time thresholds determined in Section 3.1. Figure 6 illustrates the activation and deactivation of EMS correction feedback when an AWD left condition was detected and corrected for a participant during the study. Initially, under a balanced posture condition, the EMS left leg and EMS right leg channels remain deactivated. A timer with the preset time threshold of 2.9 seconds was activated when the user's balance ratio gradually increased and crossed the preset threshold of 1.25. Upon completion of the timer, since the balance ratio still remained above the threshold, the EMS was activated to apply a stimulus of 50 mA to invoke a muscular contraction of the right tibialis muscle (EMS Right Leg), generating a counter-weight shift and restoring balanced posture. The EMS was deactivated immediately after the balanced posture was restored. A correction response time of 1.2 seconds was recorded between activation and deactivation of the EMS Right Leg. The AWD right condition is similarly detected and corrected by activating and deactivating the EMS Left Leg.
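The runtime logic can be summarized in the following sketch: a countdown debounces threshold crossings, EMS activates only once the lean persists past the time threshold, and it releases when the ratio returns to a balanced band. The 30 Hz polling rate, the `PrintEMS` stub, and the balanced band width are assumptions for illustration, not the prototype's implementation.

```python
# Sketch of the Section 3.3 feedback loop: debounce with a 2.9 s timer,
# activate EMS on the leg opposite the lean, release at ~50:50 balance.
RATIO_THRESH = 1.25   # balance-ratio threshold tuned in Section 3.1
TIME_THRESH = 2.9     # seconds of sustained AWD before correction fires
BALANCED_BAND = 0.05  # |ratio - 1| within this band counts as restored

class PrintEMS:
    """Minimal stand-in for the EMS controller, for demonstration only."""
    def activate(self, side):
        print(f"EMS on:  {side} tibialis")
    def deactivate(self, side):
        print(f"EMS off: {side} tibialis")

def feedback_step(ratio, state, ems, now):
    """Advance the detection/correction state machine by one sample."""
    leaning_left = ratio >= RATIO_THRESH
    leaning = leaning_left or ratio <= 1.0 / RATIO_THRESH
    if state["active"] is None:
        if leaning:
            if state["since"] is None:
                state["since"] = now              # start the countdown
            if now - state["since"] >= TIME_THRESH:
                side = "right" if leaning_left else "left"
                ems.activate(side)                # counter-weight shift
                state["active"] = side
        else:
            state["since"] = None                 # lean ended early
    elif abs(ratio - 1.0) <= BALANCED_BAND:
        ems.deactivate(state["active"])           # ~50:50 restored
        state["active"], state["since"] = None, None
    return state

# Demo: a lean that outlasts the 2.9 s timer, then a return to balance.
state, ems = {"active": None, "since": None}, PrintEMS()
for i, ratio in enumerate([1.3] * 120 + [1.0] * 30):  # 30 Hz samples
    state = feedback_step(ratio, state, ems, now=i / 30.0)
```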
92
+
93
+ ![01963e70-cfea-78a9-980d-701049e5a4eb_4_221_146_1354_528_0.jpg](images/01963e70-cfea-78a9-980d-701049e5a4eb_4_221_146_1354_528_0.jpg)
94
+
95
+ Figure 4: Balance ratio patterns of the 10 actions performed by users (illustrated in Figure 3) during the tuning process to determine the balance and time thresholds for AWD detection. The lean actions representative of AWD exhibited higher balance ratios, sustained for prolonged durations, in comparison to the other actions.
96
+
97
+ ![01963e70-cfea-78a9-980d-701049e5a4eb_4_398_819_229_300_0.jpg](images/01963e70-cfea-78a9-980d-701049e5a4eb_4_398_819_229_300_0.jpg)
98
+
99
+ Figure 5: EMS electrode placement on the tibialis muscle for effecting a counter-weight shift.
100
+
101
+ ## 4 METHODS
102
+
103
+ The goal of this study was to evaluate the overall effectiveness and user perception of our automatic AWD detection and correction feedback system using EMS compared to traditional audio and vibro-tactile feedback modalities. The audio and vibro-tactile feedback modalities required self-correction by the user based on audio and vibro-tactile notifications delivered to them, respectively. We also identified two common use cases of everyday activities with varying levels of engagement and posture awareness, quiet standing (QS) and playing a mobile game (MG) (illustrated in Figure 7), to
104
+
105
+ ![01963e70-cfea-78a9-980d-701049e5a4eb_4_924_816_724_314_0.jpg](images/01963e70-cfea-78a9-980d-701049e5a4eb_4_924_816_724_314_0.jpg)
106
+
107
+ Figure 6: Automatic detection and correction of AWD: Graph showing EMS activation and deactivation. When the user's balance ratio approached and crossed preset balance ratio and time thresholds, EMS was activated for AWD correction. EMS was deactivated when 50:50 balance was restored.
108
+
109
+ investigate the effect of cognitive demand on posture awareness, AWD occurrence, and type of correction feedback. Our objective was to determine whether our automatic AWD detection and correction system using EMS feedback would be a viable technique for correcting AWD, as opposed to the audio and vibro-tactile feedback types, while standing idly or being engaged in a cognitively demanding task.
110
+
111
+ ### 4.1 Subjects and Apparatus
112
+
113
+ We recruited 36 participants (29 male, 7 female) for the study, with 18 participants for each application (quiet standing and mobile game). All participants were aged 18 years and above, with a mean age of 24.67 years (S.D. = 3.98 years), mean weight of 71.1 kg (S.D. = 10.88 kg), and mean height of 167.3 cm (S.D. = 8.94 cm). All participants were able-bodied and had corrected 20/20 vision. A Wii Balance Board was utilized for monitoring the balance ratio along the medial-lateral axis. A Grove vibration motor with double-sided disposable adhesives was utilized for delivering the vibro-tactile feedback (illustrated in Figure 9 (a)). An off-the-shelf TENS unit (TN SM MF2) and the openEMSstim package [68] were utilized for generating the EMS feedback and for controlling the activation and modulation of the intensity of the electrical stimuli supplied to the muscles, respectively. A 14" Intel i7 laptop was utilized for the study user interface, and an iPhone SE (2nd generation) was employed for the mobile game application. Qualitative data from the pre-questionnaire survey on participants' prior exposure to balance alert devices and EMS, experience with posture problems, and AWD is presented in Table 1. Participants ranked their exposure and experience on a 7-point Likert scale, with 1 meaning never/no experience and 7 meaning frequently/very experienced.
114
+
115
+ Table 1: User rankings on posture awareness, devices, and EMS, on a 7-point Likert scale. QS: quiet standing, MG: mobile game.
116
+
117
+ <table><tr><td>User Experience</td><td>Application</td><td>Mean</td><td>S.D</td></tr><tr><td rowspan="2">Exposure to balance alert devices</td><td>QS</td><td>1.44</td><td>0.70</td></tr><tr><td>MG</td><td>2.11</td><td>1.28</td></tr><tr><td rowspan="2">Exposure to EMS</td><td>QS</td><td>2.56</td><td>1.39</td></tr><tr><td>MG</td><td>1.94</td><td>1.25</td></tr><tr><td rowspan="2">Prolonged standing</td><td>QS</td><td>4.39</td><td>1.87</td></tr><tr><td>MG</td><td>4.11</td><td>1.67</td></tr><tr><td rowspan="2">Experienced AWD</td><td>QS</td><td>4.33</td><td>2.01</td></tr><tr><td>MG</td><td>3.67</td><td>2.08</td></tr></table>
118
+
119
+ ![01963e70-cfea-78a9-980d-701049e5a4eb_5_238_677_545_261_0.jpg](images/01963e70-cfea-78a9-980d-701049e5a4eb_5_238_677_545_261_0.jpg)
120
+
121
+ Figure 7: Participants played PUBG mobile in the mobile game condition. Image shows the lobby area of the game prior to starting.
122
+
123
+ ### 4.2 Experimental Design
124
+
125
+ To investigate the performance and feasibility of our approach, a 2 x 3 mixed-design experiment with 36 participants was conducted. The within-subjects factor was the feedback type (audio, vibro-tactile, and EMS) and the between-subjects factor was the application type (quiet standing (QS) and mobile game (MG)). The performance of our automatic AWD correction using the EMS feedback was compared against self-correction in the audio and vibro-tactile feedback techniques. A quantitative evaluation of the average correction response times and a qualitative evaluation of the perceived usability of our system were conducted across the three feedback types and the two application types. In both applications, participants were required to stand on the WBB without shoes for three 15-minute sessions, one for each of the three modalities listed below. In the quiet standing application, participants were required to stand quietly (illustrated in Figure 8 (A), (B), & (C)), while participants played a mobile version of "PlayerUnknown's Battlegrounds (PUBG)" in the mobile game application (illustrated in Figure 8 (D), (E), & (F)). PUBG mobile is an engaging battle royale game (illustrated in Figure 7) and was selected for this study due to its high engagement level and popularity amongst people aged 15-35, who may be more prone to AWD due to prolonged standing hours at work or mobile gaming sessions. In both applications, participants were required to complete the following three modalities:
126
+
127
+ - Modality 1: Audio alert feedback and self-correction
128
+
129
+ - Modality 2: Vibro-tactile alert feedback and self-correction
130
+
131
+ - Modality 3: EMS feedback and automatic correction
132
+
133
+ In both applications, the order in which the participants were introduced to the modalities was counterbalanced to minimize learning effects. The three modalities and the two applications were the independent variables, and the dependent variables were the average correction response times and user perception parameters such as accuracy of correction feedback, task disruption, comfort, and posture awareness. Each study session lasted approximately 60-75 minutes, and participants were compensated \$15 for their participation.
134
+
135
+ ### 4.3 Research Hypotheses
136
+
137
+ Our study was designed to determine the effects of automatic versus self posture correction on user experience across the two applications and three feedback modalities. As such, we expected to find main or interaction effects of modality and application type on the average correction response times and on user perception of correction feedback accuracy, comfort, and disruption. EMS being a semi-invasive feedback technology, we developed the four research hypotheses below to determine the usability of EMS for AWD correction against the traditional audio and vibro-tactile feedback types.
138
+
139
+ - H1: Average correction response times to EMS feedback will be the fastest among all three modalities.
140
+
141
+ - H2: Correction feedback accuracy in the EMS feedback modality will be greater in comparison to the other modalities.
142
+
143
+ - H3: The EMS feedback modality will be as comfortable as the alternative traditional feedback types, across both application types.
144
+
145
+ - H4: The EMS feedback modality will be the most disruptive of the three modalities.
146
+
147
+ ### 4.4 COVID-19 Considerations
148
+
149
+ Due to the ongoing COVID-19 pandemic, we wanted to ensure the safety of the participants and researchers. Following our institution's guidelines, all individuals were required to wear face masks at all times. Between each user, we sanitized all devices and surfaces that the participants and researchers would be in contact with. We also provided hand sanitizer, cleaning wipes, and latex gloves to reduce the risk of contracting the disease.
150
+
151
+ ### 4.5 Experimental Procedures
152
+
153
+ Before the start of the study session, participants were required to review the consent document and provide their consent for participating in the research. Participants then completed a pre-questionnaire survey on knowledge and experience with balance-related intervention technology, AWD, and EMS. Upon completion of the pre-questionnaire survey, participants were required to complete a validation session in which they performed the set of 10 typical actions on the WBB illustrated in Figure 3, to ensure the AWD detection system with the preset balance threshold (1.25) and time threshold (2.9 seconds) was able to detect the AWD conditions (lean slight right/left, lean extreme right/left) accurately and to mitigate the possibility of false-positive correction feedback. Next, participants were required to stand without shoes on the WBB for calibration. For the vibro-tactile alert modality, Grove vibration motors were placed on each leg with double-sided adhesives, as illustrated in Figure 9 (a). Adhesive EMS electrodes were placed on each leg along the tibialis muscles before the EMS feedback session for correcting AWD, as illustrated in Figure 9 (b). Before the EMS feedback session, participants were required to stand on the WBB and were calibrated for an optimal EMS intensity that affected balance stabilization and corrected AWD posture. Each user's optimal EMS intensity level was manually calibrated by the study moderator only once. Participants were asked to emulate an AWD condition of leaning left or right, and the moderators incremented the EMS intensity on the opposite leg until an involuntary muscular contraction was felt by the user and generated a physiological counter-weight shift response in an attempt to stabilize the balance ratio. This process was repeated for both AWD left and AWD right conditions to deliver an optimal user experience in the EMS feedback session. As EMS is known to produce a haptic effect at low intensities, participants were asked to ignore the haptic effect to ensure the haptic component did not contribute to the automatic AWD correction process in any way. Additionally, during this calibration process, moderators asked participants to respond verbally to the following questions to verify tibialis muscular contraction and user comfort: 1) if and when they initially felt a haptic sensation from the EMS, 2) if and when they felt the EMS intensity generating an involuntary contraction in the leg and/or when they experienced the counter-weight shift force restoring their balance, and 3) if and when they felt any pain or discomfort. For each user, the involuntary muscular contraction effecting AWD correction was visually verified by the moderator and verbally confirmed by the user. The optimal EMS intensity, which generated the counter-weight shift effect to correct AWD and was also comfortable for the user, was recorded for use in the EMS feedback session of the study.
154
+
155
+ ---
156
+
157
+ ¹ https://www.pubg.com/
158
+
159
+ ---
160
+
161
+ ![01963e70-cfea-78a9-980d-701049e5a4eb_6_412_151_972_312_0.jpg](images/01963e70-cfea-78a9-980d-701049e5a4eb_6_412_151_972_312_0.jpg)
162
+
163
+ Figure 8: Evaluation of the effectiveness of our automatic approach across two application types: Quiet Standing (A), (B), (C) and Mobile Game (D), (E), (F). Quiet Standing: (A) AWD right, (B) Balanced, (C) AWD left. Mobile Game: (D) AWD right, (E) Balanced, (F) AWD left.
164
+
165
+ ![01963e70-cfea-78a9-980d-701049e5a4eb_6_298_580_429_316_0.jpg](images/01963e70-cfea-78a9-980d-701049e5a4eb_6_298_580_429_316_0.jpg)
166
+
167
+ Figure 9: Haptic motor unit and EMS electrode placement on the tibialis muscle. (a) Vibro-tactile feedback is delivered to the legs through the haptic motor units placed on each leg. (b) EMS feedback is delivered through EMS electrodes placed on the tibialis muscle of each leg.
168
+
169
+ The EMS intensity calibration steps above were identical in both the quiet standing and the mobile game applications. In the quiet standing application, participants were asked to stand quietly, while in the mobile game application, participants were required to play PUBG. In both applications, participants were required to stand without shoes on the WBB, and their balance ratio was monitored for AWD (illustrated in Figure 8). The study comprised three parts: audio, vibro-tactile, and EMS feedback. Each part was 15 minutes in duration, and all participants were required to finish all three parts to complete the study. Participants were given a 5-minute seated break after each part, during which they were required to remain seated to rest their legs. Participants then completed a survey about their experience after each part.
170
+
171
+ #### 4.5.1 Audio feedback and self-correction:
172
+
173
+ Upon AWD detection based on the balance ratio from the WBB, an audio notification, "Leaning left/right - please correct imbalance", was activated, and the participant was required to self-correct their AWD posture and stabilize their balance until another audio notification, "Stabilized", was presented to them.
174
+
175
+ #### 4.5.2 Vibro-tactile feedback and self-correction:
176
+
177
+ Upon AWD detection based on the balance ratio from the WBB, a vibro-tactile notification in the form of vibration from the haptic motor was activated on the opposite leg, indicating the direction in which the user was required to shift to self-correct their AWD and stabilize their balance ratio. When the participant's balance stabilized, the vibro-tactile notification stopped, indicating a 50:50 balance had been achieved.
178
+
179
+ #### 4.5.3 EMS feedback and Auto-correction:
180
+
181
+ Upon AWD detection, the EMS feedback was activated to apply the recorded EMS intensity to the tibialis muscle of the leg opposite to the AWD lean. This invokes an involuntary muscle contraction to produce a counter-weight shift force in the direction opposite to the AWD lean, stabilizing the balance. Figures 1 (A) and (C) illustrate the AWD right- and left-leaning postures, respectively. Figures 1 (B) and (D) illustrate the automatically corrected posture after EMS has been applied. The EMS is deactivated when balance ratio stabilization has been achieved.
182
+
183
+ ## 5 RESULTS
184
+
185
+ The average number of AWD conditions observed per participant in the quiet standing application was 12.38, 13.05, and 14.11 for the audio, vibro-tactile, and EMS feedback modalities, respectively, and 12.22, 13.83, and 12.66 for the same modalities in the mobile game application. For the quiet standing application, the mean EMS intensity required to correct the AWD condition and stabilize balance posture was 50.55 mA (S.D. = 9.05 mA), while for the mobile game task, the mean EMS intensity was 51.94 mA (S.D. = 8.25 mA). To analyze the performance of our approach, we used repeated-measures 2-factor ANOVAs to determine the influence of modality and application type on each dependent variable; the consolidated results are presented in Tables 2, 3, 4, and 5. For the non-parametric user perception Likert scale data, we utilized the Aligned Rank Transform (ART) tool [110] and performed repeated-measures 2-factor ANOVA tests on the aligned ranks.
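As a reference for the analysis pipeline, the sketch below runs a 2 (application, between) x 3 (modality, within) mixed ANOVA on correction response times with Bonferroni-adjusted post-hoc comparisons. pingouin is a swapped-in analysis library used purely for illustration, the CSV file and column names are hypothetical, and the Aligned Rank Transform step for the Likert data is not reproduced.

```python
# Sketch: mixed ANOVA on correction response times, assuming a recent
# pingouin and a long-format table with columns pid, application,
# modality, rt (all hypothetical names).
import pandas as pd
import pingouin as pg

df = pd.read_csv("correction_times.csv")  # hypothetical data file

# Within-subjects factor: modality; between-subjects factor: application.
aov = pg.mixed_anova(data=df, dv="rt", within="modality",
                     between="application", subject="pid")
print(aov.round(3))

# Bonferroni-adjusted pairwise comparisons across modalities, analogous
# to the post-hoc tests reported in Section 5.1.
posthoc = pg.pairwise_tests(data=df, dv="rt", within="modality",
                            subject="pid", padjust="bonf")
print(posthoc.round(3))
```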
186
+
187
+ ### 5.1 Average Correction Response Times
188
+
189
+ For H1, the main effect of modality type yielded $F(2,68) = 125.16$, $p < 0.001$, indicating a significant difference between the audio ($M = 2.58$, $S.D. = 0.63$), vibro-tactile ($M = 1.8$, $S.D. = 0.45$), and EMS modalities ($M = 1.32$, $S.D. = 0.29$), as illustrated in Figure 10 (a). A post-hoc pairwise comparison with Bonferroni correction conducted on the average correction response times across the three modalities showed that the EMS feedback modality was significantly faster than the audio modality ($t_{34} = -1.262$, $p < 0.001$) and the vibro-tactile feedback modality ($t_{34} = -0.492$, $p < 0.001$). The main effect of application type yielded $F(1,34) = 2.744$, $p > 0.05$, indicating that the effect of application type was not significant between quiet standing ($M = 1.8$, $S.D. = 0.6$) and mobile game ($M = 2$, $S.D. = 0.79$), as illustrated in Figure 10 (b). The interaction effect was significant, $F(2,68) = 5.803$, $p < 0.05$. Significant differences were found in system performance with regard to average correction response times between the feedback modalities, with EMS feedback delivering the fastest correction. As a result, we were able to accept H1.
190
+
191
+ Table 2: 2-Factor ANOVA: Average Correction response times (ACRT). M: Modality, A: Application.
192
+
193
+ <table><tr><td>Source</td><td>ACRT</td><td>p</td></tr><tr><td>M</td><td>$F(2,68) = 125.16$</td><td>$< 0.001$*</td></tr><tr><td>A</td><td>$F(1,34) = 2.744$</td><td>0.107</td></tr><tr><td>M X A</td><td>$F(2,68) = 5.803$</td><td>0.016*</td></tr></table>
194
+
195
+ Note: * indicates significant difference, $p < 0.05$.
196
+
197
+ ![01963e70-cfea-78a9-980d-701049e5a4eb_7_184_440_651_351_0.jpg](images/01963e70-cfea-78a9-980d-701049e5a4eb_7_184_440_651_351_0.jpg)
198
+
199
+ Figure 10: Average correction response times (ACRT) across (a) Modality and (b) Application. Error bars: 95% CI.
200
+
201
+ ### 5.2 User Perception of Correction Feedback Accuracy
202
+
203
+ For H2, the main effect of modality type yielded $F(2,68) = 4.113$, $p < 0.05$, indicating a significant difference between the audio ($M = 5.83$, $S.D. = 1.03$), vibro-tactile ($M = 6.44$, $S.D. = 0.69$), and EMS modalities ($M = 6.67$, $S.D. = 0.53$), as illustrated in Figure 11 (a). A post-hoc pairwise comparison with Bonferroni correction conducted on the participants' rankings of correction feedback accuracy across the three modalities showed significant differences between the audio and vibro-tactile ($t_{34} = -0.611$, $p < 0.001$) and the audio and EMS feedback types ($t_{34} = -0.833$, $p < 0.001$), but no evidence of significant differences between the vibro-tactile and EMS feedback. The participants perceived EMS feedback to be more accurate than the audio, but not the vibro-tactile, feedback, and hence we were not able to accept H2. The main effect of application type yielded $F(1,34) = 0.052$, $p > 0.05$, indicating that the effect of application type was not significant between quiet standing ($M = 6.3$, $S.D. = 0.82$) and mobile game ($M = 6.33$, $S.D. = 0.81$), as illustrated in Figure 11 (b). The interaction effect was not significant, $F(2,68) = 2.988$, $p > 0.05$.
204
+
205
+ ### 5.3 User Perception of Comfort
206
+
207
+ For H3, the main effect of modality type yielded $F(2,68) = 1.376$, $p > 0.05$, indicating no significant difference between the audio ($M = 6.3$, $S.D. = 0.98$), vibro-tactile ($M = 6.36$, $S.D. = 0.96$), and EMS modalities ($M = 5.91$, $S.D. = 1.23$), as illustrated in Figure 12 (a). The main effect of application type yielded $F(1,34) = 1.364$, $p > 0.05$, indicating that the effect of application type was not significant between quiet standing ($M = 6.43$, $S.D. = 1.02$) and mobile game ($M = 6$, $S.D. = 1.08$), as illustrated in Figure 12 (b). The interaction effect was not significant, $F(2,68) = 2.027$, $p > 0.05$. As no significant differences were found in the main effects of modality or application type, neither modality nor application had any influence on user comfort. As a result, we accept H3.
208
+
209
+ Table 3: 2-Factor ANOVA: User Perception-Correction feedback accuracy (CFA). M: Modality, A: Application.
210
+
211
+ <table><tr><td>Source</td><td>CFA</td><td>p</td></tr><tr><td>M</td><td>$F(2,68) = 4.113$</td><td>0.021*</td></tr><tr><td>A</td><td>$F(1,34) = 0.052$</td><td>0.82</td></tr><tr><td>M X A</td><td>$F(2,68) = 2.988$</td><td>0.057</td></tr></table>
212
+
213
+ Note: * indicates significant difference, $p < 0.05$.
214
+
215
+ ![01963e70-cfea-78a9-980d-701049e5a4eb_7_999_446_570_345_0.jpg](images/01963e70-cfea-78a9-980d-701049e5a4eb_7_999_446_570_345_0.jpg)
216
+
217
+ Figure 11: User perception of correction feedback accuracy (CFA) across (a) Modality and (b) Application. Error bars: 95% CI.
218
+
219
+ ### 5.4 User Perception of Task Disruption
220
+
221
+ For H4, the main effect of modality type yielded $F(2,68) = 0.036$, $p > 0.05$, indicating no significant difference between the audio ($M = 2$, $S.D. = 1.37$), vibro-tactile ($M = 2.11$, $S.D. = 1.30$), and EMS modalities ($M = 2.28$, $S.D. = 1.65$), as illustrated in Figure 13 (a). The main effect of application type yielded $F(1,34) = 0.280$, $p > 0.05$, indicating that the effect of application type was not significant between quiet standing ($M = 1.7$, $S.D. = 1.05$) and mobile game ($M = 2.51$, $S.D. = 1.67$), as illustrated in Figure 13 (b). The interaction effect was not significant, $F(2,68) = 1.427$, $p > 0.05$. As no significant differences were found in the main effects of modality or application type, neither modality nor application had any influence on task disruption. As a result, we reject H4.
222
+
223
+ ### 5.5 User Perception and Preference
224
+
225
+ Mean rankings for user perception of correction feedback accuracy, posture awareness, comfort, and task disruption are shown in Figure 14. Participants ranked their posture awareness on a 7-point scale where 1 means not at all aware and 7 means completely aware. Participants' rankings indicated higher posture awareness ($M = 5.46$, $S.D. = 1.61$) in the quiet standing task, while posture awareness was significantly reduced in the mobile game condition ($M = 2.33$, $S.D. = 1.27$). Additionally, when participants were asked about their preferred modality for correcting AWD, 55.56% of the study population reported that EMS feedback was their preferred correction feedback technique, while 36.11% preferred the vibro-tactile feedback and 8.33% preferred the audio feedback. However, 29 out of 36 participants reported that they would be willing to purchase EMS feedback for AWD posture correction if it were a commercially available product. Participants also ranked their shared responsibility with auto-correction utilizing EMS on a 7-point scale where 1 means not at all and 7 means completely. The mean shared responsibility exhibited by the participants was 2.00 ($S.D. = 1.08$) in the quiet standing task and 1.72 ($S.D. = 0.75$) in the mobile game condition. Participants ranked EMS feedback as a highly interesting concept for automatic AWD correction, with a mean ranking of 6.33 ($S.D. = 1.39$) on a 7-point Likert scale.
226
+
227
+ Table 4: 2-Factor ANOVA: User perception-Comfort. M: Modality, A: Application.
228
+
229
+ <table><tr><td>Source</td><td>Comfort</td><td>p</td></tr><tr><td>M</td><td>$F(2,68) = 1.376$</td><td>0.259</td></tr><tr><td>A</td><td>$F(1,34) = 1.364$</td><td>0.251</td></tr><tr><td>M X A</td><td>$F(2,68) = 2.027$</td><td>0.14</td></tr></table>
230
+
231
+ Note: * indicates significant difference $p < 0.05$.
232
+
233
+ ![01963e70-cfea-78a9-980d-701049e5a4eb_8_187_444_644_345_0.jpg](images/01963e70-cfea-78a9-980d-701049e5a4eb_8_187_444_644_345_0.jpg)
234
+
235
+ Figure 12: User perception of comfort across (a) Modality and (b) Application. Error bars: 95% CI.
236
+
237
+ ## 6 DISCUSSION
238
+
239
+ Given the recent developments in EMS feedback for accelerating preemptive reflexes [51, 52, 84] and correcting slouching posture [53], we were interested in understanding whether EMS feedback could be utilized for correcting AWD. In comparison to the alternative techniques, we find several benefits to automatic correction using EMS. Our approach achieved significantly faster correction at high accuracy while delivering an equally comfortable user experience across tasks with different levels of engagement and posture awareness. Although research on postural control, sway analysis, and AWD alert systems has been conducted, the correction responsiveness and user perception of such systems have not been measured or reported. Therefore, our study primarily focuses on evaluating the performance and user perception of our EMS feedback based automatic AWD detection and correction technique against traditional audio and vibro-tactile feedback mechanisms.
240
+
241
+ Correction response times were measured from the time correction feedback is activated until balance has been restored. Average correction response times were significantly faster for the EMS feedback modality than for the audio and vibro-tactile modalities. In both application types, the EMS modality delivered faster AWD corrections, leading to faster stabilization and restoration of balance, as illustrated in Figure 15. This was also reflected in participants' comments on EMS: "the fastest feedback and made me correct the best", "liked the fast response", and "Perfect response, subtle but noticeable". The faster correction response times to EMS feedback are likely due to the automatic stabilization and balance restoration, which does not require the user to process audio or vibro-tactile feedback before engaging in a self-assessment and self-correction process. That process, required by the audio and vibro-tactile feedback mechanisms, places an additional cognitive load on users while they are engaged in their task and relies entirely on their willingness or intent to self-correct their posture. One participant's comment attests to this: "Audio-took me time to process the feedback command and then correct, Vibration- got my attention, EMS-pulling quickly didn't need my attention". In contrast, EMS feedback does not require the participant's attention during the correction process, allowing them to continue devoting to the primary task the cognitive and attentional resources that would otherwise be required for auditory, visual, or somatosensory processing for postural control. Results also indicate that application type had no effect on correction response times, suggesting that EMS can deliver fast correction responses across a range of applications with varying levels of engagement and posture awareness. By relieving the cognitive demand placed on the visual, vestibular, and proprioceptive systems, EMS is especially promising as a smart intervention technique for athletes in post-operative rehabilitation: it can prevent AWD conditions that impede recovery, mitigate the risk of re-injury, and support rebuilding strength and motion and restoring normal function, thereby promoting proper recovery and a safer return-to-sport.
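+
+ To make this measurement concrete, the sketch below derives a correction response time from a logged balance-ratio trace under the definition above (feedback onset until balance is restored). The log format, and the assumption that "restored" means the ratio has returned below the 1.25 detection threshold, are ours, not the authors':
+
+ ```python
+ # Sketch: correction response time from a logged balance-ratio trace.
+ # samples: iterable of (timestamp_s, balance_ratio) pairs, where the
+ # ratio is heavier-leg weight / lighter-leg weight (always >= 1.0).
+ def correction_response_time(samples, onset_t, threshold=1.25):
+     """Seconds from feedback onset until the ratio first returns below
+     the AWD threshold; None if balance was never restored in the log."""
+     for t, ratio in samples:
+         if t >= onset_t and ratio < threshold:
+             return t - onset_t
+     return None
+
+ # Hypothetical trace: correction feedback activated at t = 10.0 s.
+ trace = [(9.9, 1.40), (10.2, 1.38), (10.6, 1.20), (11.0, 1.05)]
+ print(correction_response_time(trace, onset_t=10.0))  # ~0.6 s
+ ```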
242
+
243
+ Table 5: 2-Factor ANOVA: User perception of task disruption (TD). M: Modality, A: Application. Note: * indicates significant difference $p < 0.05$.
244
+
245
+ | Source | TD | p |
+ | --- | --- | --- |
+ | M | $F(2,68) = 0.036$ | 0.965 |
+ | A | $F(1,34) = 0.280$ | 0.6 |
+ | M × A | | 0.247 |
246
+
247
+ ![01963e70-cfea-78a9-980d-701049e5a4eb_8_989_444_597_354_0.jpg](images/01963e70-cfea-78a9-980d-701049e5a4eb_8_989_444_597_354_0.jpg)
248
+
249
+ Figure 13: User perception of task disruption (TD) across (a) Modality and (b) Application. Error bars: 95% CI.
250
+
251
+ Participants' ranking of their perceived accuracy of correction feedback indicated that EMS feedback was more accurate than the audio, and equally accurate in comparison to the vibro-tactile feedback. Some of the participants' comments reflected this fact: "Audio was most distracting", "EMS was a better form of feedback, was strong and detected even the slightest imbalance", "EMS gave me best feedback, I couldn't hear the audio feedback over the game", "EMS most accurate and best for correction, but could be uncomfortable for some people". The participants perceived accuracy of EMS and vibro-tactile feedback equally well and this may have been due to the nature of explicit somatosensory confirmation provided by these two feedback types during delivery and termination of correction feedback when AWD is detected and corrected, respectively.
252
+
253
+ Participants' ranking their perceived level of comfort and task disruption, indicated neither modality nor application had any influence on the user comfort or task disruption. Although, both EMS and vibro-tactile feedback types are non-invasive in nature, EMS feedback has been known to produce a stronger somatosensory experience due to its ability to produce an involuntary muscular contraction along with a vibro-tactile effect. However, participants perceived all three modalities to be equally comfortable and equally disruptive. This could be due to careful calibration for an optimal EMS intensity that provides the user with a comfortable experience while generating a physiological response to effect a counter-weight shift. This user perception of comfort and task disruption illustrates participants' acceptance of EMS feedback as a viable alternative to the traditional feedback mechanisms with the additional advantage of automatic posture correction freeing up cognitive resources to focus on more important tasks. Participants comments show that EMS "took time getting used to. It is like an Assisted PUSH, very useful when physical awareness is lacking" and "The pulling effect surprised me a bit but it was fine after". This acceptance shows EMS feedback's potential to be developed as a commercial product and allow EMS-based smart intervention wearable technology to be available for everyday use especially by younger adults engaging in the use of mobile devices for gaming, social media consumption while standing, and older adults engaging in work related activities in industrial, manufacturing or customer service sectors that require long standing hours. This fact was also supported by the participants' willingness (80.55% of healthy study population) to purchase EMS based wearable AWD intervention technology if it were available as a commercial product.
254
+
255
+ ![01963e70-cfea-78a9-980d-701049e5a4eb_9_447_152_901_288_0.jpg](images/01963e70-cfea-78a9-980d-701049e5a4eb_9_447_152_901_288_0.jpg)
256
+
257
+ Figure 14: User perception mean rankings for correction feedback accuracy, posture awareness, comfort, and task disruption across all modality and application types. Likert scale: 1 means not at all, 7 means completely. QS: Quiet Standing, MG: Mobile Gaming. Error bars: 95% CI.
258
+
259
+ ![01963e70-cfea-78a9-980d-701049e5a4eb_9_268_552_490_253_0.jpg](images/01963e70-cfea-78a9-980d-701049e5a4eb_9_268_552_490_253_0.jpg)
260
+
261
+ Figure 15: Average correction response times across all modality and application types. Error bars: 95% CI.
262
+
263
+ It was also interesting to note that the EMS intensity required to effect a counter-weight shift by stimulating the tibialis muscles was higher than in another study on automatic detection and correction of slouching [53], where slouched posture was corrected by stimulating the trapezius muscles (mean EMS intensity: tibialis = 51.25 mA, rhomboid = 43.47 mA). This may be because the rhomboid muscle is physiologically more accessible than the tibialis, which is regarded as a deeper muscle group, thereby necessitating a higher EMS intensity to recruit the motor neurons, cause an involuntary muscular contraction, and generate a physiological response that produces the counter-weight shift with the desired magnitude and direction. Participants also reported sharing responsibility in aiding the correction process during the EMS feedback session. This illustrates participants' adaptability to new technology and demonstrates a positive learning effect produced by the EMS feedback towards better postural control. Further, it demonstrates that the somatosensory component of EMS feedback encouraged participants to get involved in the correction process. Finally, one participant commented "It's like trainer wheels on a bicycle", while others commented that EMS "Felt amazing", "Auto-correction is good", was "the fastest feedback and made me correct the best", and that "correction happens without thinking about it".
264
+
265
+ Finally, our system could be particularly beneficial in preventive health care and in developing rehabilitation protocols for recovery after knee/ankle surgery, as it would allow healthcare specialists to create customized recovery protocols for different individuals by varying the balance and time thresholds and the EMS intensity parameters as prescribed. This would enable precise control of the weight distribution on the operated leg at different stages of recovery, maximizing the rebuilding of strength and mobility while minimizing the time to return-to-sport for athletes or to return-to-normal function for non-athlete patients. Additionally, our EMS feedback, when integrated with load sensors and IMUs embedded in shoes, could be utilized to detect AWD and dangerous tilt angles for automatic fall prevention in older adults and PD patients, who are at higher risk of injury from falls caused by loss of balance. Therefore, our autonomous AWD detection and correction system could be a useful alternative or addition to existing environment, health, and safety (EHS) guidelines for mitigating the risk of workplace injury, improving employee health, and supporting rehabilitation and preventive health care.
266
+
267
+ ## 7 LIMITATIONS AND FURTHER WORK
268
+
269
+ One prominent limitation is the need to manually place electrodes on the body. To resolve this, we plan to integrate the electrodes into wearable clothing and to devise an auto-calibration system that can be customized to each individual's comfort. Another limitation is that, although our system detects imbalance instantly, we utilized a time threshold of 2.9 s to discriminate AWD conditions from other actions. This threshold could be shortened if our AWD detection system were integrated with IMU sensors to classify non-AWD actions. Our future work includes the development of a mobile application that allows users to customize the balance ratio, time thresholds, and EMS intensity. We also plan to gather data on how people with impaired balance fall compared to healthy individuals and to implement an automatic fall prediction and prevention system utilizing EMS.
270
+
271
+ ## 8 CONCLUSION
272
+
273
+ We have demonstrated that our automatic EMS-based physiological feedback loop is a viable approach to AWD detection and correction, stabilizing balance through a counter-weight shift. Our auto-correction system utilizing EMS feedback demonstrated significantly faster posture correction response times than self-correction with audio and vibro-tactile feedback. Participants perceived EMS feedback to be highly accurate and equally comfortable, and found it no more disruptive than the alternative techniques in both the quiet standing and mobile game applications, even though posture awareness differed significantly across the application types. Therefore, automatic AWD detection and correction utilizing EMS shows promising results and can be developed as an alternative method for AWD correction.
274
+
275
+ ## ACKNOWLEDGMENTS
276
+
277
+ This work is supported in part by NSF Award IIS-1917728. We also thank the anonymous reviewers for their insightful feedback.
278
+
279
+ ## REFERENCES
+
+ [1] V. Agostini, E. Chiaramello, C. Bredariol, C. Cavallini, and M. Knaflitz. Postural control after traumatic brain injury in patients with neuro-ophthalmic deficits. Gait & posture, 34(2):248-253, 2011.
280
+
281
+ [2] V. Agostini, A. Sbrollini, C. Cavallini, A. Busso, G. Pignata, and M. Knaflitz. The role of central vision in posture: Postural sway adaptations in stargardt patients. Gait & posture, 43:233-238, 2016.
282
+
283
+ [3] G. Andersson, J. Hagman, R. Talianzadeh, A. Svedberg, and H. C. Larsen. Effect of cognitive load on postural control. Brain research bulletin, 58(1):135-139, 2002.
284
+
285
+ [4] G. Andersson, L. Yardley, and L. Luxon. A dual-task study of interference between mental activity and control of balance. The American journal of otology, 19(5):632-637, 1998.
286
+
287
+ [5] L. C. Anker, V. Weerdesteyn, I. J. van Nes, B. Nienhuis, H. Straatman, and A. C. Geurts. The relation between postural stability and weight distribution in healthy subjects. Gait & posture, 27(3):471-477, 2008.
288
+
289
+ [6] R. Balasubramaniam, M. A. Riley, and M. Turvey. Specificity of postural sway to the demands of a precision task. Gait & posture, 11(1):12-24, 2000.
290
+
291
+ [7] C. Baston, M. Mancini, L. Rocchi, and F. Horak. Effects of levodopa on postural strategies in parkinson's disease. Gait & posture, 46:26-29, 2016.
292
+
293
+ [8] C. Bauer, I. Gröger, R. Rupprecht, and K. G. Gaßmann. Intrasession reliability of force platform parameters in community-dwelling older adults. Archives of physical medicine and rehabilitation, 89(10):1977-1982, 2008.
294
+
295
+ [9] F. Benvenuti, R. Mecacci, I. Gineprari, S. Bandinelli, E. Benvenuti, L. Ferrucci, A. Baroni, M. Rabuffetti, M. Hallett, J. M. Dambrosia, et al. Kinematic characteristics of standing disequilibrium: reliability and validity of a posturographic protocol. Archives of physical medicine and rehabilitation, 80(3):278-287, 1999.
296
+
297
+ [10] E. Bisson, B. Contant, H. Sveistrup, and Y. Lajoie. Functional balance and dual-task reaction times in older adults are improved by virtual reality and biofeedback training. Cyberpsychology & behavior, 10(1):16-23, 2007.
298
+
299
+ [11] G. Bonora, M. Mancini, I. Carpinella, L. Chiari, M. Ferrarin, J. G. Nutt, and F. B. Horak. Investigation of anticipatory postural adjustments during one-leg stance using inertial sensors: evidence from subjects with parkinsonism. Frontiers in neurology, 8:361, 2017.
300
+
301
+ [12] M. Bortole, A. Venkatakrishnan, F. Zhu, J. C. Moreno, G. E. Francisco, J. L. Pons, and J. L. Contreras-Vidal. The h2 robotic exoskeleton for gait rehabilitation after stroke: early findings from a clinical study. Journal of neuroengineering and rehabilitation, 12(1):54, 2015.
302
+
303
+ [13] Y. Chen, Z. Yang, R. Abbou, P. Lopes, B. Y. Zhao, and H. Zheng. User authentication via electrical muscle stimulation. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1-15, 2021.
304
+
305
+ [14] L. Chiari, M. Dozza, A. Cappello, F. B. Horak, V. Macellari, and D. Giansanti. Audio-biofeedback for balance improvement: an accelerometry-based system. IEEE transactions on biomedical engineering, 52(12):2108-2111, 2005.
306
+
307
+ [15] M.-K. Cho, J.-H. Kim, Y. Chung, and S. Hwang. Treadmill gait training combined with functional electrical stimulation on hip abductor and ankle dorsiflexor muscles for chronic hemiparesis. Gait & posture, 42(1):73-78, 2015.
308
+
309
+ [16] R. A. Clark, A. L. Bryant, Y. Pua, P. McCrory, K. Bennell, and M. Hunt. Validity and reliability of the nintendo wii balance board for assessment of standing balance. Gait & posture, 31(3):307-310, 2010.
310
+
311
+ [17] R. A. Clark, Y.-H. Pua, K. Fortin, C. Ritchie, K. E. Webster, L. Denehy, and A. L. Bryant. Validity of the microsoft kinect for assessment of postural control. Gait & posture, 36(3):372-377, 2012.
312
+
313
+ [18] A. Colley, A. Leinonen, M.-T. Forsman, and J. Häkkilä. Ems painter: Co-creating visual art using electrical muscle stimulation. In Proceedings of the Twelfth International Conference on Tangible, Embedded, and Embodied Interaction, pp. 266-270, 2018.
314
+
315
+ [19] F. Daiber, F. Kosmalla, F. Wiehr, and A. Krüger. Footstriker: A wearable ems-based foot strike assistant for running. In Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces, pp. 421-424. ACM, 2017.
318
+
319
+ [20] M. A. Davis. Where the united states spends its spine dollars: expenditures on different ambulatory services for the management of back and neck conditions. Spine, 37(19):1693, 2012.
320
+
321
+ [21] C. De Marchis, T. S. Monteiro, C. Simon-Martinez, S. Conforto, and A. Gharabaghi. Multi-contact functional electrical stimulation for hand opening: electrophysiologically driven identification of the optimal stimulation site. Journal of neuroengineering and rehabilitation, 13(1):22, 2016.
322
+
323
+ [22] R. Deyo. Back pain patient outcomes assessment team (boat). US Department of Health & Human Services-Agency of Healthcare Research, 1994.
324
+
325
+ [23] T. Dijkstra, G. Schöner, and C. Gielen. Temporal stability of the action-perception cycle for postural control in a moving visual environment. Experimental Brain Research, 97(3):477-486, 1994.
326
+
327
+ [24] C. Doherty, L. Zhao, J. Ryan, Y. Komaba, A. Inomata, and B. Caulfield. Quantification of postural control deficits in patients with recent concussion: an inertial-sensor based approach. Clinical biomechanics, 42:79-84, 2017.
328
+
329
+ [25] R. J. Doyle, E. T. Hsiao-Wecksler, B. G. Ragan, and K. S. Rosengren. Generalizability of center of pressure measures of quiet standing. Gait & posture, 25(2):166-171, 2007.
330
+
331
+ [26] T. L. Doyle, R. U. Newton, and A. F. Burnett. Reliability of traditional and fractal dimension measures of quiet stance center of pressure in young, healthy people. Archives of physical medicine and rehabilitation, 86(10):2034-2040, 2005.
332
+
333
+ [27] M. Dozza, L. Chiari, and F. B. Horak. Audio-biofeedback improves balance in patients with bilateral vestibular loss. Archives of physical medicine and rehabilitation, 86(7):1401-1403, 2005.
334
+
335
+ [28] M. Dozza, F. B. Horak, and L. Chiari. Auditory biofeedback substitutes for loss of sensory information in maintaining stance. Experimental brain research, 178(1):37-48, 2007.
336
+
337
+ [29] F. Farbiz, Z. H. Yu, C. Manders, and W. Ahmad. An electrical muscle stimulation haptic feedback for mixed reality tennis game. In ACM SIGGRAPH 2007 posters, p. 140. ACM, 2007.
338
+
339
+ [30] S. Fioretti, M. Guidi, L. Ladislao, and G. Ghetti. Analysis and reliability of posturographic parameters in parkinson patients at an early stage. In The 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 1, pp. 651-654. IEEE, 2004.
340
+
341
+ [31] A. Frenklach, S. Louie, M. M. Koop, and H. Bronte-Stewart. Excessive postural sway and the risk of falls at different stages of parkinson's disease. Movement Disorders, 24(3):377-385, 2009.
342
+
343
+ [32] T. Friden, R. Zätterström, A. Lindstrand, and U. Moritz. A stabilometric technique for evaluation of lower limb instabilities. The American journal of sports medicine, 17(1):118-122, 1989.
344
+
345
+ [33] M. Gandolfi, C. Geroin, E. Dimitrova, P. Boldrini, A. Waldner, S. Bonadiman, A. Picelli, S. Regazzo, E. Stirbu, D. Primon, et al. Virtual reality telerehabilitation for postural instability in parkinson's disease: a multicenter, single-blind, randomized, controlled trial. BioMed research international, 2017, 2017.
346
+
347
+ [34] J.-A. Gil-Gómez, R. Lloréns, M. Alcañiz, and C. Colomer. Effectiveness of a wii balance board-based system (ebavir) for balance rehabilitation: a pilot randomized clinical trial in patients with acquired brain injury. Journal of neuroengineering and rehabilitation, 8(1):1-10, 2011.
348
+
349
+ [35] P. A. Goldie, T. Bach, and O. Evans. Force platform measures for evaluating postural control: reliability and validity. Archives of physical medicine and rehabilitation, 70(7):510-517, 1989.
350
+
351
+ [36] S. T. Grafton, A. B. Ralston, and J. D. Ralston. Monitoring of postural sway with a head-mounted wearable device: effects of gender, participant state, and concussion. Medical Devices (Auckland, NZ), 12:151, 2019.
352
+
353
+ [37] G. S. Grewal, R. Sayeed, M. Schwenk, M. Bharara, R. Menzies, T. K. Talal, D. G. Armstrong, and B. Najafi. Balance rehabilitation: promoting the role of virtual reality in patients with diabetic peripheral neuropathy. Journal of the American Podiatric Medical Association, 103(6):498-507, 2013.
354
+
355
+ [38] G. S. Grewal, M. Schwenk, J. Lee-Eng, S. Parvaneh, M. Bharara, R. A. Menzies, T. K. Talal, D. G. Armstrong, and B. Najafi. Sensor-based interactive balance training with visual joint movement feedback for improving postural stability in diabetics with peripheral neuropathy: a randomized controlled trial. Gerontology, 61(6):567-574, 2015.
358
+
359
+ [39] Z. Halická, J. Lobotková, K. Bučková, and F. Hlavačka. Effectiveness of different visual biofeedback signals for human balance improvement. Gait & posture, 39(1):410-414, 2014.
360
+
361
+ [40] S. Hanagata and Y. Kakehi. Paralogue: A remote conversation system using a hand avatar which postures are controlled with electrical muscle stimulation. In Proceedings of the 9th Augmented Human International Conference, pp. 1-3, 2018.
362
+
363
+ [41] N. Hasegawa, K. Takeda, M. Mancini, L. A. King, F. B. Horak, and T. Asaka. Differential effects of visual versus auditory biofeedback training for voluntary postural sway. PLoS one, 15(12):e0244583, 2020.
364
+
365
+ [42] M. Hassib, M. Pfeiffer, S. Schneegass, M. Rohs, and F. Alt. Emotion actuator: Embodied emotional feedback through electroencephalography and electrical muscle stimulation. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 6133-6146, 2017.
366
+
367
+ [43] S. M. Henry, J. Fung, and F. B. Horak. Effect of stance width on multidirectional postural responses. Journal of neurophysiology, 85(2):559-570, 2001.
368
+
369
+ [44] F. B. Horak and F. Hlavacka. Somatosensory loss increases vestibulospinal sensitivity. Journal of neurophysiology, 86(2):575-585, 2001.
370
+
371
+ [45] P. S. Huang, C. J. Harris, and M. S. Nixon. Recognising humans by gait via parametric canonical space. Artificial Intelligence in Engineering, 13(4):359-366, 1999.
372
+
373
+ [46] H. Ihara, M. Takayama, and T. Fukumoto. Postural control capability of acl-deficient knee after sudden tilting. Gait & posture, 28(3):478-482, 2008.
374
+
375
+ [47] H. Ismail. Fall prediction by analysing gait and postural sway from videos. In 12th IEEE Conference on Automatic Face and Gesture Recognition (FG2017), 2017.
376
+
377
+ [48] G.-B. Jarnlo. Functional balance tests related to falls among community-dwelling elderly. European Journal of Geriatrics, 1(5):7-7, 2003.
378
+
379
+ [49] N. S. M. Kamil and S. Z. M. Dawal. Effect of postural angle on back muscle activities in aging female workers performing computer tasks. Journal of physical therapy science, 27(6):1967-1970, 2015.
380
+
381
+ [50] A. Karlsson and H. Lanshammar. Analysis of postural sway strategies using an inverted pendulum model and force plate data. Gait & Posture, 5(3):198-203, 1997.
382
+
383
+ [51] S. Kasahara, J. Nishida, and P. Lopes. Preemptive action: Accelerating human reaction using electrical muscle stimulation without compromising agency. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-15, 2019.
384
+
385
+ [52] S. Kasahara, K. Takada, J. Nishida, K. Shibata, S. Shimojo, and P. Lopes. Preserving agency during electrical muscle stimulation training speeds up reaction time directly after removing ems. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1-9, 2021.
386
+
387
+ [53] R. K. Kattoju, C. R. Pittman, J. LaViola, et al. Automatic slouching detection and correction utilizing electrical muscle stimulation. In Graphics Interface 2021, 2020.
388
+
389
+ [54] B. J. Keeney, D. Fulton-Kehoe, J. A. Turner, T. M. Wickizer, K. C. G. Chan, and G. M. Franklin. Early predictors of lumbar spine surgery after occupational back injury: results from a prospective study of workers in washington state. Spine, 38(11):953, 2013.
390
+
391
+ [55] B. Kerr, S. M. Condon, and L. A. McDonald. Cognitive spatial processing and the regulation of posture. Journal of Experimental Psychology: Human Perception and Performance, 11(5):617, 1985.
392
+
393
+ [56] E. Keshner and R. Kenyon. The influence of an immersive virtual environment on the segmental organization of postural stabilizing responses. Journal of Vestibular Research, 10(4, 5):207-219, 2000.
394
+
395
+ [57] I.-H. Khoo, P. Marayong, V. Krishnan, M. Balagtas, O. Rojas, and K. Leyba. Real-time biofeedback device for gait rehabilitation of post-stroke patients. Biomedical engineering letters, 7(4):287-298, 2017.
396
+
397
+ [58] J.-H. Kim, Y. Chung, Y. Kim, and S. Hwang. Functional electrical stimulation applied to gluteus medius and tibialis anterior corresponding gait cycle for stroke. Gait & posture, 36(1):65-67, 2012.
400
+
401
+ [59] M. Kono, Y. Ishiguro, T. Miyaki, and J. Rekimoto. Design and study of a multi-channel electrical muscle stimulation toolkit for human augmentation. In Proceedings of the 9th Augmented Human International Conference, pp. 1-8, 2018.
404
+
405
+ [60] M. Kono, T. Miyaki, and J. Rekimoto. In-pulse: inducing fear and pain in virtual experiences. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, p. 40. ACM, 2018.
406
+
407
+ [61] F. Koslucher, M. G. Wade, B. Nelson, K. Lim, F.-C. Chen, and T. A. Stoffregen. Nintendo wii balance board is sensitive to effects of visual tasks on standing sway in healthy elderly adults. Gait & posture, 36(3):605-608, 2012.
408
+
409
+ [62] Y. Lajoie, N. Teasdale, C. Bard, and M. Fleury. Attentional demands for static and dynamic equilibrium. Experimental brain research, 97(1):139-144, 1993.
410
+
411
+ [63] K. Le Clair and C. Riach. Postural stability measures: what to measure and for how long. Clinical biomechanics, 11(3):176-178, 1996.
412
+
413
+ [64] J. Leanderson, E. Eriksson, C. Nilsson, and A. Wykman. Proprioception in classical ballet dancers: a prospective study of the influence of an ankle sprain on proprioception in the ankle joint. The American journal of sports medicine, 24(3):370-374, 1996.
414
+
415
+ [65] J. Leanderson, A. Wykman, and E. Eriksson. Ankle sprain and postural sway in basketball players. Knee Surgery, Sports Traumatology, Arthroscopy, 1(3):203-205, 1993.
416
+
417
+ [66] D. Lin, H. Seol, M. A. Nussbaum, and M. L. Madigan. Reliability of cop-based postural sway measures and age-related differences. Gait & posture, 28(2):337-342, 2008.
418
+
419
+ [67] P. Lopes. Interacting with wearable computers by means of functional electrical muscle stimulation. In The First Biannual Neuroadaptive Technology Conference, p. 118, 2017.
420
+
421
+ [68] P. Lopes and P. Baudisch. Demonstrating interactive systems based on electrical muscle stimulation. In Adjunct Publication of the 30th Annual ACM Symposium on User Interface Software and Technology, pp. 47-49, 2017.
422
+
423
+ [69] P. Lopes and P. Baudisch. Immense power in a tiny package: Wearables based on electrical muscle stimulation. IEEE Pervasive Computing, 16(3):12-16, 2017.
424
+
425
+ [70] P. Lopes and P. Baudisch. Interactive systems based on electrical muscle stimulation. Computer, 50(10):28-35, 2017.
426
+
427
+ [71] P. Lopes, A. Ion, and P. Baudisch. Impacto: Simulating physical impact by combining tactile stimulation with electrical muscle stimulation. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology, pp. 11-19, 2015.
428
+
429
+ [72] P. Lopes, A. Ion, and R. Kovacs. Using your own muscles: realistic physical experiences in vr. XRDS: Crossroads, The ACM Magazine for Students, 22(1):30-35, 2015.
430
+
431
+ [73] P. Lopes, A. Ion, W. Mueller, D. Hoffmann, P. Jonell, and P. Baudisch. Proprioceptive interaction. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 939-948, 2015.
432
+
433
+ [74] P. Lopes, P. Jonell, and P. Baudisch. Affordance++ allowing objects to communicate dynamic use. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 2515-2524, 2015.
434
+
435
+ [75] P. Lopes, S. You, L.-P. Cheng, S. Marwecki, and P. Baudisch. Providing haptics to walls & heavy objects in virtual reality by means of electrical muscle stimulation. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 1471-1482, 2017.
436
+
437
+ [76] P. Lopes, S. You, A. Ion, and P. Baudisch. Adding force feedback to mixed reality experiences and games using electrical muscle stimulation. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-13, 2018.
438
+
439
+ [77] P. Lopes, D. Yüksel, F. Guimbretière, and P. Baudisch. Muscle-plotter: An interactive system based on electrical muscle stimulation that produces spatial output. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, pp. 207-217, 2016.
440
+
441
+ [78] M. Lysholm, T. Ledin, L. Ödkvist, and L. Good. Postural control-a comparison between patients with chronic anterior cruciate ligament insufficiency and healthy individuals. Scandinavian journal of medicine & science in sports, 8(6):432-438, 1998.
444
+
445
+ [79] E. Maranesi, G. Ghetti, R. A. Rabini, and S. Fioretti. Functional reach test: movement strategies in diabetic subjects. Gait & posture, 39(1):501-505, 2014.
448
+
449
+ [80] E. A. Maylor and A. M. Wing. Age differences in postural stability are increased by additional cognitive demands. The Journals of Gerontology Series B: Psychological Sciences and Social Sciences, 51(3):P143-P154, 1996.
450
+
451
+ [81] M. Mazaheri, P. Coenen, M. Parnianpour, H. Kiers, and J. H. van Dieën. Low back pain and postural sway during quiet standing with and without sensory manipulation: a systematic review. Gait & posture, 37(1):12-22, 2013.
452
+
453
+ [82] I. Melzer, N. Benjuya, and J. Kaplanski. Age-related changes of postural control: effect of cognitive tasks. Gerontology, 47(4):189-194, 2001.
454
+
455
+ [83] S. Moore and M. Woollacott. The use of biofeedback devices to improve postural stability. Phys Ther Pract, 2(2):1-19, 1993.
456
+
457
+ [84] J. Nishida, S. Kasahara, and P. Lopes. Demonstrating preemptive reaction: accelerating human reaction using electrical muscle stimulation without compromising agency. In ACM SIGGRAPH 2019 Emerging Technologies, pp. 1-2. 2019.
458
+
459
+ [85] J. Nishida, K. Takahashi, and K. Suzuki. A wearable stimulation device for sharing and augmenting kinesthetic feedback. In Proceedings of the 6th Augmented Human International Conference, pp. 211-212. ACM, 2015.
460
+
461
+ [86] M. O'Connell, K. George, and D. Stock. Postural sway and balance testing: a comparison of normal and anterior cruciate ligament deficient knees. Gait & posture, 8(2):136-142, 1998.
462
+
463
+ [87] O. Oullier, B. G. Bardy, T. A. Stoffregen, and R. J. Bootsma. Postural coordination in looking and tracking tasks. Human Movement Science, 21(2):147-167, 2002.
464
+
465
+ [88] M. Pfeiffer, T. Dünte, S. Schneegass, F. Alt, and M. Rohs. Cruise control for pedestrians: Controlling walking direction using electrical muscle stimulation. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 2505-2514. ACM, 2015.
466
+
467
+ [89] N. Pinsault and N. Vuillerme. Test-retest reliability of centre of foot pressure measures to assess postural control during unperturbed stance. Medical engineering & physics, 31(2):276-286, 2009.
468
+
469
+ [90] M. Pollind and R. Soangra. Development and validation of wearable inertial sensor system for postural sway analysis. Measurement, 165:108101, 2020.
470
+
471
+ [91] M. Popovic, A. Curt, T. Keller, and V. Dietz. Functional electrical stimulation for grasping and walking: indications and limitations. Spinal cord, 39(8):403-412, 2001.
472
+
473
+ [92] J. M. Prado, T. A. Stoffregen, and M. Duarte. Postural sway during dual tasks in young and elderly adults. Gerontology, 53(5):274-281, 2007.
474
+
475
+ [93] R. Real and J. M. Vargas. The probabilistic basis of jaccard's index of similarity. Systematic biology, 45(3):380-385, 1996.
476
+
477
+ [94] A. Reeve and A. Dilley. Effects of posture on the thickness of transversus abdominis in pain-free subjects. Manual therapy, 14(6):679-684, 2009.
478
+
479
+ [95] J. C. Reneker, R. Babl, W. C. Pannell, F. Adah, M. M. Flowers, K. Curbow-Wilcox, and S. Lirette. Sensorimotor training for injury prevention in collegiate soccer players: an experimental study. Physical therapy in sport, 40:184-192, 2019.
480
+
481
+ [96] B. Riebold, H. Nahrstaedt, C. Schultheiss, R. O. Seidl, and T. Schauer. Multisensor classification system for triggering fes in order to support voluntary swallowing. European journal of translational myology, 26(4), 2016.
482
+
483
+ [97] B. L. Riemann, K. M. Guskiewicz, and E. W. Shields. Relationship between clinical and forceplate measures of postural stability. Journal of sport rehabilitation, 8(2):71-82, 1999.
484
+
485
+ [98] D. I. Rubin. Epidemiology and risk factors for spine pain. Neurologic clinics, 25(2):353-371, 2007.
486
+
487
+ [99] S. Savoie, S. Tanguay, H. Centomo, G. Beauchamp, M. Anidjar, and F. Prince. Postural control during laparoscopic surgical tasks. The American journal of surgery, 193(4):498-501, 2007.
488
+
489
+ [100] S. Schneegass, A. Schmidt, and M. Pfeiffer. Creating user interfaces with electrical muscle stimulation. interactions, 24(1):74-77, 2016.
490
+
491
+ [101] T. A. Stoffregen, R. J. Pagulayan, B. G. Bardy, and L. J. Hettinger. Modulating postural control to facilitate visual performance. Human Movement Science, 19(2):203-220, 2000.
492
+
493
+ [102] P. Strojnik, A. Kralj, and I. Ursic. Programmed six-channel electrical stimulator for complex stimulation of leg muscles during walking. IEEE Transactions on Biomedical Engineering, (2):112-116, 1979.
494
+
495
+ [103] A. Takahashi, J. Brooks, H. Kajimoto, and P. Lopes. Increasing electrical muscle stimulation's dexterity by means of back of the hand actuation. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2021.
496
+
497
+ [104] E. Tamaki, T. Miyaki, and J. Rekimoto. Possessedhand: techniques for controlling human hands using electrical muscles stimuli. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 543-552. ACM, 2011.
498
+
499
+ [105] N. Teasdale, C. Bard, J. LaRue, and M. Fleury. On the cognitive penetrability of posture control. Experimental aging research, 19(1):1-13, 1993.
500
+
501
+ [106] E. B. Titianova and I. M. Tarkka. Asymmetry in walking performance and postural sway in patients with chronic unilateral cerebral infarction. Journal of rehabilitation research and development, 32:236-236, 1995.
502
+
503
+ [107] J. S. Torg, W. Conrad, and V. Kalen. Clinical diagnosis of anterior cruciate ligament instability in the athlete. The American journal of sports medicine, 4(2):84-93, 1976.
504
+
505
+ [108] S. Virk and K. M. V. McConville. Virtual reality applications in improving postural control and minimizing falls. In 2006 international conference of the IEEE engineering in medicine and biology society, pp. 2694-2697. IEEE, 2006.
506
+
507
+ [109] D. A. Winter, A. E. Patla, and J. S. Frank. Assessment of balance control in humans. Med prog technol, 16(1-2):31-51, 1990.
508
+
509
+ [110] J. O. Wobbrock, L. Findlater, D. Gergle, and J. J. Higgins. The aligned rank transform for nonparametric factorial analyses using only anova procedures. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 143-146, 2011.
510
+
511
+ [111] W. Young, S. Ferguson, S. Brault, and C. Craig. Assessing and training standing balance in older adults: a novel approach using the 'nintendo wii' balance board. Gait & posture, 33(2):303-305, 2011.
512
+
513
+ [112] A. Zampogna, I. Mileti, E. Palermo, C. Celletti, M. Paoloni, A. Manoni, I. Mazzetta, G. Dalla Costa, C. Pérez-López, F. Camerota, et al. Fifteen years of wireless sensors for balance assessment in neurological disorders. Sensors, 20(11):3247, 2020.
514
+
515
+ [113] R. Zhang, C. Vogler, and D. Metaxas. Human gait recognition at sagittal plane. Image and vision computing, 25(3):321-330, 2007.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/E-PcUeaDbzv/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,353 @@
1
+ § AUTOMATIC ASYMMETRIC WEIGHT DISTRIBUTION DETECTION AND CORRECTION UTILIZING ELECTRICAL MUSCLE STIMULATION
2
+
3
+ Kattoju Ravi Kiran* Eugene Taranta† Ryan Ghamandi‡ Joseph J. LaViola Jr.§
4
+
5
+ Interactive Systems and User Experience Lab
6
+
7
+ University of Central Florida, USA
8
+
9
+ <graphics>
10
+
11
+ Figure 1: Impaired balance can have long-term health ramifications. Presented here are images of asymmetric weight distribution (AWD) due to prolonged standing and of restored balance using electrical muscle stimulation (EMS): (A) AWD right, (C) AWD left, (B) & (D) EMS feedback-based stabilization and restoration of balanced posture. The red arrows indicate the direction of progressive AWD, and the green arrows indicate counter-weight-shift balance stabilization due to EMS feedback correction applied to the tibialis muscle.
12
+
13
+ § ABSTRACT
14
+
15
+ Postural control is a constant re-establishment process for the maintenance of balance and stability. Asymmetric weight distribution (AWD), characterized by uneven leg loading, leads to increased instability, injury, and progressive deterioration of posture and gait. Postural self-correction is effected automatically by the human body in response to visual, vestibular, and proprioceptive sensory information. However, simultaneous cognitive loads can increase the demand for extra resources and require balance monitoring and correction techniques. We address these issues with a novel physiological feedback system that utilizes load sensors for AWD detection and electrical muscle stimulation (EMS) for automatic correction and restoration of balance by effecting a counter-weight shift. In a user study involving 36 participants, we compare our automatic approach against two alternative feedback systems (audio and vibro-tactile). We find that our automatic approach delivered faster corrections, outperformed the alternative feedback mechanisms, and was perceived as interesting, comfortable, and a potential commercial product.
16
+
17
+ Index Terms: Human-centered computing; Human-computer interaction (HCI); Wearable computing; Preventive healthcare; Posture correction; Asymmetric weight distribution; Electrical muscle stimulation
18
+
19
+ * e-mail: Kattoju.ravikiran@knights.ucf.edu
20
+
21
+ † e-mail: etaranta@gmail.com
22
+
23
+ ‡ e-mail: ryanghamandi1@gmail.com
24
+
25
+ § e-mail: jjl@cs.ucf.edu
26
+
27
+ § 1 INTRODUCTION
28
+
29
+ The maintenance of stable posture is important, as two-thirds of our body mass and delicate organs are supported by our legs, which form a narrow base of support. Asymmetric weight distribution (AWD), characterized by postural sway and impaired standing balance, is known to be responsible for multiple health conditions resulting in reduced functional ability [109]. Numerous posture-related health issues such as lower back pain [81], anterior cruciate ligament ruptures [46, 78, 86], and knee and ankle injuries [32, 64] are associated with an increase in postural sway and AWD. Postural control is a constant re-establishment of balance and is integral to the safe execution of most movements in our daily life. Posture adjustment relies primarily on the integration of different sensory feedback from the visual, vestibular, and proprioceptive control systems. Subconscious proprioception, in the form of awareness from muscle receptors and joints, also plays an important role in the control of posture and balance. However, the effectiveness of our body's postural control system decreases with cognitive demand, age, and injuries, which imposes a critical demand on the postural control system, especially while engaged in additional cognitive tasks during standing activities. Although conscious proprioception plays a crucial role in gross muscular and full-body posture adjustments, poor postural habits and impaired proprioception may lead to increased postural sway, AWD, and even loss of balance [3]. AWD may lead to increasing instability, subsequent injury, and progressive deterioration of posture and gait [107]. Investigation of AWD has provided valuable information in an array of situations such as fall detection and prediction in the elderly [48], evaluation of balance-related disabilities (Parkinson's disease, stroke, and concussions), and lower-body post-surgery rehabilitation [1, 2, 30, 79, 99].
30
+
31
+ Nearly $90 billion is spent annually in the USA on treating repetitive strain injuries (RSI) and lower-body injuries arising from poor workplace postures and prolonged standing [20, 22]. Prolonged standing causes muscle dysfunction or dystrophy of the leg muscles and often leads to unequal load distribution on the hips, knees, ankles, and feet, which are responsible for stabilizing the torso in an upright position; it is also directly associated with lower back pain [94]. Lower-body injuries are one of the noted root causes of disability in the world and affect approximately 80% of the world population at some point in their lives [54, 98]. As existing intervention technology attempts only postural sway detection and requires the participants' attention and effort to self-correct imbalance, there is a need for automatic wearable intervention technology capable of AWD detection and subsequent correction, facilitating proper posture maintenance during tasks involving prolonged standing such as work, recreational, and gaming activities.
32
+
33
+ As EMS has been shown to induce involuntary muscular contractions for generating physiological responses [21, 91, 102], we integrated EMS with an AWD detection system to automatically detect and correct habitual AWD posture and restore balanced posture through involuntary contractions of the leg muscles. Our work aims to explore and provide insights into the differences between our automatic posture correction approach and self-correction with traditional feedback techniques. We evaluated the performance of our automatic approach across two applications with varying levels of engagement and posture awareness in a novel between-subjects study. Performance was measured by the correction response times to the EMS feedback, and qualitative data in the form of user perception rankings for different usability parameters were recorded and analyzed. In comparison to previous research, the main contributions of this work include:
34
+
35
+ 1. The development of a novel intervention prototype that autonomously detects and corrects AWD posture through a physiological feedback loop utilizing EMS.
36
+
37
+ 2. A user study for quantitative and qualitative evaluation of the performance and usability of our automatic AWD detection and correction utilizing EMS feedback against two traditional feedback techniques (audio and vibro-tactile), under two different conditions of posture awareness and engagement, for breaking the habit of AWD and for training and developing good postural habits.
38
+
39
+ § 2 RELATED WORK
40
+
41
+ Owing to increasing awareness of workplace injuries, health, and wellness, there has been renewed interest in the relationship between postural control and cognitive load [3]. Self-correction of posture is effected automatically by the human body to a certain extent in response to sensory information such as visual, vestibular, and proprioceptive input. However, additional loads due to simultaneous cognitive tasks demand extra resources and necessitate balance monitoring and correction techniques [4, 55, 62, 80, 105]. Previous research on AWD monitoring and detection can be classified into two main categories: balance and stability monitoring, and asymmetric weight distribution detection with real-time feedback solutions.
42
+
43
+ § 2.1 BALANCE AND STABILITY MONITORING
44
+
45
+ Balance and stability monitoring has primarily been an area of research for detecting neurological disorders, gait imbalance, lower-body injury, and post-surgery rehabilitation. Traditionally, the measurement of impaired balance and AWD employed highly specialized equipment such as force plates [6, 43], electrogoniometers [87], video motion analysis [23], electromyography [82], and magnetic tracking systems [101]. Balance and stability monitoring techniques using force plates often measured the center of pressure/gravity and balance ratios [35], while inertial measurement units (IMUs) [7, 11, 36, 90] and video analysis techniques [45, 47, 50, 113] relied on computed angular changes. However, expensive equipment developed for medical rehabilitation and clinical research was found to be cumbersome due to the attachment of markers and sensors to the skin or clothing, making easy, non-invasive data collection on AWD difficult. As a cost-effective alternative, standing balance has also been evaluated using a Wii Balance Board (WBB) in different clinical settings [16, 17, 34, 111]. The WBB was utilized in clinical trials with brain injury patients to determine the effectiveness of balance rehabilitation [34] and to predict fall risks in older adults [111]. Additionally, other researchers investigated postural sway and standing balance during quiet standing among young adults [5, 66, 89], the elderly [8, 25, 26], athletes [61, 65], and brain injury and Parkinson's disease (PD) patients [31, 97, 106].
46
+
47
+ Research on postural sway has investigated steadiness in different stances [35] and different postural control tasks [17], the influence of standing duration on sway [63], and impairments leading to disequilibrium, along with compensatory strategies, during quiet standing in patients [9]. Postural sway has also been investigated to determine the effect of dual tasks on standing balance [92]. Researchers developed postural control strategies for the clinical rehabilitation of patients suffering from PD and for diagnosing sports-based impairments by investigating the effects of altered postural control and balance on the ankle and hip in PD patients during quiet standing [7]. Further, researchers investigated anticipatory postural adjustments in patients with PD [11], balance irregularities in athletes at risk of AWD [90], the incidence of head impacts due to imbalance [36], and sway assessment for detecting balance impairments in athletic populations. Finally, postural sway and balance impairment studies have been conducted on postural control in concussion patients [24], neurological disorders [112], and injury prevention [95]. However, the above-mentioned studies focus only on the assessment and monitoring of balance and postural sway for diagnosing balance impairments and developing rehabilitation protocols for balance training; they do not provide any posture correction feedback to participants. To address this, our research focuses on both detecting AWD conditions and subsequently providing real-time automatic correction feedback to restore balance using EMS.
48
+
49
+ § 2.2 ASYMMETRIC WEIGHT DISTRIBUTION DETECTION WITH FEEDBACK
50
+
51
+ Maintaining balance and stability is a complex activity accomplished by a synergy between the brain and different sensory information from the vestibular, somatosensory, and visual systems. Postural instability or abnormal postural sway coincides with asymmetric weight distribution, or weight-bearing asymmetry, when feedback from sensory systems is inaccurate. However, this loss or absence of sensory information can be compensated for by providing additional external sensory feedback to the brain to effect posture correction and maintain balance [44, 83]. Due to advancements in sensor technology and smarter algorithms, the past decade has seen increased interest in the design and development of biofeedback-based postural control devices for maintaining balance.
52
+
53
+ Audio feedback systems were developed for improving balance in patients suffering from bilateral vestibular loss [27], for comparing the effect of visual senses and environmental conditions on postural control [28], and for improving balance compared to absent or unreliable sensory feedback [14]. Alternatively, visual feedback was utilized to evaluate the effectiveness of human balance improvement in quiet standing tasks [39], to explore the effect of interactive balance training on postural stability in daily physical activities [37], and to develop balance rehabilitation strategies based on ankle movement to compensate for impaired joint proprioception in patients [38]. Augmented sensory feedback through visual and auditory channels was further explored to investigate their relative effectiveness in improving postural control [41], concluding that audio feedback was more effective for motor learning and maintaining balance. Virtual reality integrated with visual feedback has also been employed to develop balance training rehabilitation protocols and biofeedback for minimizing fall risks [108], to investigate the influence of moving immersive visual environments on postural control [56], and to improve standing balance in patients suffering from hemiplegia [10] and PD [33]. All of the above-mentioned balance and stability detection techniques alert the user through traditional audio, visual, or vibro-tactile feedback and rely entirely on the participants' ability to process the feedback and their willingness to self-correct their AWD. Although these AWD detection techniques enabled minimization of postural sway and restoration of balance using different types of feedback, they still required the user's willingness to self-correct posture when AWD or postural sway was detected, and no posture correction feedback response times or user perception parameters have been reported. The traditional feedback types are also known to place a cognitive load on the user by relying solely on the user's intent and desire to self-correct their posture based on the received feedback, especially when engaged in a cognitively demanding task [44, 83].
54
+
55
+ Additionally, clinical research on gait rehabilitation for stroke patients has applied electrical stimulation to the gluteus medius and tibialis anterior [58], and to the hip abductor and ankle dorsiflexor [15]. However, these systems are not automatic; they use a manual trigger mechanism to provide correction feedback that improves spatio-temporal parameters during dynamic activities like walking by controlling pronation and foot placement. Further, the tactile component of EMS was utilized on the thigh to provide notifications for improving walking gait in post-stroke survivors [57]. That system utilized force-sensitive resistors capable of detecting heel and foot strikes to identify improper gait during walking, and it used EMS to provide only vibro-tactile sensory stimulation without invoking any involuntary muscular activity that could alter the patient's gait; it also relied on the user making a conscious effort to correct their pronation and foot strikes. Although these techniques utilized EMS for gait rehabilitation in clinical settings to address pronation, foot placement, and foot striking in dynamic activities such as walking in stroke patients, AWD detection and subsequent automatic balance stabilization during prolonged standing in everyday activities has not been fully explored. This presents a research gap in the design and development of autonomous AWD detection and correction systems for preventing fall risks and gait imbalance and for proper rehabilitation after injury and surgery. Our prototype addresses this gap by employing EMS to automatically generate a physiological counter-weight-shift response through involuntary contractions of the tibialis muscle, restoring balanced posture when AWD conditions are detected during two prolonged standing conditions (quiet standing and mobile gaming) under different levels of posture awareness, thereby reducing the additional cognitive load required to self-correct posture.
56
+
57
+ § 2.3 ELECTRICAL MUSCLE STIMULATION (EMS)
58
+
59
+ Primarily, EMS has been utilized in pain management therapy to deliver electrical impulses to the muscles, nerves, and joints in a non-invasive manner via surface electrodes placed on the skin. Besides being used to alleviate chronic muscle strains and spasms, EMS has been employed in post-surgery rehabilitation to regain normal function [102] and in post-injury recovery to rebuild muscle strength [12, 21]. EMS has also been applied in clinical research to generate involuntary muscle contractions for restoring normal function to muscles impaired by injury, surgery, or disuse, to restore functional actions such as hand grasping in hemiplegic patients [21], to generate reflex actions for swallowing disorders [96], and to enable control of neuro-prosthetic implants [91].
60
+
61
+ § 2.4 EMS IN HUMAN COMPUTER INTERACTION
62
+
63
+ The capability of EMS to deliver haptic and somatosensory feedback has led to newfound interest in the human-computer interaction (HCI) domain for developing immersive training and gaming in virtual, augmented, and mixed reality applications [59, 67-70, 100]. Due to its adaptability, EMS has enabled new interactive approaches for dynamic activity training, more immersive experiences through somatosensory feedback, and spatial interfaces for user interaction. Dynamic activity training using EMS has been explored to help users acquire new motor skills such as playing a musical instrument [104], to learn the affordances of different objects [74], and to develop fast reflexes for preemptive actions [51, 52, 84]. Additionally, with its ability to generate physiological responses through involuntary muscular contractions, EMS has been utilized in force feedback applications to emulate impact [29, 71], increase dexterity by flexing individual fingers [103], and apply physical forces to gaming devices [77], objects [76], and walls and barriers in virtual environments [72, 75]. EMS has also permitted researchers to increase immersion in virtual reality applications by sharing kinesthetic experiences from tremors in patients with Parkinson's disease [85], arousing fear and pain in In-pulse [60], and transmitting emotions between individuals in Emotion Actuator [42]. Further, integration of EMS with input/output devices has enabled physiological feedback loops in Pose-IO for proprioceptive interaction [73], induced navigation [88], biometric user authentication [13], influenced sketching [77], a running assistant [19], discrete notification systems [40], and involuntary motor learning [18].
64
+
65
+ The current literature suggests that traditional feedback-based posture alert systems relied entirely on users' intent and willingness to correct their improper posture, and that EMS feedback-based posture correction has not been fully explored. Although the interactive and adaptive features of the EMS-based technologies above have validated EMS's ability to deliver latent, distinct, and distinguishable feedback for immersive experiences, dynamic activity training, and input/output interfaces, our work investigates the feasibility of automatic posture correction that restores balance and stabilization through a counter-weight shift strategy utilizing EMS.
66
+
67
+ § 3 AUTOMATIC DETECTION AND CORRECTION OF AWD
68
+
69
+ For automatic detection and correction of AWD, we developed an intervention prototype based on a physiological feedback loop that relied on load sensors and EMS (illustrated in Figure 2). Our prototype employed a wireless Wii Balance Board (WBB) to measure changes in weight distribution across the two legs, using the balance ratio between the weights displaced by each leg, and the openEMSstim package [69] to present the EMS correction feedback. A C#-based user interface using a Wii-mote library was developed to integrate the WBB with the EMS hardware and complete the physiological feedback loop. As AWD is mainly characterized by progressive and/or unusual leaning to either side [49], our system was designed to detect these changes in weight distribution across the two legs through the shift in balance ratio that represents AWD conditions.
70
+
71
+ <graphics>
72
+
73
+ Figure 2: Physiological feedback loop: Automatic asymmetric weight distribution detection and correction system. Asymmetric weight distribution posture (top) illustrates leaning to either side and the auto-corrected posture (bottom) illustrates the restored balanced posture through counter weight shift using EMS.
74
+
75
+ § 3.1 TIME AND BALANCE THRESHOLDS
76
+
77
+ Asymmetrical leg loading can be detected from the shift in balance ratio calculated from the weight displacement information obtained from the load sensors in the WBB. Our proposed system detected AWD when the user's balance ratio approached and crossed preset balance ratio and time thresholds. To improve robustness and tune the system for optimal performance, we collected ecologically valid balance ratio data from 10 participants performing 10 typical actions one performs, consciously or unconsciously, when standing idly (illustrated in Figure 3). These 10 unique actions were identified from general movement observations of employees taking breaks from standing, and were interleaved with moderate and extreme leaning actions to ensure AWD conditions were embedded in each session. The balance ratio patterns of the 10 actions are shown in Figure 4. A grid search was then employed to find the balance ratio and time thresholds that optimized AWD detection accuracy. Since our primary concern was the impact of false positives on user perception and the prevention of unwarranted correction feedback, we selected thresholds that minimized false positives first, maximized true positives second, and maximized the per-frame Jaccard index of similarity [93] with the manually marked per-frame ground truth third. With valid data from 10 participants, using a leave-one-subject-out protocol, we found that at a time threshold of 2.9 seconds and a balance ratio threshold of 1.25, our system achieved a high accuracy of 96% for true-positive AWD detection, 0.1% for false-positive AWD detection, and a false rate of 0.3%. The balance ratio of 1.25 translates to a left-to-right or right-to-left AWD balance ratio of 55.5 : 44.5.
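+
+ To make this tuning procedure concrete, the sketch below shows one way such a lexicographic grid search could be implemented. It is a minimal illustration rather than the authors' code: the sampling rate, search ranges, step sizes, and the `sessions` container standing in for the per-subject recordings are all our assumptions.
+
+ ```python
+ import itertools
+ import numpy as np
+
+ FPS = 60  # assumed sampling rate of the balance-board stream
+
+ def predict_frames(ratios, ratio_thresh, time_thresh):
+     """Per-frame AWD prediction: a frame is positive once the balance
+     ratio has stayed above ratio_thresh for time_thresh seconds."""
+     pred = np.zeros(len(ratios), dtype=bool)
+     run = 0
+     for i, r in enumerate(ratios):
+         run = run + 1 if r > ratio_thresh else 0
+         pred[i] = run >= int(time_thresh * FPS)
+     return pred
+
+ def jaccard(pred, truth):
+     union = np.logical_or(pred, truth).sum()
+     return np.logical_and(pred, truth).sum() / union if union else 1.0
+
+ def grid_search(sessions):
+     """sessions: list of (ratios, truth) pairs, one per held-out subject.
+     Lexicographic objective: fewest false positives first, then most true
+     positives, then highest mean per-frame Jaccard index."""
+     best, best_key = None, None
+     for rt, tt in itertools.product(np.arange(1.05, 1.55, 0.05),
+                                     np.arange(0.5, 5.0, 0.1)):
+         fp = tp = 0
+         jac = []
+         for ratios, truth in sessions:
+             pred = predict_frames(ratios, rt, tt)
+             fp += int(np.sum(pred & ~truth))
+             tp += int(np.sum(pred & truth))
+             jac.append(jaccard(pred, truth))
+         key = (-fp, tp, float(np.mean(jac)))  # tuple comparison = lexicographic
+         if best_key is None or key > best_key:
+             best, best_key = (rt, tt), key
+     return best  # the paper's tuning arrived at (1.25, 2.9)
+ ```
+
+ Note that the reported ratio threshold is consistent with the stated split: a 55.5 : 44.5 weight distribution gives 55.5 / 44.5 ≈ 1.25.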
78
+
79
+ The preset time and balance ratio thresholds obtained through our tuning process allowed the AWD detection system to overcome measurement errors, mitigate false positives, and ensure that typical movements, such as the actions illustrated in Figure 3, did not lead to false-positive AWD detection or activate unwarranted correction feedback. When the user's balance ratio approached and crossed the preset balance ratio threshold of 1.25, a countdown timer set to the preset time threshold of 2.9 seconds was initiated, and correction feedback was provided after the time threshold had elapsed. The purpose of the timer was to ensure that false positives due to participant behavior did not trigger a correction feedback response.
80
+
81
+ <graphics>
82
+
83
+ Figure 3: Some examples of typical actions performed during standing activities based on movement observations of employees taking breaks after standing. (A) Lean slight left, (B) Lean slight right, (C) Balanced, (D) Calf raise and reset, (E) Lift left leg and reset, (F) Scratch leg and reset, (G) Sway and reset, (H) Lean extreme right, (I) Lift right leg and reset, (J) Lean extreme left.
84
+
85
+ § 3.2 CORRECTION FEEDBACK
86
+
87
+ The Wii Balance Board contains load sensors at each corner (top left, bottom left, top right, and bottom right), allowing measurement of the weight distributed across each leg and calculation of the weight balance ratio for AWD detection. When AWD is detected, automatic correction feedback is presented to the user by applying an electrical stimulus to the tibialis muscle of the leg opposite the direction of the AWD lean, generating a counter-weight shift force and thereby a physiological response that stabilizes the user back to a 50:50 balanced weight distribution. A pair of electrodes on each leg (illustrated in Figure 5) is used to contract the tibialis muscle, which causes the foot to roll outward, producing the counter-weight shift. This generated counter-weight shift redistributes the weight more evenly across the two legs, stabilizing the user back to the balanced 50:50 weight distribution position. Calibration of the WBB and of the EMS intensity plays a crucial role in the effectiveness of the system. The calibration process includes correcting offset values of the load sensors in the WBB prior to the start of the study session; the user's balance ratio in the balanced position and in emulated AWD leaning positions relative to the balanced position is monitored to ensure the WBB is calibrated. For the EMS calibration, the EMS intensity is manually incremented to find an intensity that is optimal for generating involuntary muscular contraction while remaining comfortable and avoiding any discomfort or pain. This EMS intensity, which generates the force necessary for correcting AWD posture and restoring the balanced position, is recorded and used during the experiment. The transcutaneous electrical nerve stimulation (TENS) device can deliver intensities between 0 and 70 mA. A continuous square wave with a pulse width of 100 µs at a frequency of 75 Hz, at the recorded EMS intensity, is presented as EMS feedback to the participants. The EMS calibration procedure is described in detail in Section 4.5.
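+
+ The sketch below illustrates the two computations this paragraph describes: deriving the left/right balance ratio from the four corner load cells, and choosing which leg to stimulate. The stimulation parameters mirror the values given above; the function and field names are our own and hypothetical.
+
+ ```python
+ from dataclasses import dataclass
+
+ @dataclass
+ class StimulusConfig:
+     """EMS waveform used for correction feedback (Section 3.2 values);
+     intensity_ma is the per-user value found during calibration."""
+     pulse_width_us: float = 100.0  # continuous square wave, 100 us pulses
+     frequency_hz: float = 75.0     # 75 Hz
+     intensity_ma: float = 50.0     # calibrated per user, within the 0-70 mA range
+
+ def balance_ratio(tl, bl, tr, br):
+     """Balance ratio and lean side from the four WBB corner load sensors
+     (top-left, bottom-left, top-right, bottom-right weights)."""
+     left, right = tl + bl, tr + br
+     if min(left, right) <= 0:          # e.g. the user stepped off one side
+         return float("inf"), ("left" if left > right else "right")
+     return max(left, right) / min(left, right), ("left" if left > right else "right")
+
+ def leg_to_stimulate(lean_side):
+     """Contract the tibialis of the leg opposite the lean so the evoked
+     counter-weight shift pushes the user back toward 50:50."""
+     return "right" if lean_side == "left" else "left"
+ ```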
88
+
89
+ § 3.3 OPERATION
90
+
91
+ Our physiological feedback loop for detecting and correcting AWD relied on the changes in balance ratio along with the total weight distributed on each leg, allowing the system to detect AWD left/right conditions once the balance and time thresholds had been crossed. AWD occurs when a user unevenly distributes body weight across the two legs, placing additional stress on the ankle, knee, hip, and lower back. To detect these AWD conditions, our system utilized the balance and time thresholds determined in Section 3.1. Figure 6 illustrates the activation and deactivation of EMS correction feedback when an AWD left condition was detected and corrected for a participant during the study. Initially, under a balanced posture condition, the EMS on both legs remained deactivated. A timer with the preset time threshold of 2.9 seconds was activated when the user's balance ratio gradually increased and crossed the preset threshold of 1.25. Upon completion of the timer, if the balance ratio still remained above the threshold, the EMS was activated to apply a stimulus of 50 mA that invoked a muscular contraction of the right tibialis muscle (EMS Right Leg), generating a counter-weight shift and restoring balanced posture. The EMS was deactivated immediately after the balanced posture was restored; a correction response time of 1.2 seconds was recorded between activation and deactivation of the EMS Right Leg. The AWD right condition is detected and corrected similarly by activating and deactivating the EMS Left Leg.
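+
+ As a rough sketch, the detection/correction cycle described above could be structured as follows. The callbacks `read_ratio`, `ems_on`, and `ems_off` are hypothetical stand-ins for the WBB stream and the openEMSstim interface, and the stabilization band around 50:50 is our assumption (the paper states only that EMS is deactivated once balance is restored).
+
+ ```python
+ import time
+
+ RATIO_THRESH = 1.25   # balance ratio threshold from Section 3.1
+ TIME_THRESH = 2.9     # seconds
+ STABLE_BAND = 1.05    # ratio considered "restored to 50:50" (assumed)
+
+ def control_loop(read_ratio, ems_on, ems_off):
+     """read_ratio() -> (ratio, lean_side); ems_on/ems_off(leg) drive the stimulator."""
+     crossed_at, active_leg = None, None
+     while True:
+         ratio, lean = read_ratio()
+         if active_leg:                          # correction in progress
+             if ratio <= STABLE_BAND:            # balance restored: stop EMS
+                 ems_off(active_leg)
+                 active_leg, crossed_at = None, None
+         elif ratio > RATIO_THRESH:              # possible AWD: run the countdown
+             crossed_at = crossed_at or time.monotonic()
+             if time.monotonic() - crossed_at >= TIME_THRESH:
+                 active_leg = "right" if lean == "left" else "left"
+                 ems_on(active_leg)              # contract the opposite tibialis
+         else:                                   # ratio fell back: reset the timer
+             crossed_at = None
+         time.sleep(0.01)
+ ```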
92
+
93
+ <graphics>
94
+
95
+ Figure 4: Balance ratio patterns of the 10 actions performed by users (illustrated in Figure 3) during the tuning process to determine the balance and time thresholds for AWD detection. The lean actions representative of AWD exhibited higher balance ratios, sustained for longer durations, in comparison to the other actions.
96
+
97
+ <graphics>
98
+
99
+ Figure 5: EMS electrode placement on the tibialis muscle for effecting counter-weight shift.
100
+
101
+ § 4 METHODS
102
+
103
+ The goal of this study was to evaluate the overall effectiveness and user perception of our automatic AWD detection and correction feedback system using EMS compared to traditional audio and vibro-tactile feedback modalities. The audio and vibro-tactile feedback modalities required self-correction by the user based on audio and vibro-tactile notifications, respectively. We also identified two common everyday activities with varying levels of engagement and posture awareness, quiet standing (QS) and playing a mobile game (MG) (illustrated in Figure 7), to
104
+
105
+ <graphics>
106
+
107
+ Figure 6: Automatic detection and correction of AWD: Graph showing EMS activation and deactivation. When the user's balance ratio approached and crossed preset balance ratio and time thresholds, EMS was activated for AWD correction. EMS was deactivated when 50:50 balance was restored.
108
+
109
+ investigate the effect of cognitive demand on posture awareness, AWD occurrence, and type of correction feedback. Our objective was to determine whether our automatic AWD detection and correction system using EMS feedback would be a viable technique for correcting AWD, as opposed to the audio and vibro-tactile feedback types, while standing idly or being engaged in a cognitively demanding task.
110
+
111
+ § 4.1 SUBJECTS AND APPARATUS
112
+
113
+ We recruited 36 participants (29 male, 7 female) for the study, with 18 participants for each application (quiet standing and mobile game). All participants were aged 18 years and above, with a mean age of 24.67 years (S.D. = 3.98 years), mean weight of 71.1 kg (S.D. = 10.88 kg), and mean height of 167.3 cm (S.D. = 8.94 cm). All participants were able-bodied and had corrected 20/20 vision. A Wii Balance Board was utilized for monitoring the balance ratio along the medial-lateral axis. A Grove vibration motor with double-sided disposable adhesives was utilized for delivering the vibro-tactile feedback (illustrated in Figure 9 (a)). An off-the-shelf TENS unit (TN SM MF2) and the openEMSStim package [68] were utilized for generating the EMS feedback and for controlling the activation and modulation of the intensity of the electrical stimuli supplied to the muscles, respectively. A 14" Intel i7 laptop was utilized for the study user interface, and an iPhone SE (2nd generation) was employed for the mobile game application. Qualitative data from the pre-questionnaire survey on participants' prior exposure to balance alert devices and EMS, experience with posture problems, and AWD is presented in Table 1. Participants ranked their exposure and experience on a 7-point Likert scale, with 1 meaning never/no experience and 7 meaning frequently/very experienced.
114
+
115
+ Table 1: User rankings of posture awareness, balance alert devices, and EMS on a 7-point Likert scale. QS: Quiet standing, MG: Mobile game
116
+
117
+ | User Experience | Application | Mean | S.D. |
+ | --- | --- | --- | --- |
+ | Exposure to balance alert devices | QS | 1.44 | 0.70 |
+ |  | MG | 2.11 | 1.28 |
+ | Exposure to EMS | QS | 2.56 | 1.39 |
+ |  | MG | 1.94 | 1.25 |
+ | Prolonged standing | QS | 4.39 | 1.87 |
+ |  | MG | 4.11 | 1.67 |
+ | Experienced AWD | QS | 4.33 | 2.01 |
+ |  | MG | 3.67 | 2.08 |
146
+
147
+ <graphics>
148
+
149
+ Figure 7: Participants played PUBG mobile in the mobile game condition. Image shows the lobby area of the game prior to starting.
150
+
151
+ § 4.2 EXPERIMENTAL DESIGN
152
+
153
+ To investigate the performance and feasibility of our approach, a 2 × 3 mixed-design experiment with 36 participants was conducted. The within-subjects factor was the feedback type (audio, vibro-tactile, and EMS) and the between-subjects factor was the application type (quiet standing (QS) and mobile game (MG)). The performance of our automatic AWD correction using the EMS feedback was compared against self-correction in the audio and vibro-tactile feedback techniques. A quantitative evaluation of the average correction response times and a qualitative evaluation of the perceived usability of our system were conducted across the three feedback types and the two application types. In both applications, participants were required to stand on the WBB without shoes for three 15-minute sessions, one for each of the three modalities listed below. In the quiet standing application, participants were required to stand quietly (illustrated in Figure 8 (A), (B), & (C)), while in the mobile game application participants played a mobile version of "PlayerUnknown's Battlegrounds (PUBG)" (illustrated in Figure 8 (D), (E), & (F)). PUBG mobile is an engaging battle royale game (illustrated in Figure 7) and was selected for this study due to its high engagement level and popularity among people aged 15-35 years, who may be more prone to AWD due to prolonged standing hours at work or mobile gaming sessions. In both applications, participants were required to complete the following three modalities:
154
+
155
+ * Modality 1: Audio alert feedback and self-correction
156
+
157
+ * Modality 2: Vibro-tactile alert feedback and self-correction
158
+
159
+ * Modality 3: EMS feedback and automatic correction
160
+
161
+ In both applications, the order in which participants were introduced to the modalities was counterbalanced to minimize learning effects. The three modalities and the two applications were the independent variables; the dependent variables were the average correction response times and user perception parameters such as accuracy of correction feedback, task disruption, comfort, and posture awareness. Each study session lasted approximately 60-75 minutes, and participants were compensated $15 for their participation.
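+
+ As a small illustration of one possible counterbalancing scheme (the paper does not specify the exact assignment), a full permutation counterbalance over the three modalities yields six orders, each of which can be used by three of the 18 participants in an application:
+
+ ```python
+ from itertools import permutations
+
+ MODALITIES = ["audio", "vibro-tactile", "EMS"]
+ orders = list(permutations(MODALITIES))  # all 6 possible presentation orders
+
+ # 18 participants per application: each order is used by exactly 3 of them.
+ assignment = {pid: orders[pid % len(orders)] for pid in range(18)}
+ ```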
162
+
163
+ § 4.3 RESEARCH HYPOTHESES
164
+
165
+ Our study was designed to determine the effects of automatic versus self posture correction on user experience across the two applications and three feedback modalities. As such, we expected to find main or interaction effects of modality and application type on the average correction response times and on user perception of correction feedback accuracy, comfort, and disruption. With EMS being a semi-invasive feedback technology, we developed the four research hypotheses below to assess the usability of EMS for AWD correction against the traditional audio and vibro-tactile feedback types.
166
+
167
+ * H1: Average correction response times to EMS feedback will be the fastest among all three modalities.
168
+
169
+ * H2: Correction feedback accuracy in the EMS feedback modality will be greater in comparison to the other modalities.
170
+
171
+ * H3: EMS feedback modality will be equally comfortable as the alternative traditional feedback types and across both application types.
172
+
173
+ * H4: EMS feedback modality will be the most disruptive of the three modalities.
174
+
175
+ § 4.4 COVID-19 CONSIDERATIONS
176
+
177
+ Due to the ongoing COVID-19 pandemic, we wanted to ensure the safety of the participants and researchers. Following our institution's guidelines, all individuals were required to wear face masks at all times. Between each user, we sanitized all devices and surfaces that the participants and researchers would be in contact with. We also provided hand sanitizer, cleaning wipes, and latex gloves to reduce the risk of contracting the disease.
178
+
179
+ § 4.5 EXPERIMENTAL PROCEDURES
180
+
181
+ Before the start of the study session, participants were required to review the consent document and provide their consent for participating in the research. Participants then completed a pre-questionnaire survey on knowledge of and experience with balance-related intervention technology, AWD, and EMS. Upon completion of the pre-questionnaire survey, participants completed a validation study in which they performed the set of 10 typical actions on the WBB illustrated in Figure 3, to ensure the AWD detection system with the preset balance threshold (1.25) and time threshold (2.9 seconds) was able to detect the AWD conditions (lean slight right/left, lean extreme right/left) accurately and to mitigate the possibility of false-positive correction feedback. Next, participants were required to stand without shoes on the WBB for calibration. For the vibro-tactile alert modality, Grove vibration motors were placed on each leg with double-sided adhesives, as illustrated in Figure 9 (a). Adhesive EMS electrodes were placed on each leg along the tibialis muscles before the EMS feedback session for correcting AWD, as illustrated in Figure 9 (b). Before the EMS feedback session, participants stood on the WBB and were calibrated for an optimal EMS intensity that effected balance stabilization and corrected AWD posture. Each user's optimal EMS intensity level was manually calibrated by the study moderator only once. Participants were asked to emulate an AWD condition of leaning left or right, and the moderators incremented the EMS intensity on the opposite leg until an involuntary muscular contraction was felt by the user and generated a physiological counter-weight shift response that attempted to stabilize the balance ratio. This process was repeated for both AWD left and AWD right conditions to provide an optimal user experience in the EMS feedback session. As EMS is known to produce a haptic effect at low intensities, participants were asked to ignore the haptic effect to ensure the haptic component did not contribute to the automatic AWD correction process in any way. Additionally, during this calibration process, moderators asked participants to respond verbally to the following questions to ensure tibialis muscular contraction and user comfort: 1) if and when they initially felt a haptic sensation from the EMS, 2) if and when they felt the EMS intensity generating an involuntary contraction in the leg and/or experienced the counter-weight shift force restoring their balance, and 3) if and when they felt any pain or discomfort. For each user, the involuntary muscular contraction effecting AWD correction was visually verified by the moderator and verbally confirmed by the user. The optimal EMS intensity, which generated the counter-weight shift effect to correct AWD while remaining comfortable, was recorded for use in the EMS feedback session of the study.
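+
+ The calibration ramp described above could be sketched as follows; `apply_ems` and `user_reports` are hypothetical callbacks, and the starting intensity and step size are our assumptions.
+
+ ```python
+ def calibrate_intensity(apply_ems, user_reports,
+                         start_ma=5.0, step_ma=1.0, max_ma=70.0):
+     """Increment the intensity until the user reports an involuntary
+     contraction producing a counter-weight shift without pain."""
+     intensity = start_ma
+     while intensity <= max_ma:
+         apply_ems(intensity)
+         report = user_reports()  # e.g. "haptic", "contraction", or "pain"
+         if report == "pain":
+             return intensity - step_ma   # back off to the last comfortable level
+         if report == "contraction":
+             return intensity             # contraction achieved without discomfort
+         intensity += step_ma
+     raise RuntimeError("no effective, comfortable intensity found below 70 mA")
+ ```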
182
+
183
+ ${}^{1}$ https://www.pubg.com/
184
+
185
+ <graphics>
186
+
187
+ Figure 8: Evaluation of the effectiveness of our automatic approach across two different application types: Quiet Standing (A), (B), (C) and Mobile Game (D), (E), (F). Quiet Standing: (A) AWD right, (B) Balanced, (C) AWD left. Mobile Game: (D) AWD right, (E) Balanced, (F) AWD left.
188
+
189
+ <graphics>
190
+
191
+ Figure 9: Haptic motor unit and EMS electrode placement on the tibialis muscle. (a) Vibro-tactile feedback is delivered to the legs through the haptic motor units placed on each leg. (b) EMS feedback is delivered through EMS electrodes placed on the tibialis muscle of each leg.
192
+
193
+ The EMS intensity calibration steps were identical in the quiet standing and mobile game applications. In the quiet standing application, participants were asked to stand quietly, while in the mobile game application participants were required to play PUBG. In both applications, participants stood without shoes on the WBB while their balance ratio was monitored for AWD (illustrated in Figure 8). The study comprised three parts: audio, vibro-tactile, and EMS feedback. Each part lasted 15 minutes, and all participants were required to finish all three parts to complete the study. Participants were given a 5-minute seated break after each part, during which they were required to remain seated to rest their legs. Participants then completed a survey about their experience after each part.
194
+
195
+ § 4.5.1 AUDIO FEEDBACK AND SELF-CORRECTION:
196
+
197
+ Upon AWD detection based on the balance ratio from the WBB, an audio notification "Leaning left/right, please correct imbalance" was activated, and participants were required to self-correct their AWD posture and stabilize their balance until another audio notification, "Stabilized", was presented to them.
198
+
199
+ § 4.5.2 VIBRO-TACTILE FEEDBACK AND SELF-CORRECTION:
200
+
201
+ Upon AWD detection based on the balance ratio from the WBB, a vibro-tactile notification, in the form of vibration from the haptic motor, was activated on the opposite leg, indicating the direction in which the user was required to shift to self-correct their AWD and stabilize their balance ratio. When the participant's balance stabilized, the vibro-tactile notification stopped, indicating that a 50:50 balance had been achieved.
202
+
203
+ § 4.5.3 EMS FEEDBACK AND AUTO-CORRECTION:
204
+
205
+ Upon AWD detection, the EMS feedback was activated to apply the recorded EMS intensity to the tibialis muscle of the leg opposite the AWD lean. This invoked an involuntary muscle contraction producing a counter-weight shift force in the direction opposite the AWD lean, stabilizing the balance. Figure 1 (A) and (C) illustrate the AWD left- and right-leaning postures, respectively; Figure 1 (B) and (D) illustrate the automatically corrected posture after EMS has been applied. The EMS was deactivated when balance ratio stabilization had been achieved.
206
+
207
+ § 5 RESULTS
208
+
209
+ The average number of AWD conditions observed per participant was 12.38, 13.05, and 14.11 for the audio, vibro-tactile, and EMS feedback modalities, respectively, in the quiet standing application, and 12.22, 13.83, and 12.66, respectively, in the mobile game application. For the quiet standing application, the mean EMS intensity required to correct the AWD condition and stabilize balance posture was 50.55 mA (S.D. = 9.05 mA), while for the mobile game task the mean EMS intensity was 51.94 mA (S.D. = 8.25 mA). To analyze the performance of our approach, we used repeated-measures 2-factor ANOVA to determine the influence of modality and application type on each dependent variable; the consolidated results are presented in Tables 2-5. For the non-parametric user perception Likert scale data, we utilized the Aligned Rank Transform (ART) tool [110] and performed repeated-measures 2-factor ANOVA tests on the aligned ranks.
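+
+ For the parametric dependent variable (average correction response times), this analysis corresponds to a 2 (application, between-subjects) × 3 (modality, within-subjects) mixed ANOVA with Bonferroni-corrected post-hoc comparisons. The sketch below shows one way such an analysis could be reproduced in Python with the pingouin package; the long-format column names and file name are our assumptions, and the authors' actual analysis tooling is not stated (the Likert data would additionally pass through the Aligned Rank Transform first).
+
+ ```python
+ import pandas as pd
+ import pingouin as pg
+
+ # Long format: one row per participant x modality,
+ # with columns participant, application, modality, acrt.
+ df = pd.read_csv("acrt.csv")
+
+ # 2 (application, between-subjects) x 3 (modality, within-subjects) mixed ANOVA.
+ aov = pg.mixed_anova(data=df, dv="acrt", within="modality",
+                      subject="participant", between="application")
+
+ # Post-hoc pairwise comparisons across modalities, Bonferroni-corrected.
+ posthoc = pg.pairwise_tests(data=df, dv="acrt", within="modality",
+                             subject="participant", padjust="bonf")
+ print(aov, posthoc, sep="\n")
+ ```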
210
+
211
+ § 5.1 AVERAGE CORRECTION RESPONSE TIMES
212
+
213
+ For H1, the main effect of modality type yielded $F(2, 68) = 125.16$, $p < 0.001$, indicating a significant difference between the Audio ($M = 2.58$, $S.D. = 0.63$), Vibro-tactile ($M = 1.8$, $S.D. = 0.45$), and EMS modalities ($M = 1.32$, $S.D. = 0.29$), as illustrated in Figure 10 (a). A post-hoc pairwise comparison with Bonferroni correction conducted on the average correction response times across the three modalities showed that the EMS feedback modality was significantly faster than the audio modality ($t_{34} = -1.262$, $p < 0.001$) and the vibro-tactile feedback modality ($t_{34} = -0.492$, $p < 0.001$). The main effect of application type yielded $F(1, 34) = 2.744$, $p > 0.05$, indicating that the effect of application type was not significant between quiet standing ($M = 1.8$, $S.D. = 0.6$) and mobile game ($M = 2$, $S.D. = 0.79$), as illustrated in Figure 10 (b). The interaction effect was significant, $F(2, 68) = 5.803$, $p < 0.05$. Significant differences were found in system performance with regard to average correction response times between feedback modalities, with EMS feedback delivering the fastest correction. As a result, we were able to accept H1.
214
+
215
+ Table 2: 2-Factor ANOVA: Average Correction response times (ACRT). M: Modality, A: Application.
216
+
217
+ | Source | ACRT | p |
+ | --- | --- | --- |
+ | M | $F(2, 68) = 125.16$ | $< 0.001$* |
+ | A | $F(1, 34) = 2.744$ | 0.107 |
+ | M × A | $F(2, 68) = 5.803$ | 0.016* |
+
+ Note: * indicates a significant difference, $p < 0.05$.
233
+
234
+ <graphics>
235
+
236
+ Figure 10: Average correction response times (ACRT) across (a) Modality and (b) Application. Error bars: 95% CI.
237
+
238
+ § 5.2 USER PERCEPTION OF CORRECTION FEEDBACK ACCURACY
239
+
240
+ For H2, the main effect of modality type yielded $F(2, 68) = 4.113$, $p < 0.05$, indicating a significant difference between the Audio ($M = 5.83$, $S.D. = 1.03$), Vibro-tactile ($M = 6.44$, $S.D. = 0.69$), and EMS modalities ($M = 6.67$, $S.D. = 0.53$), as illustrated in Figure 11 (a). A post-hoc pairwise comparison with Bonferroni correction conducted on participants' rankings of correction feedback accuracy across the three modalities showed significant differences between the audio and vibro-tactile ($t_{34} = -0.611$, $p < 0.001$) and the audio and EMS feedback types ($t_{34} = -0.833$, $p < 0.001$), but no evidence of significant differences between the vibro-tactile and EMS feedback. Participants perceived EMS feedback to be more accurate than audio, but not than vibro-tactile feedback, and hence we were not able to accept H2. The main effect of application type yielded $F(1, 34) = 0.052$, $p > 0.05$, indicating that the effect of application type was not significant between quiet standing ($M = 6.3$, $S.D. = 0.82$) and mobile game ($M = 6.33$, $S.D. = 0.81$), as illustrated in Figure 11 (b). The interaction effect was not significant, $F(2, 68) = 2.988$, $p > 0.05$.
241
+
242
+ § 5.3 USER PERCEPTION OF COMFORT
243
+
244
+ For H3, the main effect of modality type yielded $F(2, 68) = 1.376$, $p > 0.05$, indicating no significant difference between the Audio ($M = 6.3$, $S.D. = 0.98$), Vibro-tactile ($M = 6.36$, $S.D. = 0.96$), and EMS modalities ($M = 5.91$, $S.D. = 1.23$), as illustrated in Figure 12 (a). The main effect of application type yielded $F(1, 34) = 1.364$, $p > 0.05$, indicating that the effect of application type was not significant between quiet standing ($M = 6.43$, $S.D. = 1.02$) and mobile game ($M = 6$, $S.D. = 1.08$), as illustrated in Figure 12 (b). The interaction effect was not significant, $F(2, 68) = 2.027$, $p > 0.05$. As no significant differences were found in the main effects of modality or application type, neither modality nor application influenced user comfort. As a result, we accept H3.
245
+
246
+ Table 3: 2-Factor ANOVA: User Perception-Correction feedback accuracy (CFA). M: Modality, A: Application.
247
+
248
+ | Source | CFA | p |
+ | --- | --- | --- |
+ | M | $F(2, 68) = 4.113$ | 0.021* |
+ | A | $F(1, 34) = 0.052$ | 0.82 |
+ | M × A | $F(2, 68) = 2.988$ | 0.057 |
+
+ Note: * indicates a significant difference, $p < 0.05$.
264
+
265
+ <graphics>
266
+
267
+ Figure 11: User perception of correction feedback accuracy (CFA) across (a) Modality and (b) Application. Error bars: 95% CI.
268
+
269
+ § 5.4 USER PERCEPTION OF TASK DISRUPTION
270
+
271
+ For H4, the main effect of modality type yielded $F(2, 68) = 0.036$, $p > 0.05$, indicating no significant difference between the Audio ($M = 2$, $S.D. = 1.37$), Vibro-tactile ($M = 2.11$, $S.D. = 1.30$), and EMS modalities ($M = 2.28$, $S.D. = 1.65$), as illustrated in Figure 13 (a). The main effect of application type yielded $F(1, 34) = 0.280$, $p > 0.05$, indicating that the effect of application type was not significant between quiet standing ($M = 1.7$, $S.D. = 1.05$) and mobile game ($M = 2.51$, $S.D. = 1.67$), as illustrated in Figure 13 (b). The interaction effect was not significant, $F(2, 68) = 1.427$, $p > 0.05$. As no significant differences were found in the main effects of modality or application type, neither modality nor application influenced task disruption. As a result, we reject H4.
272
+
273
+ § 5.5 USER PERCEPTION AND PREFERENCE
274
+
275
+ Mean rankings for user perception of correction feedback accuracy, posture awareness, comfort, and task disruption are shown in Figure 14. Participants ranked their posture awareness on a 7-point scale, where 1 means not at all aware and 7 means completely aware. Participants' rankings indicated higher posture awareness ($M = 5.46$, $S.D. = 1.61$) in the quiet standing task, while posture awareness was significantly reduced in the mobile game condition ($M = 2.33$, $S.D. = 1.27$). Additionally, when participants were asked about their preferred modality for correcting AWD, 55.56% of the study population reported EMS feedback as their preferred correction feedback technique, while 36.11% preferred the vibro-tactile feedback and 8.33% preferred the audio feedback. However, 29 out of 36 participants reported that they would be willing to purchase EMS feedback for AWD posture correction if it were a commercially available product. Participants also rated the degree to which they shared responsibility with the EMS auto-correction on a 7-point scale, where 1 means not at all and 7 means completely. The mean shared responsibility exhibited by participants was 2.00 (S.D. = 1.08) in the quiet standing task and 1.72 (S.D. = 0.75) in the mobile game condition. Participants ranked EMS feedback as a highly interesting concept for automatic AWD correction, with a mean ranking of 6.33 (S.D. = 1.39) on a 7-point Likert scale.
276
+
277
+ Table 4: 2-Factor ANOVA: User perception-Comfort. M: Modality, A: Application.
278
+
279
+ | Source | Comfort | p |
+ | --- | --- | --- |
+ | M | $F(2, 68) = 1.376$ | 0.259 |
+ | A | $F(1, 34) = 1.364$ | 0.251 |
+ | M × A | $F(2, 68) = 2.027$ | 0.14 |
+
+ Note: * indicates a significant difference, $p < 0.05$.
295
+
296
+ <graphics>
297
+
298
+ Figure 12: User perception of comfort across (a) Modality and (b) Application. Error bars: 95% CI.
299
+
300
+ § 6 DISCUSSION
301
+
302
+ Given the recent developments of EMS feedback in accelerating preemptive reflexes [51, 52, 84] and in slouching posture correction [53], we were interested in understanding whether EMS feedback could be utilized for correcting AWD. In comparison to the alternative techniques, we find several benefits to automatic correction using EMS. Our approach achieved significantly faster correction at high accuracy while delivering an equally comfortable user experience across tasks with different levels of engagement and posture awareness. Although research on postural control, sway analysis, and AWD alert systems has been conducted, those systems' correction responsiveness and user perception have not been measured or reported. Therefore, our study primarily focuses on evaluating the performance and user perception of our EMS feedback-based automatic AWD detection and correction technique against traditional audio and vibro-tactile feedback mechanisms.
303
+
304
+ Correction response times were measured from the time correction feedback was activated until balance was restored. The average correction response times were significantly faster for the EMS feedback modality in comparison to the audio and vibro-tactile modalities. In both application types, the EMS modality delivered faster AWD corrections, leading to faster stabilization and restoration of balance, as illustrated in Figure 15. This was also reflected in participants' comments on EMS: "the fastest feedback and made me correct the best", "liked the fast response", and "Perfect response, subtle but noticeable". The faster correction response times to EMS feedback are likely due mainly to the automatic stabilization and balance restoration, which does not require the user to process audio or vibro-tactile feedback before engaging in self-assessment and self-correction. The self-assessment and self-correction process in the audio and vibro-tactile feedback mechanisms places an additional cognitive load on users while they are engaged in their task and relies entirely on their willingness or intent to self-correct their posture. One participant's comment attests to this: "Audio-took me time to process the feedback command and then correct, Vibration-got my attention, EMS-pulling quickly didn't need my attention". In contrast, EMS feedback does not require the participant's attention in the correction process, allowing them to keep the cognitive and attentional resources on the primary task that would otherwise be required for auditory, visual, or sensory processing for postural control. Results also indicate that application type had no effect on correction response times, suggesting that EMS would be capable of delivering faster correction responses across a range of applications with varying levels of engagement and posture awareness. This frees up the visual, vestibular, and proprioceptive demands placed on the user, making it especially beneficial as a smart intervention technique for athletes in post-operative rehabilitation: preventing unnecessary AWD conditions that impede recovery, mitigating the risk of re-injury, rebuilding strength and motion, and restoring normal function, thereby ensuring proper recovery and a safer return to sport.
305
+
306
+ Table 5: 2-Factor ANOVA: User Perception-Task disruption (TD). M: Modality, A: Application. Note: * indicates a significant difference, $p < 0.05$.
307
+
308
+ | Source | TD | p |
+ | --- | --- | --- |
+ | M | $F(2, 68) = 0.036$ | 0.965 |
+ | A | $F(1, 34) = 0.280$ | 0.6 |
+ | M × A | $F(2, 68) = 1.427$ | 0.247 |
322
+
323
+ <graphics>
324
+
325
+ Figure 13: User perception of task disruption (TD) across (a) Modality and (b) Application. Error bars: 95% CI.
326
+
327
+ Participants' rankings of the perceived accuracy of correction feedback indicated that EMS feedback was more accurate than audio and equally accurate in comparison to vibro-tactile feedback. Some participants' comments reflected this: "Audio was most distracting", "EMS was a better form of feedback, was strong and detected even the slightest imbalance", "EMS gave me best feedback, I couldn't hear the audio feedback over the game", "EMS most accurate and best for correction, but could be uncomfortable for some people". Participants perceived the accuracy of EMS and vibro-tactile feedback equally well, which may be due to the explicit somatosensory confirmation provided by these two feedback types during the delivery and termination of correction feedback when AWD is detected and corrected, respectively.
328
+
329
+ Participants' rankings of their perceived level of comfort and task disruption indicated that neither modality nor application influenced user comfort or task disruption. Although both EMS and vibro-tactile feedback are non-invasive in nature, EMS feedback is known to produce a stronger somatosensory experience due to its ability to produce an involuntary muscular contraction along with a vibro-tactile effect. Nevertheless, participants perceived all three modalities to be equally comfortable and equally disruptive. This could be due to the careful calibration of an optimal EMS intensity that provides the user with a comfortable experience while generating a physiological response to effect a counter-weight shift. This perception of comfort and task disruption illustrates participants' acceptance of EMS feedback as a viable alternative to traditional feedback mechanisms, with the additional advantage of automatic posture correction freeing up cognitive resources for more important tasks. Participants' comments show that EMS "took time getting used to. It is like an Assisted PUSH, very useful when physical awareness is lacking" and "The pulling effect surprised me a bit but it was fine after". This acceptance shows EMS feedback's potential to be developed into a commercial product, making EMS-based smart intervention wearable technology available for everyday use, especially by younger adults who use mobile devices for gaming and social media consumption while standing, and by older adults engaged in work-related activities in industrial, manufacturing, or customer service sectors that require long standing hours. This is also supported by participants' willingness (80.55% of our healthy study population) to purchase EMS-based wearable AWD intervention technology if it were available as a commercial product.
330
+
331
+ <graphics>
332
+
333
+ Figure 14: User perception mean rankings for correction feedback accuracy, posture awareness, comfort, and task disruption across all modality and application types. Likert scale: 1 meaning not at all, 7 meaning completely. QS: Quiet Standing, MG: Mobile Gaming. Error bars: 95% CI.
334
+
335
+ <graphics>
336
+
337
+ Figure 15: Average correction response times across all modality and application types. Error bars: 95% CI.
338
+
339
+ It was also interesting to note that the EMS intensity required for effecting the counter-weight shift by stimulating the tibialis muscles was higher in comparison to another study on automatic detection and correction of slouching [53], in which slouched posture was corrected by stimulating the trapezius muscles (mean EMS intensity: tibialis = 51.25 mA, rhomboid = 43.47 mA). This may be because the rhomboid muscle is physiologically more accessible than the tibialis, which is regarded as a deeper muscle group, thereby necessitating a higher EMS intensity to recruit the motor neurons, cause an involuntary muscular contraction, and generate a physiological counter-weight shift of the desired magnitude and in the desired direction. Participants also reported shared responsibility in helping/aiding the correction process during the EMS feedback session. This illustrates participants' adaptability to new technology and demonstrates the positive learning effect produced by EMS feedback toward better postural control. Further, it demonstrates that EMS, with its somatosensory feedback, encouraged participants to get involved in the correction process. Finally, one participant commented "It's like trainer wheels on a bicycle", while others commented that EMS "Felt amazing", "Auto-correction is good", was "the fastest feedback and made me correct the best", and that "correction happens without thinking about it".
340
+
341
+ Finally, our system could be particularly beneficial in preventive health care and in the development of rehabilitation protocols for recovery after knee/ankle surgery, as it would allow healthcare specialists to develop customized recovery protocols for different individuals by varying the balance and time thresholds and the EMS intensity parameters as prescribed. This would ensure precise control of the weight distribution on the operated leg at different stages of recovery, maximizing the rebuilding of strength and mobility and minimizing the time to return to sport for athletes or to normal function for non-athlete patients. Additionally, our EMS feedback, when integrated with load sensors and IMUs embedded in shoes, could be utilized to detect AWD and dangerous tilt angles for automatic fall prevention in older adults and Parkinson's disease patients, who are at higher risk of injury from falls caused by loss of balance. Therefore, our autonomous AWD detection and correction system could be a useful alternative or addition to existing environment, health, and safety (EHS) guidelines for mitigating the risk of workplace injury, improving employee health, and supporting rehabilitation and preventive health care.
342
+
343
+ § 7 LIMITATIONS AND FURTHER WORK
344
+
345
+ One prominent limitation is the need to manually place electrodes on the body. To resolve this, we plan to integrate the electrodes into wearable clothing and to devise an auto-calibration system that can be customized to each individual's comfort. Another limitation of our study is that, although our system detects any imbalance instantly, we utilized a time threshold of 2.9 s to discriminate AWD conditions from other actions. This threshold could be shortened if our AWD detection system were integrated with IMU sensors to classify non-AWD actions. Our future work includes the development of a mobile application to allow participants to customize the balance ratio, time threshold, and EMS intensity. We also plan to gather data on how people with impaired balance fall compared to healthy individuals and to implement an automatic fall prediction and prevention system utilizing EMS.
346
+
347
+ § 8 CONCLUSION
348
+
349
+ We have demonstrated that our automatic EMS-based physiological feedback loop is a viable approach to AWD detection and correction, stabilizing balance through a counter-weight shift approach. Our auto-correction system utilizing EMS feedback demonstrated significantly faster posture correction response times compared to self-correction in the audio and vibro-tactile feedback modalities. Our approach also showed that participants perceived EMS feedback to be highly accurate and equally comfortable, and that it produced no more disruption than the alternative techniques it was tested against in both the quiet standing and mobile game applications, even though posture awareness across the application types was significantly different. Therefore, automatic AWD detection and correction utilizing EMS shows promising results and can be developed as an alternative method for AWD correction.
350
+
351
+ § ACKNOWLEDGMENTS
352
+
353
+ This work is supported in part by NSF Award IIS-1917728. We also thank the anonymous reviewers for their insightful feedback.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/H2GICxFVaGc/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,637 @@
1
+ # TwoTorials: A Remote Cooperative Tutorial System for 3D Design Software
2
+
3
+ Anonymous Author(s)
4
+
5
+ ## Abstract
6
+
7
+ Step-by-step tutorials have emerged as a key means for learning complex software, but they are typically designed for individuals learning independently. In contrast, cooperative learning, where learners can help each other as they work, is a fundamental pedagogical technique with many established benefits. To extend these benefits to learning 3D-design software, this work investigates the design of remote cooperative software tutorial systems. We first conduct an observational study of dyads of participants working on 3D-design tutorials, which reveals a range of potential benefits, challenges, and strategies for cooperation. Our findings inform the design of TwoTorials, a cooperative step-by-step tutorial system that helps pairs of remote users establish shared 3D context, maintain awareness of each other's activities, and coordinate their efforts. A user study reveals several benefits to this approach, including enhanced cooperation between learners, reduced effort and mental demand, increased awareness of peer activities, and higher subjective engagement with the tutorial.
8
+
9
+ Keywords: Software learning, 3D modeling, remote learning
10
+
11
+ ## 1 INTRODUCTION
12
+
13
+ Users starting out in 3D design software face a range of learnability challenges [36], which have motivated the development of a variety of innovative software learning systems (e.g., [13, 36, 51, 59, 78]). In particular, step-by-step tutorials have emerged as a key means for learning complex software, and tutorials of this type exist for nearly all popular applications. In some ways, these tutorials replicate the experience of working on non-trivial projects using the software, with the tutorial providing a clear goal and scaffolding the user's skills and abilities [56]. However, this format of tutorials is primarily designed for individuals learning independently, so users cannot benefit from over-the-shoulder learning [88] and other advantages that come from learning alongside other people, such as occurs in workplace settings. This is unfortunate, because education research has established cooperative learning as a fundamental pedagogical technique [22] with benefits in terms of learner motivation [18, 75], retention [5, 80], and effective knowledge gain and transfer [50, 64].
14
+
15
+ In this paper, we are interested in how the benefits of cooperative learning can be made available on-demand to remote learners of 3D-design software, both to address some of the above challenges with tutorials, and to extend the benefits of over-the-shoulder learning to users who would not be able to benefit from it otherwise (e.g., users who are learning at home, either informally or through online courses). To this end, we developed TwoTorials (Figure 1), a tutorial system that allows pairs of remote users to complete a tutorial in parallel, with mechanisms to facilitate beneficial learning interactions. Through the design and development of TwoTorials, and two studies, our work addresses the following research questions: (1) What are the salient components of cooperative learning for step-by-step 3D-design tutorials, including potential benefits, challenges, and common strategies? and (2) What are the appropriate design principles and interface features to support these components?
16
+
17
+ To address these questions, we first conducted an observational study of four dyads completing step-by-step tutorials in Tinkercad, a popular 3D solid modeling application. The results of this study revealed potential benefits, challenges, and coordination strategies between users cooperatively completing step-by-step tutorials, such as the need to rapidly establish shared context to support their communication, and a hesitance to help one another if not explicitly asked.
18
+
19
+ Based on this initial study, we derived a set of five design principles for cooperative software tutorial systems and instantiated these in our TwoTorials prototype. The system provides mechanisms to help establish shared context, synchronize user progress, and facilitate non-disruptive communication between the peers.
20
+
21
+ To evaluate TwoTorials, we ran a second user study with six dyads of participants, comparing a baseline system with minimal coordination features to the TwoTorials system. Our findings, based on a within-subjects mixed-methods user study, indicate that TwoTorials helped participants to complete tutorials faster, significantly reduced their effort and mental demand, and helped them to maintain a higher level of awareness of each other's progress.
22
+
23
+ Building on previous work in software learnability and cooperative learning, and our interest in fostering peer help for 3D design software tutorials, this work makes three main contributions. First, we contribute a deeper understanding of the potential benefits, challenges, and common behaviors surrounding cooperative learning of 3D design software. Second, based on these findings, we present a set of design principles for remote cooperative tutorial systems, and instantiate these principles in what we believe to be the first cooperative software tutorial system. Finally, a user study contributes an understanding of the benefits of such a system for learning feature-rich software, and points to directions for further work, including generalizations to larger peer groups and other software domains.
24
+
25
+ ## 2 RELATED WORK
26
+
27
+ This work is related to prior research on software learning and tutorial systems, cooperative learning and distributed teamwork, and cooperative interfaces for multiplayer games. We review each of these areas below.
28
+
29
+ ### 2.1 Software Learning and Tutorial Systems
30
+
31
+ Early research on software learning established a tendency for learners to abandon printed manuals and other learning materials that take time away from their primary task [8, 23, 36]. This has led to a rich body of research on systems and tools to support learning of software applications [13, 25, 78]. In particular, prior work has demonstrated the benefits of step-by-step tutorials and gamified tutorial systems [23, 59], as well as systems that allow users to learn in the context of realistic tasks [28, 35, 78].
32
+
33
+ A number of research projects have explored methods for harnessing community-created content, or improvements to learning content contributed by other learners, such as improved workflows [11, 37, 58], multimedia demonstrations of tutorial steps [13], or comments on tutorial content [7]. Recent research has also proposed new approaches for how groups learn 3D design [25, 48]. For example, Maestro [25] enables facilitators of 3D modelling workshops to track the progress of their classrooms in real-time, and provides simple mechanisms for providing help to students when needed.
34
+
35
+ While the above approaches appear to be valuable, research in the education community has demonstrated a range of benefits to active cooperative learning approaches, in which learners are able to directly interact with one another [14, 15, 52] (discussed in detail in the next section). To provide such an active learning experience, some work on software tutorial systems has integrated elements from games [21, 57, 59]. For example, CADament [59] enables users to learn 3D design software skills by observing the workflows of opponents in a competitive multiplayer learning game. The system enables competitors to engage in "over the shoulder" learning, but it is not focused on creating an environment where learners can work on tasks together to help and benefit from each other. Currently, no step-by-step tutorial systems exist that explicitly support remote cooperation. To fill this gap, we build on this body of prior work but focus on supporting active cooperative approaches for learning 3D design software with other users, and propose the first known step-by-step tutorial system specifically designed to support remote peer learning.
36
+
37
+ ![01963e71-f5cb-7395-8f61-4177ee48d7b3_1_166_1116_687_453_0.jpg](images/01963e71-f5cb-7395-8f61-4177ee48d7b3_1_166_1116_687_453_0.jpg)
38
+
39
+ Figure 1: The TwoTorials system.
40
+
41
+ ### 2.2 Remote and Cooperative Learning
42
+
43
+ Remote learning is becoming increasingly prevalent in our world. The global coronavirus (COVID-19)${}^{1}$ pandemic has demonstrated the need to establish effective remote learning environments [3, 10, 70], and is forcing educators and learners to rethink pedagogical methods and approaches [6, 20, 81]. A key question raised in this context is how to enable remote learning that preserves the social aspects of in-person learning, giving learners opportunities to engage and interact with each other, which has been shown to aid in motivation and creating positive learning experiences [16, 31, 47, 76, 91]. In the present work, we aim to support remote synchronous coordinated learning for the specific learning resource of step-by-step tutorials.
44
+
45
+ Collaborative learning is an educational approach in which learners work together to solve a problem or complete a task, recognizing that learning is a naturally social activity [22, 65]. There are many approaches to foster collaborative learning, and significant literature showing its effectiveness both in co-located and remote learning environments [22, 44, 72]. Cooperative learning is a particular type of collaborative learning in which a set of processes help people interact together to accomplish specific goals, helping themselves and others to learn [64, 65, 85]. In terms of specific benefits, social activities have been shown to increase the motivation to learn from others, and to result in effective knowledge gain and transfer [16, 18, 75]. Critically, this kind of social learning does not need to consist of continuous interaction between the learners [64, 73]: simply being able to work together in a social environment provides the opportunity for both passive and active learning. For example, "over-the-shoulder learning" can occur from observing another learner while they are completing a task, or by actively engaging in completing the task together [18, 88]. When compared to individual and competitive learning, cooperative learning has been demonstrated to be particularly effective for sustaining learner motivation [63, 71].
46
+
47
+ In terms of particular mechanisms for enabling remote cooperative learning, prior work has shown that cooperative learning is most effective when learners organize their activities, synchronize their effort, and maintain shared situational awareness [29, 50, 82]. Research on distributed teams has also shown the importance of awareness mechanisms that enable team members to inform one another of their status [40, 92]. Although team or group awareness can be easily maintained in co-located collaborative environments, it is difficult to maintain in remote collaboration [40]. Thus, groupware research has focused on interfaces and techniques that facilitate communication, increase group awareness, and enable cooperation, such as capturing eye gaze [18, 19] and other awareness cues [12, 92]. A full literature review of groupware research is beyond the scope of this paper, but we point readers to existing surveys on the topic [26, 40, 79].
48
+
49
+ ### 2.3 Cooperative Gaming
50
+
51
+ Games are a prominent example of systems that cultivate cooperative behavior [68, 84, 87]. Cooperative games provide different interfaces and mechanics to facilitate multiplayer interaction [1, 45, 55]. In cooperative games, a mutual understanding of the objectives between the players is essential to their success. Players must maintain awareness of each other and establish a common ground for communication [9]. Cooperative games also provide players with a variety of explicit and implicit communication mechanics, including awareness cues and cooperative communication mechanics [2, 12, 67, 87, 92]. These mechanics help teams communicate with each other and maintain a high level of awareness. In this work, we draw on the body of past work on multiplayer games and gamification to develop interfaces that foster effective cooperative learning in a qualitatively different domain: step-by-step tutorials for 3D design software.
52
+
53
+ In summary, our current work contributes to the understanding of cooperative learning for the domain of step-by-step tutorials for 3D design software, and the TwoTorials prototype system provides specific mechanisms to enable shared awareness and support cooperative learning in this domain. To the best of our knowledge, this work represents the first application of a cooperative active learning approach to step-by-step software tutorial systems.
54
+
55
+ ---
56
+
57
+ ¹ The 2019/2020 global novel coronavirus (COVID-19) pandemic [10].
58
+
59
+ ---
60
+
61
+ ## 3 Observational Lab Study
62
+
63
+ To inform the design of our cooperative step-by-step tutorial system, we conducted an observational study with pairs of participants. Our main goal was to understand how peers cooperate with each other to complete this type of tutorial, the challenges they face, how they synchronize their work, and how they encourage and support each other.
64
+
65
+ ### 3.1 Study Procedure
66
+
67
+ Each pair of participants completed two step-by-step tutorials for Tinkercad, drawn from those provided in-product in the software (Balloon Powered Car and Roman Dome), each lasting ~30 minutes, followed by an individual survey and a short semi-structured interview.
68
+
69
+ We intentionally tested a range of different cooperative tutorial setups (Figure 2) to gain a broad set of insights into the benefits and challenges that arise in different forms of collaboration. These included co-located vs. distributed setups (the distributed setup was simulated by a partition between the participants, which permitted them to talk to each other but required the use of screen sharing to view each other's workspaces), and separate vs. shared workspace setups (i.e., whether both participants were working on one project together, or working on the same project in parallel). Our decision to use a partition to simulate the distributed condition reduced the complexity of the study setup, and is an approach that has been used in prior work [18]. In this study, each dyad of participants completed two tutorials across one axis of the four setups shown in Figure 2, enabling them to comment in greater detail on the effect of that axis.
70
+
71
+ The experimenter took observational notes and assisted participants with technical difficulties, but did not help the participants complete the tutorial instructions. Video and audio recordings of the study sessions and post-study interviews were transcribed and analyzed for common themes. Each study session lasted ~60 minutes total.
72
+
73
+ ![01963e71-f5cb-7395-8f61-4177ee48d7b3_2_156_1236_711_437_0.jpg](images/01963e71-f5cb-7395-8f61-4177ee48d7b3_2_156_1236_711_437_0.jpg)
74
+
75
+ Figure 2: The four cooperative tutorial setups that were tested.
76
+
77
+ ### 3.2 Analysis
78
+
79
+ Interview transcripts and observations were analyzed using methods drawn from grounded theory [33]. Specifically, open coding was used to label transcript data, and emerging themes and patterns were identified by the first author and then shared and discussed with the broader research team. The themes that emerged relate to the potential benefits to users of cooperative learning of 3D design software, the challenges experienced by peers when learning 3D design software together, and the common strategies used to learn cooperatively.
80
+
81
+ ### 3.3 Participants
82
+
83
+ Four dyads (8 participants total; 6 male, 2 female; mean age 38.4 years, SD 10.3) were recruited via an email to employees of a large software company. As dyads volunteered together, they are best considered coworkers or friends. Two dyads were all male, and two were mixed (one male, one female). All participants reported having completed a bachelor's degree. All participants were screened for prior experience with 3D modeling software: 1/8 participants had no experience, 3/8 had minimal experience, 3/8 had some experience, and 1/8 had extensive experience. The most common 3D modeling applications previously used by participants were Maya, Blender, and Alias. Only one participant had prior experience with Tinkercad. Each participant received a $25 gift card as compensation for their participation.
84
+
85
+ ### 3.4 Results
86
+
87
+ We begin by discussing our observations of the tradeoffs of separate vs. shared workspaces and co-located vs. distributed workspaces, and then discuss our findings on the benefits, challenges, and strategies used by participants to cooperatively complete step-by-step tutorials.
88
+
89
+ #### 3.4.1 Separate vs. Shared Workspace
90
+
91
+ Neither the separate nor the shared workspace setup proved clearly superior for enabling cooperative learning; both showed advantages and disadvantages. Having a shared workspace forced the peers to collaborate, which was beneficial, but created a situation where the participant who was not 'driving' the system could become frustrated and feel they were missing out on learning:
92
+
93
+ When I was just watching it was frustrating to not be able to take actions myself. We were trying to figure out how the interface works, and I want to be able to create my own objects to explore the manipulators and what is possible. (P5)
94
+
95
+ Conversely, participants reported that working on separate workspaces created a feeling of working in parallel on separate tasks:
96
+
97
+ We had the video sharing, but we were both doing our own thing, so we only looked at each other's views to make sure our work looked somewhat similar. It didn't really seem like a cooperative effort [in the distributed separate workspace condition], more like we were just doing the same thing at the same time. (P3)
98
+
99
+ This observation is consistent with prior work on personalizable groupware that can support individual and group activities [34, 55]. A design approach that emerged from this observation was the idea of a hybrid system, which would allow each of the peers to benefit from actively working on the tutorial individually, while encouraging cooperation and peer help.
100
+
101
+ #### 3.4.2 Co-located vs. Distributed
102
+
103
+ Contrasting the co-located and distributed setups revealed a range of challenges to coordinating effort when participants were distributed. When co-located, it was much easier for participants to make spatial references to parts of the 3D environment and to assist one another by looking at each other's screens, pointing at parts of their peer's screen, or even taking over the mouse of their peer to rotate the camera or make simple changes to 3D objects. Consistent with prior work on collaborative remote physical tasks [29, 54], cooperative help-giving and receiving in a 3D design tutorial was much more difficult when participants were distributed. Participants were not always able to clearly understand each other due to a mismatch in their respective views of the 3D environment, and providing verbal instructions became complex without the ability to ground the instructions in spatial references, or to make direct changes:
104
+
105
+ Explaining how I want my partner to try using the manipulator with words is much slower than just being able to do it myself. (P4)
106
+
107
+ These observations suggest that additional coordination mechanisms are needed to enable productive cooperative learning in distributed setups.
108
+
109
+ #### 3.4.3 Benefits of Cooperative Learning of 3D Design Software
110
+
111
+ In terms of the benefits to cooperative learning of 3D design software, participants reported having an overall positive experience, and suggested that this approach allowed them to gain additional insights beyond the tutorial content:
112
+
113
+ When we both were doing the tutorial, it just felt that, wow, that moved on very quickly, and [I] actually still learned something, and it might not [have] been what was intended to be learned through the steps, but, like, the other person's insights. (P2)
114
+
115
+ Participants also pointed out the benefit of being able to quickly detect errors and identify if they were misunderstanding the tutorial instructions:
116
+
117
+ I think working together has a lot of advantages. You can detect errors very quickly and keep on making progress. (P5)
118
+
119
+ Participants also indicated that cooperatively working on a tutorial helped them to accelerate their learning and sustain motivation to learn the tutorial content:
120
+
121
+ It accelerated the learning since it was a shared experience and we could communicate what our successes and failures were to each other. (P8)
122
+
123
+ Overall, these observations point to several potential benefits of cooperative learning of 3D design software, which are worthy of further investigation.
124
+
125
+ #### 3.4.4 Strategies for Cooperation
126
+
127
+ Our observations and interviews indicated several common strategies that peers used to cooperate with one another. When one peer sought help, we observed that the helper's first step was to establish common ground and a shared 3D context. For example, they would ask questions such as, "which view are you on - top, side, bottom?", then change to that view and proceed to make recommendations and provide help:
128
+
129
+ Got to first understand the language and perspective and then give feedback after. (P4)
130
+
131
+ This strategy was more common when participants were distributed. Relatedly, we observed that peers would frequently communicate which step of the tutorial they were on, or signal to their peer that they were moving on to the next step. The following quote is P4's response to a question about the techniques or practices they used to work together and synchronize effort with their peer:
132
+
133
+ Make sure to communicate that we were on the same step and sub-step. (P4)
134
+
135
+ This strategy suggests that mechanisms for establishing shared context between learners could be beneficial, particularly if they can support the sharing of 3D viewpoints and the step of the tutorial a learner is currently on.
136
+
137
+ A final beneficial strategy we observed was that looking at their peer's workspace provided learners with insights into their own work and how it could be improved. This over-the-shoulder learning [88] was observed in multiple instances where peers would spend time observing each other completing a step of the tutorial and then attempt it themselves.
138
+
139
+ I can look at the other person's work and say, mine doesn't look anything like that, and then you know there is a problem. (P1)
140
+
141
+ In terms of design guidance, providing explicit support for this learning strategy could be valuable.
142
+
143
+ #### 3.4.5 Challenges in Cooperative Learning of 3D Design Software
144
+
145
+ Finally, we observed several challenges that can come from working cooperatively on step-by-step tutorials for 3D design software. For example, participants struggled to maintain awareness of each other's activities, due to a lack of shared context:
146
+
147
+ Sometimes it is difficult when you can not see what I'm seeing, you would be like, oh it is like this, and I'm like no it is not, we both are seeing different things, and we are arguing about nothing, that was frustrating. (P5)
148
+
149
+ While maintaining awareness is a known issue for groupware applications [27, 40-42, 92], these challenges appeared to be exacerbated by working in a 3D environment, where each peer had a different camera orientation on the scene, making it difficult to establish shared context when help was needed:
150
+
151
+ At some times, we both were oriented in different ways, and I'm like something is wrong here, and she is like, oh no we just need to adjust the orientations to match. (P1)
152
+
153
+ Another challenge reported by participants was that it was difficult to determine when feedback or help was needed or would be welcomed by their peer. This was especially prominent in the distributed setups, where awareness of each other's activities was weaker. We believe that the domain of 3D design software exacerbates this problem, because the editing history of a model, or any mistakes made on previous steps, are not obvious to a peer observing the model "over the shoulder" as the user works on it.
154
+
155
+ Finally, while not necessarily a challenge specific to 3D design software, peers found it difficult to synchronize their progress in the step-by-step tutorials. In the distributed setups, participants would verbally share which step they were on in the tutorial, to help each other maintain constant awareness of each other's progress, and to signal if one of them was falling behind and might need help. This was less of a problem in the co-located setups, where participants could glance at their peer's screen to get the same information:
156
+
157
+ I found that my partner jumped to the next step. I needed to confirm what step he was on so that I could help with the next step. (P4)
158
+
159
+ Past work has suggested that this kind of communication overhead can be distracting [62], which can detract from the learning process. Thus, it may be beneficial to design features to reduce this "orienting communication" overhead.
160
+
161
+ ## 4 Design Principles
162
+
163
+ The results of our observational study complement prior literature and provide an understanding of the main challenges and breakdowns faced by peers learning 3D design software through step-by-step tutorials. In particular, our findings are consistent with known issues surrounding control ownership and co-located use of groupware systems, but also reveal important and unique insights specifically related to both the cooperative use of step-by-step tutorials and the learning challenges of 3D software. Pulling together the observations and insights from the study, we suggest a set of five design principles for cooperative step-by-step tutorial systems for 3D design software:
164
+
165
+ ### 4.1 Help Establish Shared 3D Context (D1)
166
+
167
+ The system should assist with establishing and maintaining shared 3D context between peers, to make giving and receiving help easier. The need to establish and maintain shared 3D context has been identified in prior work [24, 30, 90], but it presents a particular challenge for step-by-step tutorials, where each user may have a different camera position and orientation on a separate 3D workspace, whose content may be at a different stage of the tutorial than their peer's. This makes it difficult to establish the context necessary to reference 3D objects or meaningfully discuss their orientation.
168
+
169
+ ### 4.2 Balance Independent Action with Encouraging Collaboration (D2)
170
+
171
+ Two competing challenges we observed were peers becoming frustrated at not being able to 'drive' in the shared workspace condition, and peers not engaging with each other in the separate workspace condition. Prior work has shown that giving users power over navigation, manipulation, and representation within shared workspaces supports collaboration, but has tradeoffs [39]. The system should balance the need for independence with encouraging collaboration between the peers, to create a beneficial cooperative learning experience where both users are engaged with the tutorial task.
172
+
173
+ ### 4.3 High-Level Awareness of Progress (D3)
174
+
175
+ The system should provide learners with high-level awareness of where their peer is in the tutorial steps and in the 3D workspace, and help make learners aware of any challenges or setbacks faced by their peer. This is particularly relevant for 3D environments, where it is more difficult to maintain awareness and establish mutual orientation and view between peers [24, 30].
176
+
177
+ ### 4.4 Non-disruptive Communication Mechanisms (D4)
178
+
179
+ The system should provide non-disruptive communication modalities that simplify and complement the beneficial cooperative learning practices that we observed. Prior work suggests that communication can be less disruptive when timing and communication method are selected appropriately [17].
180
+
181
+ ### 4.5 Synchronize Progress (D5)
182
+
183
+ To increase the likelihood of beneficial cooperation, and to avoid the situation where one learner quickly finishes the tutorial and becomes bored, the system should encourage peers to work together and synchronize their progress through the tutorial steps.
184
+
185
+ Guided by the principles above and previous research in this area, we developed TwoTorials, a cooperative step-by-step tutorial system designed for pairs of learners, which we present next.
186
+
187
+ ## 5 The TwoTorials System
188
+
189
+ TwoTorials offers a cooperative learning environment for two distributed users. The pair of users work cooperatively to learn 3D design software by each completing the same step-by-step tutorial in parallel (Figure 1). The system includes a set of features to support coordination and establish shared context within the tutorial. In the current work, we designed the system to work with Tinkercad, a popular web-based 3D solid modeling tool [4]. In this section, we start with a high-level overview of TwoTorials, then highlight its main features, noting in parentheses the design principles each feature is intended to address and citing prior work that influenced its design.
190
+
191
+ ### 5.1 System Overview
192
+
193
+ Each remote peer gets an individual workspace, as well as access to a constantly-updating and editable view of their peer's workspace, enabling users to observe their peers and actively assist them if needed. Peers can communicate, help, and encourage each other using both verbal and non-verbal communication modalities. The system also provides implicit awareness cues that help learners maintain awareness of their own progress in the tutorial and notice whether their peer is falling behind and may need help, and it can enforce a level of interdependence between the learners as a means to encourage them to work together and help each other out. Finally, as a user completes each step, their peer is provided with a screen recording of their efforts on that step, providing further material for the peers to reference when helping one another. The sections that follow describe these features in detail.
194
+
195
+ ![01963e71-f5cb-7395-8f61-4177ee48d7b3_4_961_1440_639_337_0.jpg](images/01963e71-f5cb-7395-8f61-4177ee48d7b3_4_961_1440_639_337_0.jpg)
196
+
197
+ Figure 3: User and peer workspaces. (A) the user's workspace; (B) a constantly-updating view of the peer's screen; (C) an expanded view of the peer's screen (accessible by clicking the small view).
198
+
199
+ ### 5.2 Seamless View, Transition, and Editing between Workspaces
200
+
201
+ In TwoTorials, each user gets a small, constantly-updating view of their peer's screen (Figure 3B) displayed above their own workspace (Figure 3A). Clicking the small view of their peer's screen expands it to full screen (Figure 3C) and allows the user to directly edit their peer's workspace. Through these mechanisms, learners are able to constantly monitor their peer's progress, enabling over-the-shoulder learning [88], and helping to establish shared context (D1). The ability to make changes to their peer's workspace in the expanded view enables a user to directly provide assistance or demonstrate editing operations on the peer's 3D model (D2). Prior work has shown these kinds of seamless transition mechanisms from individual to shared spaces to be important for facilitating collaboration [29, 32, 53, 83].
202
+
203
+ ![01963e71-f5cb-7395-8f61-4177ee48d7b3_5_155_443_715_240_0.jpg](images/01963e71-f5cb-7395-8f61-4177ee48d7b3_5_155_443_715_240_0.jpg)
204
+
205
+ Figure 4: Drawing Annotations enable the user to draw over the 3D workspace, supporting cooperation and conversational grounding between peers [2, 44].
206
+
207
+ ### 5.3 Verbal and Non-Verbal Communication Features
208
+
209
+ The system provides in-tutorial voice and text chat, allowing peers to verbally communicate or send text messages to each other (D4). To complement these communication methods, the user can also create free-hand drawing annotations on top of both workspaces, in the form of free-hand lines and simple shapes (Figure 4) [2, 74]. These non-verbal communication mechanisms help users ground their conversations or direct their peer's attention to particular parts of the UI or 3D workspace (D1). Prior work has shown such mechanisms to be effective for keeping users engaged in collaborative activities in games [2, 89] and online courses [44].
210
+
211
+ Users can also send peer pings, a set of predefined visual messages that provide simple, non-disruptive communication between users. Clicking one of these pings (Figure 5) sends a visual message that is displayed on top of the peer's workspace for a couple of seconds. Each of these pings is designed to indicate a situation where cooperation is needed, such as having a question, being stuck, or expressing the need to move faster (D5). Peer pings can also celebrate success or provide encouragement, such as fireworks, high-fives, and thumbs-ups (D2). This type of lightweight communication mechanism has been shown to be effective in encouraging participation in gaming and live streaming contexts [43, 44, 69].
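+
+ As a rough illustration of how these pings might be represented and delivered, the sketch below models a ping as a transient message that dismisses itself after a couple of seconds. The ping names, the PeerChannel interface, and the timing constant are illustrative assumptions, not TwoTorials' actual implementation:
+
+ ```typescript
+ // Hypothetical sketch of the peer-ping mechanism; all names are assumptions.
+ type PeerPing =
+   | "question"    // "I have a question"
+   | "stuck"       // "I'm stuck on this step"
+   | "speed-up"    // "let's move faster"
+   | "thumbs-up"   // encouragement
+   | "high-five"
+   | "fireworks";  // celebrate success
+
+ interface PeerChannel {
+   send(message: { kind: "ping"; ping: PeerPing; sentAt: number }): void;
+ }
+
+ const PING_DURATION_MS = 2500; // pings are transient overlays, not persistent chat
+
+ function sendPeerPing(channel: PeerChannel, ping: PeerPing): void {
+   channel.send({ kind: "ping", ping, sentAt: Date.now() });
+ }
+
+ // On the receiving side, the overlay hides itself after a couple of
+ // seconds, keeping the interruption lightweight (D4).
+ function showPingOverlay(ping: PeerPing, render: (p: PeerPing | null) => void): void {
+   render(ping);
+   setTimeout(() => render(null), PING_DURATION_MS);
+ }
+ ```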
212
+
213
+ ![01963e71-f5cb-7395-8f61-4177ee48d7b3_5_190_1569_642_331_0.jpg](images/01963e71-f5cb-7395-8f61-4177ee48d7b3_5_190_1569_642_331_0.jpg)
214
+
215
+ Figure 5: Peer Pings are predefined visual messages that provide lightweight communication between peers.
216
+
217
+ ### 5.4 Implicit Awareness Cues
218
+
219
+ To enable users to maintain awareness of their peer's progress (D3), each user has a simple avatar that moves through the list of tutorial steps as they proceed through the tutorial content (Figure 6). A timer indicates the amount of time the user has spent on the current step, further fostering peer awareness. Finally, to provide spatial awareness [41, 42] of a peer's activities within a step, the system displays the peer's mouse cursor in the user's workspace.
220
+
221
+ The non-verbal awareness cues described above allow users to maintain awareness of each other's activities in a lightweight manner, without the need to constantly communicate their status explicitly (D5). Similar mechanisms have been shown to be important for enabling users to maintain shared awareness [40, 92], especially for distributed environments, which lack the sensory cues that ease collaboration in co-located settings [12].
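+
+ One lightweight way to realize such cues is a small awareness record broadcast on a timer, as in the following sketch; the field names and the update interval are illustrative assumptions rather than the system's actual protocol:
+
+ ```typescript
+ // Hypothetical sketch of the implicit awareness cues; names are assumptions.
+ interface AwarenessUpdate {
+   step: number;                     // tutorial step the user is currently on
+   secondsOnStep: number;            // drives the per-step timer shown to the peer
+   cursor: { x: number; y: number }; // normalized mouse position in the workspace
+ }
+
+ function startAwarenessBroadcast(
+   getState: () => AwarenessUpdate,
+   send: (update: AwarenessUpdate) => void,
+   intervalMs = 250, // frequent enough for smooth cursor motion, cheap to send
+ ): () => void {
+   const id = setInterval(() => send(getState()), intervalMs);
+   return () => clearInterval(id); // call to stop broadcasting when the tutorial ends
+ }
+ ```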
222
+
223
+ ![01963e71-f5cb-7395-8f61-4177ee48d7b3_5_961_534_646_241_0.jpg](images/01963e71-f5cb-7395-8f61-4177ee48d7b3_5_961_534_646_241_0.jpg)
224
+
225
+ Figure 6: Tutorial-progress awareness cues, including the user avatar, and indicator of time spent on the current step.
226
+
227
+ ### 5.5 Progress Control Mechanism
228
+
229
+ Before starting a tutorial, users can select one of three levels of step-synchronization, which affect how much the system enforces synchronization of activities between the peers (Figure 7). At the most extreme, the Strict setting prevents each user from moving on to the next step until both have finished the current step (indicated by clicking a button). The Moderate setting enables a user to move one step ahead of their peer, and if they try to move any further they are prompted to wait. Finally, the Free setting puts no restrictions on movement through the steps. These mechanisms provide system-imposed synchronization of progress (D5), primarily motivated by our observational study results.
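+
+ To make the three settings concrete, the following sketch shows one way the gating logic could be expressed; the "done" flags stand in for the button click described above, and all names are our own illustrative assumptions rather than the system's actual source:
+
+ ```typescript
+ // Hypothetical sketch of the step-synchronization levels.
+ type SyncMode = "strict" | "moderate" | "free";
+
+ interface ProgressState {
+   myStep: number;    // step index I am currently on
+   peerStep: number;  // step index my peer is currently on
+   myDone: boolean;   // I clicked the "finished step" button
+   peerDone: boolean; // my peer clicked the "finished step" button
+ }
+
+ function canAdvance(s: ProgressState, mode: SyncMode): boolean {
+   switch (mode) {
+     case "strict":
+       // Neither user moves on until both have finished the current step.
+       return s.myStep === s.peerStep && s.myDone && s.peerDone;
+     case "moderate":
+       // A user may get at most one step ahead; any further and they are
+       // prompted to wait for their peer.
+       return s.myStep + 1 - s.peerStep <= 1;
+     case "free":
+       return true; // no restrictions on movement through the steps
+   }
+ }
+ ```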
230
+
231
+ ![01963e71-f5cb-7395-8f61-4177ee48d7b3_5_975_1269_633_227_0.jpg](images/01963e71-f5cb-7395-8f61-4177ee48d7b3_5_975_1269_633_227_0.jpg)
232
+
233
+ Figure 7: The progress control mechanism enables users to control the progress of each peer in the tutorial.
234
+
235
+ ### 5.6 Workflow Replay
236
+
237
+ The system records a video of each user's screen as they work on a step and makes this recording available to their peer upon proceeding to the next step. By clicking a replay icon, the peer can view a video showing the exact actions the user took to complete that step (Figure 8). Past work has shown that this kind of short demonstration video can be particularly valuable for learning design software [37, 59], and this feature also frees a user from having to explain the exact process they followed - they can simply prompt their peer to check the recorded video (D2).
238
+
239
+ ![01963e71-f5cb-7395-8f61-4177ee48d7b3_6_158_147_706_275_0.jpg](images/01963e71-f5cb-7395-8f61-4177ee48d7b3_6_158_147_706_275_0.jpg)
240
+
241
+ Figure 8: Video replay window.
242
+
243
+ ### 5.7 Access to Online Help Resources
244
+
245
+ The system provides quick in-application access to online and community-based help resources (e.g., the Tinkercad help center). This enables users to access help without disengaging from the tutorial experience (D2).
246
+
247
+ ### 5.8 System Implementation
248
+
249
+ TwoTorials was implemented in two parts. First, the step-by-step tutorial system was built as a Unity application. This enabled us to quickly build a multi-user system by taking advantage of Unity's networking capabilities, which provide a reliable, low-latency connection between the peers for sending media streams, including voice and text chat, user progress data, shared annotations, and peer pings. Screen recording and playback were implemented using a Unity plugin that enables real-time video and audio capture and streaming. The Tinkercad application was embedded using a Unity web-browser component, which mirrored a locally-running version of Tinkercad. The second part of the system was a modified version of the Tinkercad application, which added the concurrent editing features the system requires.
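+
+ While the exact wire format of TwoTorials is not published here, the kinds of data exchanged over the peer connection can be summarized with a discriminated union, as in this hypothetical sketch:
+
+ ```typescript
+ // Hypothetical message protocol covering the streams described above.
+ type PeerMessage =
+   | { kind: "progress"; step: number; done: boolean }  // tutorial progress data
+   | { kind: "cursor"; x: number; y: number }           // peer cursor position
+   | { kind: "annotation"; shape: "freehand" | "line" | "rect";
+       points: Array<{ x: number; y: number }> }        // shared drawing annotations
+   | { kind: "ping"; ping: string }                     // peer pings (see above)
+   | { kind: "chat"; text: string }                     // text chat
+   | { kind: "step-recording"; step: number; videoUrl: string }; // workflow replay
+
+ // Dispatching on the message kind keeps each feature's handler independent.
+ function handlePeerMessage(msg: PeerMessage): void {
+   switch (msg.kind) {
+     case "progress": /* move the peer avatar along the step list */ break;
+     case "cursor": /* reposition the peer's cursor in my workspace */ break;
+     case "annotation": /* draw the shape over the workspace */ break;
+     case "ping": /* show a transient overlay */ break;
+     case "chat": /* append to the text chat */ break;
+     case "step-recording": /* enable the replay icon for that step */ break;
+   }
+ }
+ ```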
250
+
251
+ ### 5.9 Tutorial Format, Authoring, and Progress Tracking
252
+
253
+ Each tutorial step consists of text and images (Figure 1, left). We adopted this format to match as closely as possible the in-product tutorials available in Tinkercad, which we used for the baseline condition in our evaluation study, described in the next section. In terms of tutorial authoring, text was entered manually, and figures were added to a folder that was read by the system. TwoTorials tracks progress solely based on navigation through the tutorial steps (users explicitly clicking "next step"). More sophisticated tracking of Tinkercad tool usage or of the 3D content being created is an interesting avenue for future work.
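+
+ A minimal sketch of this tutorial format and its click-driven progress tracking is shown below; the types and folder layout are assumptions made for illustration:
+
+ ```typescript
+ // Hypothetical representation of an authored tutorial.
+ interface TutorialStep {
+   text: string;     // manually entered instruction text
+   images: string[]; // image filenames read from the tutorial's folder
+ }
+
+ interface Tutorial {
+   name: string;     // e.g. "Balloon Powered Car"
+   steps: TutorialStep[];
+ }
+
+ class TutorialProgress {
+   private step = 0;
+   constructor(private tutorial: Tutorial) {}
+
+   current(): TutorialStep {
+     return this.tutorial.steps[this.step];
+   }
+
+   // Called only when the user explicitly clicks "next step"; the system
+   // does not infer progress from tool usage or the 3D content created.
+   next(): boolean {
+     if (this.step + 1 >= this.tutorial.steps.length) return false;
+     this.step += 1;
+     return true;
+   }
+ }
+ ```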
254
+
255
+ ## 6 Evaluation
256
+
257
+ We conducted a user study to understand users' reactions to the TwoTorials system and its cooperative features, and to gain further insights into the cooperative experience of step-by-step tutorials.
258
+
259
+ ### 6.1 Study Procedure and Design
260
+
261
+ The study followed a within-subjects mixed-methods design, with each dyad of participants completing two step-by-step tutorials, one using TwoTorials and the other using Tinkercad's built-in tutorial interface. These tutorials were the same as those used in the previous observational study, which had revealed them to be of about the same level of difficulty. For the TwoTorials condition, the progress control setting was set to 'Free'. Although in-application voice chat was implemented in the system, the setup of the study meant we did not need to use it - participants were simply instructed to talk with each other over the divider (similar to our observational study and methods used in prior work [18]).
262
+
263
+ In the baseline condition, participants used the Tinkercad tutorial along with a live screencast of their workspace, shared with their peer through Google Hangouts. We provided this capability in the baseline condition because it seemed unrealistic for users to collaborate with no view of their peer's workspace whatsoever. Participants in this condition were also able to talk with each other over the divider.
264
+
265
+ To rule out ordering and learning effects, condition order and the mapping of tutorials to conditions were fully counterbalanced.
266
+
267
+ At the start of the study, participants gave informed consent and completed a questionnaire on demographics and prior 3D design software experience. Next, the experimenter introduced the study system and the available cooperative features before allowing the participants to work on the tutorial. The experimenter did not help participants work through the tutorial instructions, but did provide limited assistance in response to technical difficulties with the study system. After completing each condition, participants answered a set of Likert-style questions on the overall experience, ease of following the tutorial, learning, and usefulness of the cooperative features. The NASA-TLX questionnaire was also administered to assess workload [45, 46]. At the end of the study session, a post-study open-ended questionnaire was administered. The study took ~60 minutes total to complete.
268
+
269
+ ### 6.2 Participants
270
+
271
+ Six dyads (12 participants total; 10 male, 2 female; mean age 35.8 years, SD 7.9) were recruited via an email to employees of a large software company. Each dyad was either friends or coworkers, with 1 dyad all female and 5 all male. All participants were screened for prior experience with 3D modeling software: 1/12 participants had no experience, 4/12 had minimal experience, 5/12 had some experience, and 2/12 had extensive experience. The most common 3D modeling applications previously used by participants were Fusion 360, Maya, and SolidWorks. Only two participants had prior experience with Tinkercad. Each participant received a $25 gift card as compensation for their participation.
272
+
273
+ ### 6.3 Results
274
+
275
+ We begin by presenting the main quantitative findings, comparing TwoTorials to the baseline. We then present results from the post-condition questionnaires, and the usage and subjective ratings for TwoTorials features. Finally, we discuss our qualitative and semi-structured interview findings.
276
+
277
+ ![01963e71-f5cb-7395-8f61-4177ee48d7b3_6_928_1552_714_219_0.jpg](images/01963e71-f5cb-7395-8f61-4177ee48d7b3_6_928_1552_714_219_0.jpg)
278
+
279
+ Figure 9: Completion time in seconds and NASA-TLX results (lower is better).
280
+
281
+ #### 6.3.1 Performance Results - TwoTorials vs. Baseline
282
+
283
+ A Wilcoxon Signed-Rank Test showed that dyads spent significantly less time completing the tutorial together using TwoTorials (M = 18.5) compared to the Baseline (M = 23) (z = 2.831, p < .05) (Figure 9). These findings provide evidence that the features of TwoTorials helped participants to complete the tutorial together more quickly.
284
+
285
+ #### 6.3.2 Cognitive Load Results - TwoTorials vs. Baseline
286
+
287
+ For the cognitive load results (Figure 9), a Wilcoxon Signed-Rank Test showed significantly lower ratings for effort (z = 2.668, p < .01), mental demand (z = 2.201, p < .05), and frustration (z = 2.254, p < .05) for the TwoTorials condition as compared to the baseline condition. These findings provide compelling evidence that the features of TwoTorials helped reduce the cognitive load on participants. For the rest of the TLX subscales, we found no significant differences.
288
+
289
+ ![01963e71-f5cb-7395-8f61-4177ee48d7b3_7_555_454_310_203_0.jpg](images/01963e71-f5cb-7395-8f61-4177ee48d7b3_7_555_454_310_203_0.jpg)
290
+
291
+ A = I learned something from this tutorial
+
+ B = Co-operatively working with my peer helped me to learn the tutorial content
+
+ C = I learned something new from my peer, beyond what was in the tutorial itself
+
+ D = I helped my peer to learn the tutorial content
+
+ E = Working on this tutorial cooperatively was an enjoyable experience
298
+
299
+ Figure 10: Rating on the learning experience questionnaire (higher is better). Error bars show standard error.
300
+
301
+ #### 6.3.3 Questionnaire Results - TwoTorials vs. Baseline
302
+
303
+ When asked which of the two conditions they preferred overall, TwoTorials was preferred by 5/12 participants, with 6/12 participants expressing no preference and 1/12 preferring the baseline condition. While this suggests a preference for the TwoTorials system, a Wilcoxon signed-rank test did not show this difference to be statistically significant.
304
+
305
+ For each condition, we asked participants a set of questions on what they learned from the tutorial experience (Figure 10). For most of the questions we found no significant difference, but a Wilcoxon Signed-Rank Test showed a significant difference in medians for the statement "I learned something from this tutorial", favoring the TwoTorials condition over the baseline condition (z = 2.000, p < .05).
306
+
307
+ We also asked participants a set of questions on various other aspects of the tutorial-following experience (Figure 11). A Wilcoxon Signed-Rank Test determined that there was a significantly higher median for the TwoTorials system for "maintaining awareness of your peer's activities" as compared to the baseline (z = 2.197, p < .05). We did not find a significant difference for the other questions in this group.
308
+
309
+ ![01963e71-f5cb-7395-8f61-4177ee48d7b3_7_522_1451_342_194_0.jpg](images/01963e71-f5cb-7395-8f61-4177ee48d7b3_7_522_1451_342_194_0.jpg)
+
+ A = Following the tutorial instructions
+
+ B = Helping your peer with completing parts of the tutorial
+
+ C = Receiving help from your peer on parts of the tutorial
+
+ D = Communicating with your peer
+
+ E = Maintaining awareness of your peer's activities
+
+ F = Using the tutorial system
316
+
317
+ Figure 11: Ratings of the tutorial systems for various statements. Error bars show standard error.
318
+
319
+ #### 6.3.4 TwoTorials Features
320
+
321
+ For the TwoTorials condition, we analyzed how many times each feature was used by participants, and asked participants to rate the usefulness of the individual features. In terms of usage, participants switched to their peer's workspace an average of 4.8 times (SD = 1.94) and edited their peer's workspace directly 2.2 times (SD = 0.75). Participants annotated each other's workspaces 2.3 times (SD = 1.03) and sent 4.6 peer pings (SD = 2.42). Considering that dyads in the TwoTorials condition took less than 25 minutes to complete the tutorial, these numbers suggest that the features of TwoTorials were used frequently by participants.
322
+
323
+ The ratings of usefulness for the individual features of TwoTorials are shown in Figure 12. Participants generally reported the features to be useful. There was strong support for the voice chat, the ability to view the peer's workspace, and the ability to directly edit the peer's workspace. The only feature to receive a strong negative rating for usefulness was the text chat, which is likely because the voice chat provided a much richer and more convenient communication medium.
324
+
325
+ ![01963e71-f5cb-7395-8f61-4177ee48d7b3_7_926_419_730_290_0.jpg](images/01963e71-f5cb-7395-8f61-4177ee48d7b3_7_926_419_730_290_0.jpg)
326
+
327
+ Figure 12: Rating of individual TwoTorials features.
328
+
329
+ ### 6.4 Participant Feedback and Observations
330
+
331
+ At the end of the study session, we asked participants to contrast the experience of working with TwoTorials and the baseline tutorial system. Qualitative data were analyzed using methods drawn from grounded theory [33]. Specifically, open coding was used to label the data and emerging themes were identified by the first author and then shared and discussed with the broader research team.
332
+
333
+ #### 6.4.1 Improved Communication, Awareness, and Coordination
334
+
335
+ Participants reported being able to coordinate with each other more effectively using the TwoTorials system, with smoother information flow between peers. Participants noted that having a constant view into their peer's workspace helped them solve problems more effectively without breaking the flow of working on the tutorial:
336
+
337
+ Having the constant visual of my peer helped quite a bit to solve common problems on my workflow instead of having to stop the flow to find the assistance. (P11)
338
+
339
+ Participants also appreciated the ease with which they could switch from viewing their own workspace to that of their peer:
340
+
341
+ The live view of your companion was a big plus. Easily being able to switch to their view and affect their workspace is a big plus as well. (P12)
342
+
343
+ Participants reported that TwoTorials helped them to maintain an ongoing awareness of the other user, and this helped to encourage dialog:
344
+
345
+ The first system [TwoTorials] reminded me to think about discussing, because the view of the other screen was always present [...] it helped slightly by encouraging dialog. (P9)
346
+
347
+ Participants also described using the shared awareness features to ground their discussions with their peer:
348
+
349
+ It helped to see where the person was so we could say "look at my screen this is what you're supposed to have." (P6)
350
+
351
+ #### 6.4.2 A Cooperative Learning Environment
352
+
353
+ A second common theme was that the TwoTorials features created an environment where cooperative learning was supported. Along these lines, one feature cited by participants was the ability to directly edit their peer's workspace. We observed several occasions where one peer would provide help by directly making changes in the workspace of their peer. Participants reported that this was an efficient way to help each other:
354
+
355
+ The fact that I could work directly on my peer's workspace in [TwoTorials], let me help him more efficiently. (P7)
356
+
357
+ Participants also expressed appreciation for the annotation features, and highlighted how they created more of a "lesson experience" than a tutorial:
358
+
359
+ In [TwoTorials], the fact that my peer could chime in and add his notes in real time made it more of a lesson experience than a tutorial - the chance to clarify and question each other as we followed the steps was a very useful addition. (P11)
360
+
361
+ This quote is particularly encouraging because it suggests the features of the TwoTorials system were able to change the experience to one where cooperation and helping each other was more natural. Along similar lines, P8 suggested that TwoTorials could be used in formal educational settings to enable teacher-student interactions:
362
+
363
+ In [TwoTorials], getting help was much easier. I would imagine a TA or teacher helping students through that system. (P8)
364
+
365
+ Participants also commented that they took advantage of the expertise of their peer less in the baseline condition:
366
+
367
+ If I got stuck, the person knew exactly where I was (they were there too or had just been there) and most likely had the same problems. I used the person less [in the baseline condition]. (P6)
368
+
369
+ Overall, this feedback provides validation that the TwoTorials features encouraged cooperation and helped to create an environment that supports cooperative learning.
370
+
371
+ #### 6.4.3 Motivating and Enjoyable Experience
372
+
373
+ Finally, participants reported enjoying the cooperative tutorial experience (in both conditions), and found it to be engaging:
374
+
375
+ Working cooperatively was fun and kept me engaged. Also, I learned some tips from the other person. (P9)
376
+
377
+ While participants reported enjoying the experience of cooperating in both tutorials, some participants noted that TwoTorials enhanced this aspect of the experience:
378
+
379
+ In [TwoTorials], the second layer of interaction added a different [kind] of enjoyment, where we could interact and made the experience more fun. (P11)
380
+
381
+ A specific feature cited as creating an enjoyable experience was the peer pings. Four participants stated that they felt the peer pings were fun and helped encourage them to cooperate:
382
+
383
+ "chat icons" were a nice touch to encourage each other. (P11)
384
+
385
+ There was a sense of competition that reduced co-operative work in both tutorials. This was less so in [TwoTorials] because of the added features like thumbs up etc. (P8)
386
+
387
+ This final quote is particularly encouraging: it suggests that peer pings were able to reduce the sense of competition between the peers, which could otherwise stand in the way of the cooperative experience the system is designed to foster.
388
+
389
+ ### 6.5 Challenges Encountered
390
+
391
+ While participants were generally supportive of the features of TwoTorials, some features elicited mixed feelings. Specifically, the ability to directly modify content in a peer's workspace was cited as undesirable by some participants:
392
+
393
+ I do not want to interfere with my partner's screen. Annotation can be helpful though, and stickers [peer pings] make it more fun, but not direct interaction. (P1)
394
+
395
+ I did not feel comfortable editing my partner's workspace. (P9)
396
+
397
+ As we discuss in the next section, we believe this indicates the need for better social mechanisms to be built around these features, to ensure that they can only be used to provide help or edit a peer's workspace when that help is welcome, as suggested by prior work on collaboration boundaries [83].
398
+
399
+ More broadly than any individual feature, one of the participants expressed that he would prefer to work on his own, because he did not like being observed while he worked:
400
+
401
+ I personally like working on a tutorial alone and having others watching my work is kind of irritating. (P8)
402
+
403
+ This is important feedback, but in practice we believe that those who are interested in cooperative learning will choose to use TwoTorials or other systems like it, while those who are not can continue to use the many resources currently available to support individual learning.
404
+
405
+ ## 7 Discussion and Future Work
406
+
407
+ Overall, our evaluation indicated that TwoTorials helped participants to engage in cooperative learning, improved their performance, reduced effort and mental demand, and helped participants to maintain awareness of each other's progress in the tutorial. Feedback from participants also suggests that the system's features helped to create a supportive environment for cooperative learning, helped keep learner motivation high, and helped foster a feeling of cooperation rather than competition between the learners. These are promising findings for applying the cooperative software learning approach to step-by-step 3D design software tutorials.
408
+
409
+ While our study results are generally encouraging, we found that some participants did not appreciate allowing peers to directly edit one another's workspaces. This is important feedback, particularly because this study was conducted with peers who knew each other as friends or colleagues; it seems likely that learners would be more hesitant about this feature when working with peers with whom they do not have an existing relationship. To overcome this challenge, we believe that simple permission mechanisms could be put in place. For example, a user could be prevented from editing their peer's workspace unless that peer explicitly asks for help and grants editing permission. Editing permission could also be limited to a short period of time, or to a selected subset of objects in the workspace. This approach would fit with prior research on groupware and MOOCs, which suggests that each user should have their own territory [44], with permission and role mechanisms that enable users to control who can view and edit [77, 83]. Alternately, the system could enable a "forked demonstrations" paradigm, where a user could get a copy of their peer's current workspace that they could edit to demonstrate an operation, without making any lasting change to the peer's workspace itself.
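+
+ The following sketch illustrates what such a permission mechanism could look like: the peer may edit only after an explicit, time-limited grant, optionally scoped to selected objects. All names here are hypothetical design suggestions, not implemented TwoTorials features:
+
+ ```typescript
+ // Hypothetical time-limited, optionally object-scoped edit grant.
+ interface EditGrant {
+   grantedTo: string;    // peer id
+   expiresAt: number;    // epoch milliseconds
+   objectIds?: string[]; // optional: restrict editing to selected objects
+ }
+
+ class WorkspacePermissions {
+   private grant: EditGrant | null = null;
+
+   // Called when the user explicitly asks their peer for help.
+   requestHelp(peerId: string, durationMs = 60_000, objectIds?: string[]): void {
+     this.grant = { grantedTo: peerId, expiresAt: Date.now() + durationMs, objectIds };
+   }
+
+   // Checked before applying any remote edit to this workspace.
+   canEdit(peerId: string, objectId: string): boolean {
+     const g = this.grant;
+     if (!g || g.grantedTo !== peerId || Date.now() > g.expiresAt) return false;
+     return !g.objectIds || g.objectIds.includes(objectId);
+   }
+ }
+ ```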
410
+
411
+ ### 7.1 Matching with Remote Peers
412
+
413
+ In this paper we focused on investigating features that could enable a cooperative learning experience for distributed pairs of users working on step-by-step software tutorials. Having established the benefits of this approach, a next important question is how to match pairs of remote users to work together on tutorials. There are several interesting possibilities here. The results of our observational study suggest that it may not be a good idea to match users with large differences in overall experience and expertise, which could result in the more experienced user becoming bored. Instead, the system could try to match users who are at similar levels of experience but have complementary skill sets. It would be particularly interesting if the system could consider both the skills of the learners and the required skills for the tutorial, to create an experience where peers would need to work together and help one another to reach the goal. These skill-based matchmaking mechanics could be designed in a similar way to those available in multiplayer games [1, 66].
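+
+ As a sketch of this matching idea, the heuristic below pairs learners by penalizing experience gaps and rewarding joint coverage of a tutorial's required skills; the scoring function and greedy pairing are purely illustrative assumptions:
+
+ ```typescript
+ // Hypothetical skill-based matchmaking heuristic.
+ interface Learner {
+   id: string;
+   experience: number;  // overall experience level, e.g. 0-10
+   skills: Set<string>; // e.g. "camera navigation", "boolean operations"
+ }
+
+ function matchScore(a: Learner, b: Learner, required: Set<string>): number {
+   // Penalize large gaps in overall experience (avoids one peer getting bored).
+   const experienceGap = Math.abs(a.experience - b.experience);
+   // Reward pairs that jointly cover more of the tutorial's required skills.
+   let covered = 0;
+   for (const skill of required) {
+     if (a.skills.has(skill) || b.skills.has(skill)) covered += 1;
+   }
+   return covered - experienceGap;
+ }
+
+ // Greedy pairing: repeatedly take the highest-scoring remaining pair.
+ function pairLearners(pool: Learner[], required: Set<string>): Array<[Learner, Learner]> {
+   const remaining = [...pool];
+   const pairs: Array<[Learner, Learner]> = [];
+   while (remaining.length >= 2) {
+     let best: [number, number] = [0, 1];
+     let bestScore = -Infinity;
+     for (let i = 0; i < remaining.length; i++) {
+       for (let j = i + 1; j < remaining.length; j++) {
+         const score = matchScore(remaining[i], remaining[j], required);
+         if (score > bestScore) { bestScore = score; best = [i, j]; }
+       }
+     }
+     const [i, j] = best;
+     pairs.push([remaining[i], remaining[j]]);
+     remaining.splice(j, 1); // remove the higher index first so i stays valid
+     remaining.splice(i, 1);
+   }
+   return pairs;
+ }
+ ```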
414
+
415
+ ### 7.2 Additional Peers
416
+
417
+ Another interesting area for future work would be to consider how the cooperative software tutorial approach could accommodate more than two learners. An advantage of the approach we have adopted, where each user works on the tutorial in parallel, is that it could naturally support additional peers; in contrast, if more than two people were working in one shared workspace, it could quickly become unwieldy. The advantage of adding peers is more collective expertise, which could help get the group unstuck when they face challenges. However, this could also create additional conflicts between users, or situations where certain users pair off, leaving others out. These challenges make this an interesting area for investigation, and we see the potential for a scaled-up system to be used as a component of interactive 3D design MOOCs [38, 44].
418
+
419
+ ### 7.3 Beyond 3D Design Software
420
+
421
+ Although we focused on step-by-step tutorials for 3D design software, we believe that the features of TwoTorials could be easily adapted to work in other software domains with a strong visual element, such as photo editing or the creation of games using game engines (e.g., Unity). From a technical standpoint, our system could be used with minimal modifications with any web-based software application.
422
+
423
+ ### 7.4 Limitations
424
+
425
+ This work adds to a growing body of research on software learning (e.g., [13, 36, 51, 59, 78]) and provides insights into how step-by-step tutorial systems can be adapted to support remote cooperative learning. However, there are several limitations to this work which should be addressed in future research. First, our study was conducted with a small, specific sample (employees of a software company), which may limit the generalizability of the findings. A good next step would be to deploy TwoTorials in an online 3D design course, with remote students. Second, TwoTorials was compared against a baseline that offered minimal coordination features. This was intentional, in order to reveal which of TwoTorials' features were most useful for supporting collaboration, but future work should compare these features to those offered in state-of-the-art online collaborative learning solutions, such as free-form web curation tools [44, 61]. Third, prior research has shown that ethnocultural norms and backgrounds can influence the effectiveness of cooperative learning [49, 60, 86], so it is important to expand the evaluation of this type of system to a much larger and more diverse set of participants. Finally, we did not collect data on the long-term effects or value of our system in sustaining learner motivation or encouraging more extensive learning of a domain, which would be an interesting avenue for future work.
426
+
427
+ ## 8 Conclusion
428
+
429
+ This work has demonstrated an approach and a set of features for creating cooperative remote software tutorial systems. Our findings indicate that participants enjoy the cooperative learning experience that this approach enables. Overall, we see this work as a first step toward a future where anyone, anywhere can gain the learning benefits of working alongside peers on interesting and engaging projects.
430
+
431
+ ## References
432
+
433
+ 1. Sharad Agarwal and Jacob R. Lorch. 2009. Matchmaking for Online Games and Other Latency-sensitive P2P Systems. In Proceedings of the ACM SIGCOMM 2009 Conference on Data Communication (SIGCOMM '09), 315-326. https://doi.org/10.1145/1592568.1592605
434
+
435
+ 2. Sultan A. Alharthi, Ruth C. Torres, Ahmed S. Khalaf, Zachary O. Toups, Igor Dolgov, and Lennart E. Nacke. 2018. Investigating the Impact of Annotation Interfaces on Player Performance in Distributed Multiplayer Games. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18), 314:1-314:13. https://doi.org/10.1145/3173574.3173888
436
+
437
+ 3. Wahab Ali. 2020. Online and Remote Learning in Higher Education Institutes: A Necessity in light of COVID-19 Pandemic. Higher Education Studies 10, 3: 16. https://doi.org/10.5539/hes.v10n3p16
438
+
439
+ 4. Autodesk, Inc. 2018. Tinkercad. Retrieved September 17, 2018 from https://www.tinkercad.com/
440
+
441
+ 5. Davida Bloom. 2009. Collaborative Test Taking: Benefits for Learning and Retention. College Teaching 57, 4: 216- 220. https://doi.org/10.1080/87567550903218646
442
+
443
+ 6. Chantelle Bosch, Elsa Mentz, and Gerda Reitsma. 2020. Cooperative Learning as a Blended Learning Strategy: A Conceptual Overview. Emerging Techniques and Applications for Blended Learning in K-20 Classrooms, 65-87. https://doi.org/10.4018/978-1-7998-0242-6.ch004
444
+
445
+ 7. Andrea Bunt, Patrick Dubois, Ben Lafreniere, Michael A. Terry, and David T. Cormack. 2014. TaggedComments: Promoting and Integrating User Comments in Online Application Tutorials. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14), 4037-4046. https://doi.org/10.1145/2556288.2557118
446
+
447
+ 8. John M. Carroll and Mary Beth Rosson. 1987. Paradox of the Active User. In Interfacing Thought: Cognitive Aspects of Human-Computer Interaction. MIT Press, Cambridge, MA.
450
+
451
+ 9. Robyn Carston. 1999. Herbert H. Clark, Using language. Cambridge: Cambridge University Press, 1996. Pp. xi+432. Journal of Linguistics 35, 1: 167-222. Retrieved September 17, 2018 from https://www.cambridge.org/core/journals/journal-of-linguistics/article/herbert-h-clark-using-language-cambridge-cambridge-university-press-1996-pp-xi432/05AF0B5CBF76D03CBDB1DC4BFA648654
452
+
453
+ 10. Centers for Disease Control and Prevention. Coronavirus Disease 2019. Retrieved September 5, 2020 from https://www.cdc.gov/coronavirus/2019-ncov/index.html
454
+
455
+ 11. Yan Chen, Steve Oney, and Walter S. Lasecki. 2016. Towards Providing On-Demand Expert Support for Software Developers. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16), 3192-3203. https://doi.org/10.1145/2858036.2858512
456
+
457
+ 12. Victor Cheung, Y.-L. Betty Chang, and Stacey D. Scott. 2012. Communication Channels and Awareness Cues in Collocated Collaborative Time-critical Gaming. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work (CSCW '12), 569-578. https://doi.org/10.1145/2145204.2145291
458
+
459
+ 13. Pei-Yu Chi, Sally Ahn, Amanda Ren, Mira Dontcheva, Wilmot Li, and Björn Hartmann. 2012. MixT: Automatic Generation of Step-by-step Mixed Media Tutorials. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology (UIST '12), 93-102. https://doi.org/10.1145/2380116.2380130
460
+
461
+ 14. P. K. Chilana, N. Hudson, S. Bhaduri, P. Shashikumar, and S. Kane. 2018. Supporting Remote Real-Time Expert Help: Opportunities and Challenges for Novice 3D Modelers. In 2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), 157-166. https://doi.org/10.1109/VLHCC.2018.8506568
462
+
463
+ 15. Parmit K. Chilana, Andrew J. Ko, and Jacob O. Wobbrock. 2012. LemonAid: Selection-based Crowdsourced Contextual Help for Web Applications. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12), 1549-1558. https://doi.org/10.1145/2207676.2208620
464
+
465
+ 16. Atef Chorfi, Djalal Hedjazi, Sofiane Aouag, and Djalleleddine Boubiche. 2020. Problem-based collaborative learning groupware to improve computer programming skills. Behaviour & Information Technology 0, 1-20. https://doi.org/10.1080/0144929X.2020.1795263
466
+
467
+ 17. Laura Dabbish and Robert E. Kraut. 2004. Controlling Interruptions: Awareness Displays and Social Motivation for Coordination. In Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work (CSCW ’04), 182-191. https://doi.org/10.1145/1031607.1031638
468
+
469
+ 18. Sarah D'Angelo and Andrew Begel. 2017. Improving Communication Between Pair Programmers Using Shared Gaze Awareness. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17), 6245-6290. https://doi.org/10.1145/3025453.3025573
470
+
471
+ 19. Sarah D'Angelo and Darren Gergle. 2018. An Eye For Design: Gaze Visualizations for Remote Collaborative Work. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18), 349:1- 349:12. https://doi.org/10.1145/3173574.3173923
472
+
473
+ 20. Sir John Daniel. 2020. Education and the COVID-19 pandemic. Prospects 49: 91-96.
474
+
475
+ 21. Sebastian Deterding, Dan Dixon, Rilla Khaled, and Lennart Nacke. 2011. From Game Design Elements to Gamefulness: Defining "Gamification." In Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments (MindTrek '11), 9-15. https://doi.org/10.1145/2181037.2181040
478
+
479
+ 22. Pierre Dillenbourg. 1999. What do you mean by collaborative learning? P. Dillenbourg (Ed) Collaborative-learning: Cognitive and Computational Approaches.: 1- 19.
480
+
481
+ 23. Tao Dong, Mira Dontcheva, Diana Joseph, Karrie Karahalios, Mark Newman, and Mark Ackerman. 2012. Discovery-based Games for Learning Software. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12), 2083-2086. https://doi.org/10.1145/2207676.2208358
482
+
483
+ 24. Jeff Dyck and Carl Gutwin. 2002. Groupspace: A 3D Workspace Supporting User Awareness. In CHI '02 Extended Abstracts on Human Factors in Computing Systems (CHI EA BA B2), 502-503. https://doi.org/10.1145/506443.506450
484
+
485
+ 25. Volodymyr Dziubak, Ben Lafreniere, Tovi Grossman, Andrea Bunt, and George Fitzmaurice. 2018. Maestro: Designing a System for Real-Time Orchestration of 3D Modeling Workshops. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology (UIST '18). https://doi.org/10.1145/3242587.3242606
+
+ 26. Clarence A. Ellis, Simon J. Gibbs, and Gail Rein. 1991. Groupware: Some Issues and Experiences. Commun. ACM 34, 1: 39-58. https://doi.org/10.1145/99977.99987
+
+ 27. Mica R. Endsley. 1995. Toward a theory of situation awareness in dynamic systems. Human Factors 37, 1: 32-64. https://doi.org/10.1518/001872095779049543
+
+ 28. Jennifer Fernquist, Tovi Grossman, and George Fitzmaurice. 2011. Sketch-sketch Revolution: An Engaging Tutorial System for Guided Sketching and Application Learning. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (UIST '11), 373-382. https://doi.org/10.1145/2047196.2047245
+
+ 29. Susan R. Fussell, Robert E. Kraut, and Jane Siegel. 2000. Coordination of Communication: Effects of Shared Visual Context on Collaborative Work. In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work (CSCW '00), 21-30. https://doi.org/10.1145/358916.358947
+
+ 30. William W. Gaver, Abigail Sellen, Christian Heath, and Paul Luff. 1993. One is Not Enough: Multiple Views in a Media Space. In Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems (CHI '93), 335-341. https://doi.org/10.1145/169059.169268
+
+ 31. Philipp M. Gemmel, McKenna K. Goetz, Nicole M. James, Kate A. Jesse, and Britni J. Ratliff. 2020. Collaborative Learning in Chemistry: Impact of COVID-19. Journal of Chemical Education. https://doi.org/10.1021/acs.jchemed.0c00713
+
+ 32. Darren Gergle, Robert E. Kraut, and Susan R. Fussell. 2013. Using Visual Information for Grounding and Awareness in Collaborative Tasks. Human-Computer Interaction 28, 1: 1-39. https://doi.org/10.1080/07370024.2012.678246
+
+ 33. Barney G. Glaser and Anselm L. Strauss. 2009. The Discovery of Grounded Theory: Strategies for Qualitative Research. Transaction Publishers.
+
+ 34. Saul Greenberg. 1991. Personalizable groupware: Accommodating individual roles and group differences. In Proceedings of the Second European Conference on Computer-Supported Cooperative Work (ECSCW '91), Liam Bannon, Mike Robinson and Kjeld Schmidt (eds.). Springer Netherlands, Dordrecht. https://doi.org/10.1007/978-94-011-3506-1_2
+
+ 35. Tovi Grossman and George Fitzmaurice. 2010. ToolClips: An Investigation of Contextual Video Assistance for Functionality Understanding. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '10). https://doi.org/10.1145/1753326.1753552
+
+ 36. Tovi Grossman, George Fitzmaurice, and Ramtin Attar. 2009. A Survey of Software Learnability: Metrics, Methodologies and Guidelines. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '09), 649-658. https://doi.org/10.1145/1518701.1518803
+
+ 37. Tovi Grossman, Justin Matejka, and George Fitzmaurice. 2010. Chronicle: Capture, Exploration, and Playback of Document Workflow Histories. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology (UIST '10), 143-152. https://doi.org/10.1145/1866029.1866054
+
+ 38. Philip J. Guo, Juho Kim, and Rob Rubin. 2014. How Video Production Affects Student Engagement: An Empirical Study of MOOC Videos. In Proceedings of the First ACM Conference on Learning @ Scale Conference (L@S '14), 41-50. https://doi.org/10.1145/2556325.2566239
+
+ 39. Carl Gutwin and Saul Greenberg. 1998. Design for Individuals, Design for Groups: Tradeoffs Between Power and Workspace Awareness. In Proceedings of the 1998 ACM Conference on Computer Supported Cooperative Work (CSCW '98), 207-216. https://doi.org/10.1145/289444.289495
+
+ 40. Carl Gutwin, Saul Greenberg, and Mark Roseman. 1996. Workspace Awareness in Real-Time Distributed Groupware: Framework, Widgets, and Evaluation. In People and Computers XI, 281-298.
+
+ 41. Carl Gutwin, Mark Roseman, and Saul Greenberg. 1996. A Usability Study of Awareness Widgets in a Shared Workspace Groupware System. In Proceedings of the 1996 ACM Conference on Computer Supported Cooperative Work (CSCW '96), 258-267. https://doi.org/10.1145/240080.240298
+
+ 42. Carl Gutwin, Gwen Stark, and Saul Greenberg. 1995. Support for Workspace Awareness in Educational Groupware. In The First International Conference on Computer Support for Collaborative Learning (CSCL '95), 147-156. https://doi.org/10.3115/222020.222126
+
+ 43. William A. Hamilton, Oliver Garretson, and Andruid Kerne. 2014. Streaming on Twitch: Fostering Participatory Communities of Play Within Live Mixed Media. In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems (CHI '14), 1315-1324. https://doi.org/10.1145/2556288.2557048
+
+ 44. William A. Hamilton, Nic Lupfer, Nicolas Botello, Tyler Tesch, Alex Stacy, Jeremy Merrill, Blake Williford, Frank R. Bentley, and Andruid Kerne. 2018. Collaborative Live Media Curation: Shared Context for Participation in Online Learning. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18), 555:1-555:14. https://doi.org/10.1145/3173574.3174129
+
+ 45. Sandra G. Hart. 2006. Nasa-Task Load Index (NASA-TLX); 20 Years Later. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 50, 9: 904-908. https://doi.org/10.1177/154193120605000909
+
+ 46. Sandra G. Hart and Lowell E. Staveland. 1988. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. In Advances in Psychology, Peter A. Hancock and Najmedin Meshkati (eds.). North-Holland, 139-183. https://doi.org/10.1016/S0166-4115(08)62386-9
+
+ 47. Justin B. Houseknecht and Lucy K. Bates. 2020. Transition to Remote Instruction Using Hybrid Just-in-Time Teaching, Collaborative Learning, and Specifications Grading for Organic Chemistry 2. Journal of Chemical Education. https://doi.org/10.1021/acs.jchemed.0c00749
+
+ 48. Nathaniel Hudson, Benjamin Lafreniere, Parmit K. Chilana, and Tovi Grossman. 2018. Investigating How Online Help and Learning Resources Support Children's Use of 3D Design Software. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18), 257:1-257:14. https://doi.org/10.1145/3173574.3173831
+
+ 49. Eric A. Hurley, Brenda A. Allen, and A. Wade Boykin. 2009. Culture and the Interaction of Student Ethnicity with Reward Structure in Group Learning. Cognition and Instruction 27, 2: 121-146. https://doi.org/10.1080/07370000902797346
+
+ 50. Nen-Chen Richard Hwang, Gladie Lui, and Marian Yew Jen Wu Tong. 2008. Cooperative Learning in a Passive Learning Environment: A Replication and Extension. Issues in Accounting Education 23, 1: 67-75. https://doi.org/10.2308/iace.2008.23.1.67
+
+ 51. Nikhita Joshi, Justin Matejka, Fraser Anderson, Tovi Grossman, and George Fitzmaurice. 2020. MicroMentor: Peer-to-Peer Software Help Sessions in Three Minutes or Less. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20), 1-13. https://doi.org/10.1145/3313831.3376230
+
+ 52. Juho Kim, Philip J. Guo, Daniel T. Seaton, Piotr Mitros, Krzysztof Z. Gajos, and Robert C. Miller. 2014. Understanding In-video Dropouts and Interaction Peaks in Online Lecture Videos. In Proceedings of the First ACM Conference on Learning @ Scale Conference (L@S '14), 31-40. https://doi.org/10.1145/2556325.2566237
+
+ 53. Robert E. Kraut, Darren Gergle, and Susan R. Fussell. 2002. The Use of Visual Information in Shared Visual Spaces: Informing the Development of Virtual Co-presence. In Proceedings of the 2002 ACM Conference on Computer Supported Cooperative Work (CSCW '02), 31-40. https://doi.org/10.1145/587078.587084
+
+ 54. Robert E. Kraut, Mark D. Miller, and Jane Siegel. 1996. Collaboration in Performance of Physical Tasks: Effects on Outcomes and Communication. In Proceedings of the 1996 ACM Conference on Computer Supported Cooperative Work (CSCW '96), 57-66. https://doi.org/10.1145/240080.240190
+
+ 55. Hideaki Kuzuoka. 1992. Spatial Workspace Collaboration: A SharedView Video Support System for Remote Collaboration Capability. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '92), 533-540. https://doi.org/10.1145/142750.142980
+
+ 56. Ben Lafreniere, Andrea Bunt, Matthew Lount, and Michael Terry. 2013. Understanding the Roles and Uses of Web Tutorials. In Seventh International AAAI Conference on Weblogs and Social Media. Retrieved September 18, 2018 from https://www.aaai.org/ocs/index.php/ICWSM/ICWSM13/paper/view/6094
+
+ 57. Ben Lafreniere and Tovi Grossman. 2018. Blocks-to-CAD: A Cross-Application Bridge from Minecraft to 3D Modeling. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology (UIST '18), 637-648. https://doi.org/10.1145/3242587.3242602
+
+ 58. Benjamin Lafreniere, Tovi Grossman, and George Fitzmaurice. 2013. Community Enhanced Tutorials: Improving Tutorials with Multiple Demonstrations. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13), 1779-1788. https://doi.org/10.1145/2470654.2466235
+
+ 59. Wei Li, Tovi Grossman, and George Fitzmaurice. 2014. CADament: A Gamified Multiplayer Software Tutorial System. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14), 3369-3378. https://doi.org/10.1145/2556288.2556954
+
+ 60. G. William Lucker, David Rosenfield, Jev Sikes, and Elliot Aronson. 1976. Performance in the Interdependent Classroom: A Field Study. American Educational Research Journal 13, 2: 115-123. https://doi.org/10.3102/00028312013002115
+
+ 61. Nic Lupfer, Andruid Kerne, Andrew M. Webb, and Rhema Linder. 2016. Patterns of Free-form Curation: Visual Thinking with Web Content. In Proceedings of the 24th ACM International Conference on Multimedia (MM '16), 12-21. https://doi.org/10.1145/2964284.2964303
+
+ 62. Jean MacMillan, Elliot E. Entin, and Daniel Serfaty. 2004. Communication overhead: The hidden cost of team cognition. In Team cognition: Understanding the factors that drive process and performance. American Psychological Association, Washington, DC, US, 61-82. https://doi.org/10.1037/10690-004
+
+ 63. Arwen M. Marker and Amanda E. Staiano. 2014. Better Together: Outcomes of Cooperation Versus Competition in Social Exergaming. Games for Health Journal 4, 1: 25-30. https://doi.org/10.1089/g4h.2014.0066
+
+ 64. David McConnell. 2000. Implementing Computing Supported Cooperative Learning. London: Kogan Page. Retrieved September 4, 2018 from https://hal.archives-ouvertes.fr/hal-00702948
+
+ 65. Joanne M. McInnerney and Tim S. Roberts. 2009. Collaborative and Cooperative Learning. Encyclopedia of Distance Learning, Second Edition: 319-326. https://doi.org/10.4018/978-1-60566-198-8.ch046
+
+ 66. Marçal Mora-Cantallops and Miguel-Ángel Sicilia. 2018. MOBA games: A literature review. Entertainment Computing 26: 128-138. https://doi.org/10.1016/j.entcom.2018.02.005
+
+ 67. Benedikt Morschheuser, Alexander Maedche, and Dominic Walter. 2017. Designing Cooperative Gamification: Conceptualization and Prototypical Implementation. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW '17). https://doi.org/10.1145/2998181.2998272
+
+ 68. Benedikt Morschheuser, Marc Riar, Juho Hamari, and Alexander Maedche. 2017. How games induce cooperation? A study on the relationship between game features and we-intentions in an augmented reality game. Computers in Human Behavior 77: 169-183. https://doi.org/10.1016/j.chb.2017.08.026
+
+ 69. Ahmed E. Mostafa, Kori Inkpen, John C. Tang, Gina Venolia, and William A. Hamilton. 2016. SocialStreamViewer: Guiding the Viewer Experience of Multiple Streams of an Event. In Proceedings of the 19th International Conference on Supporting Group Work (GROUP '16), 287-291. https://doi.org/10.1145/2957276.2957286
+
+ 70. Michael P. A. Murphy. 2020. COVID-19 and emergency eLearning: Consequences of the securitization of higher education for post-pandemic pedagogy. Contemporary Security Policy 41, 3: 492-505. https://doi.org/10.1080/13523260.2020.1761749
+
+ 71. Martin A. Nowak and Karl Sigmund. 2000. Cooperation versus Competition. Financial Analysts Journal 56, 4: 13-22. Retrieved September 17, 2018 from https://www.jstor.org/stable/4480255
+
+ 72. Jennifer K. Olsen, Daniel M. Belenky, Vincent Aleven, and Nikol Rummel. 2014. Using an Intelligent Tutoring System to Support Collaborative as well as Individual Learning. In Intelligent Tutoring Systems (Lecture Notes in Computer Science), 134-143. https://doi.org/10.1007/978-3-319-07221-0_16
+
+ 73. Jennifer K. Olsen, Nikol Rummel, and Vincent Aleven. 2017. Learning Alone or Together? A Combination Can Be Best! In The 12th International Conference on Computer Supported Collaborative Learning (CSCL 2017).
+
+ 74. Jiazhi Ou, Susan R. Fussell, Xilin Chen, Leslie D. Setlock, and Jie Yang. 2003. Gestural Communication over Video Stream: Supporting Multimodal Interaction for Remote Collaborative Physical Tasks. In Proceedings of the 5th International Conference on Multimodal Interfaces (ICMI '03), 242-249. https://doi.org/10.1145/958432.958477
+
+ 75. Theodore Panitz. 1999. The Motivational Benefits of Cooperative Learning. New Directions for Teaching and Learning 1999, 78: 59-67. https://doi.org/10.1002/tl.7806
+
+ 76. Paul John Peña and Dickson Lim. 2020. Learning With Friends: A Rational View of Remote Learning with Network Externalities in the Time of COVID-19. Social Science Research Network, Rochester, NY. Retrieved September 5, 2020 from https://papers.ssrn.com/abstract=3621056
+
+ 77. David Pinelle, Mutasem Barjawi, Miguel Nacenta, and Regan Mandryk. 2009. An Evaluation of Coordination Techniques for Protecting Objects and Territories in Tabletop Groupware. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '09), 2129-2138. https://doi.org/10.1145/1518701.1519025
+
+ 78. Suporn Pongnumkul, Mira Dontcheva, Wilmot Li, Jue Wang, Lubomir Bourdev, Shai Avidan, and Michael F. Cohen. 2011. Pause-and-play: Automatically Linking Screencast Video Tutorials with Applications. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology (UIST '11), 135-144. https://doi.org/10.1145/2047196.2047213
+
+ 79. Jiten Rama and Judith Bishop. 2006. A Survey and Comparison of CSCW Groupware Applications. In Proceedings of the 2006 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on IT Research in Developing Countries (SAICSIT '06), 198-205. https://doi.org/10.1145/1216262.1216284
+
+ 80. Vernellia R. Randall. 1999. Increasing Retention and Improving Performance: Practical Advice on Using Cooperative Learning in Law Schools. Thomas M. Cooley Law Review 16: 201. Retrieved from https://heinonline.org/HOL/Page?handle=hein.journals/tmclr16&id=235&div=&collection=
+
+ 81. Justin Reich, Christopher J. Buttimer, Alison Fang, Garron Hillaire, Kelley Hirsch, Laura Larke, Joshua Littenberg-Tobias, Roya Madoff Moussapour, Alyssa Napier, Meredith Thompson, and Rachel Slama. 2020. Remote Learning Guidance From State Education Agencies During the COVID-19 Pandemic: A First Look. EdArXiv. https://doi.org/10.35542/osf.io/437e2
+
+ 82. Eduardo Salas, Terry L. Dickinson, Sharolyn A. Converse, and Scott I. Tannenbaum. 1992. Toward an understanding of team performance and training. In Teams: Their training and performance. Ablex Publishing, Westport, CT, US, 3-29.
+
+ 83. Stacey D. Scott, M. Sheelagh T. Carpendale, and Kori M. Inkpen. 2004. Territoriality in Collaborative Tabletop Workspaces. In Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work (CSCW '04), 294-303. https://doi.org/10.1145/1031607.1031655
+
+ 84. Magy Seif El-Nasr, Bardia Aghabeigi, David Milam, Mona Erfani, Beth Lameman, Hamid Maygoli, and Sang Mah. 2010. Understanding and Evaluating Cooperative Games. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '10), 253-262. https://doi.org/10.1145/1753326.1753363
+
+ 85. Shlomo Sharan. 1980. Cooperative Learning in Small Groups: Recent Methods and Effects on Achievement, Attitudes, and Ethnic Relations. Review of Educational Research 50, 2: 241-271. https://doi.org/10.3102/00346543050002241
+
+ 86. Robert E. Slavin. 1983. When does cooperative learning increase student achievement? Psychological Bulletin 94, 3: 429-445. https://doi.org/10.1037/0033-2909.94.3.429
+
+ 87. Zachary O. Toups, Jessica Hammer, William A. Hamilton, Ahmad Jarrah, William Graves, and Oliver Garretson. 2014. A Framework for Cooperative Communication Game Mechanics from Grounded Theory. In Proceedings of the First ACM SIGCHI Annual Symposium on Computer-human Interaction in Play (CHI PLAY '14), 257-266. https://doi.org/10.1145/2658537.2658681
+
+ 88. Michael B. Twidale. 2005. Over the Shoulder Learning: Supporting Brief Informal Learning. Computer Supported Cooperative Work (CSCW) 14, 6: 505-547. https://doi.org/10.1007/s10606-005-9007-7
+
+ 89. Deepika Vaddi, Zachary Toups, Igor Dolgov, Rina Wehbe, and Lennart Nacke. 2016. Investigating the Impact of Cooperative Communication Mechanics on Player Performance in Portal 2. In Proceedings of the 42nd Graphics Interface Conference (GI '16), 41-48. https://doi.org/10.20380/GI2016.06
+
+ 90. S. Valin, A. Francu, H. Trefftz, and I. Marsic. 2001. Sharing viewpoints in collaborative virtual environments. In Proceedings of the 34th Annual Hawaii International Conference on System Sciences, 12 pp. https://doi.org/10.1109/HICSS.2001.926213
+
+ 91. Katherine M. Van Heuvelen, G. William Daub, and Hal Van Ryswyk. 2020. Emergency Remote Instruction during the COVID-19 Pandemic Reshapes Collaborative Learning in General Chemistry. Journal of Chemical Education. https://doi.org/10.1021/acs.jchemed.0c00691
+
+ 92. Jason Wuertz, Sultan A. Alharthi, William A. Hamilton, Scott Bateman, Carl Gutwin, Anthony Tang, Zachary Toups, and Jessica Hammer. 2018. A Design Framework for Awareness Cues in Distributed Multiplayer Games. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18), 243:1-243:14. https://doi.org/10.1145/3173574.3173817
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/H2GICxFVaGc/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,425 @@
+ § TWOTORIALS: A REMOTE COOPERATIVE TUTORIAL SYSTEM FOR 3D DESIGN SOFTWARE
+
+ Anonymous Author(s)
+
+ § ABSTRACT
+
+ Step-by-step tutorials have emerged as a key means for learning complex software, but they are typically designed for individuals learning independently. In contrast, cooperative learning, where learners can help each other as they work, is a fundamental pedagogical technique with many established benefits. To extend these benefits to learning 3D-design software, this work investigates the design of remote cooperative software tutorial systems. We first conduct an observational study of dyads of participants working on 3D-design tutorials, which reveals a range of potential benefits, challenges, and strategies for cooperation. Our findings inform the design of TwoTorials, a cooperative step-by-step tutorial system that helps pairs of remote users establish shared 3D context, maintain awareness of each other's activities, and coordinate their efforts. A user study reveals several benefits to this approach, including enhanced cooperation between learners, reduced effort and mental demand, increased awareness of peer activities, and higher subjective engagement with the tutorial.
+
+ Keywords: Software learning, 3D modeling, remote learning
+
+ § 1 INTRODUCTION
+
+ Users starting out in 3D design software face a range of learnability challenges [36], which have motivated the development of a variety of innovative software learning systems (e.g., [13, 36, 51, 59, 78]). In particular, step-by-step tutorials have emerged as a key means for learning complex software, and tutorials of this type exist for nearly all popular applications. In some ways, these tutorials replicate the experience of working on non-trivial projects using the software, with the tutorial providing a clear goal and scaffolding the user's skills and abilities [56]. However, this format of tutorials is primarily designed for individuals learning independently, so users cannot benefit from over-the-shoulder learning [88] and other advantages that come from learning alongside other people, such as occurs in workplace settings. This is unfortunate, because education research has established cooperative learning as a fundamental pedagogical technique [22] with benefits in terms of learner motivation [18, 75], retention [5, 80], and effective knowledge gain and transfer [50, 64].
+
+ In this paper, we are interested in how the benefits of cooperative learning can be made available on-demand to remote learners of 3D-design software, both to address some of the above challenges with tutorials, and to extend the benefits of over-the-shoulder learning to users who would not be able to benefit from it otherwise (e.g., users who are learning at home, either informally or through online courses). To this end, we developed TwoTorials (Figure 1), a tutorial system that allows pairs of remote users to complete a tutorial in parallel, with mechanisms to facilitate beneficial learning interactions. Through the design and development of TwoTorials, and two studies, our work addresses the following research questions: (1) What are the salient components of cooperative learning for step-by-step 3D-design tutorials, including potential benefits, challenges, and common strategies? and (2) What are the appropriate design principles and interface features to support these components?
+
+ To address these questions, we first conducted an observational study of four dyads completing step-by-step tutorials in Tinkercad, a popular 3D solid modeling application. The results of this study revealed potential benefits, challenges, and coordination strategies between users cooperatively completing step-by-step tutorials, such as the need to rapidly establish shared context to support their communication, and a hesitance to help one another if not explicitly asked.
+
+ Based on this initial study, we derived a set of five design principles for cooperative software tutorial systems and instantiated these in our TwoTorials prototype. The system provides mechanisms to help establish shared context, synchronize user progress, and facilitate non-disruptive communication between the peers.
+
+ To evaluate TwoTorials, we ran a second user study with six dyads of participants, comparing a baseline system with minimal coordination features to the TwoTorials system. Our findings, based on a within-subjects mixed-methods user study, indicate that TwoTorials helped participants to complete tutorials faster, significantly reduced their effort and mental demand, and helped them to maintain a higher level of awareness of each other's progress.
+
+ Building on previous work in software learnability and cooperative learning, and our interest in fostering peer help for 3D design software tutorials, this work makes three main contributions. First, we contribute a deeper understanding of the potential benefits, challenges, and common behaviors surrounding cooperative learning of 3D design software. Second, based on these findings, we present a set of design principles for remote cooperative tutorial systems, and instantiate these principles in what we believe to be the first cooperative software tutorial system. Finally, a user study contributes an understanding of the benefits of such a system for learning feature-rich software, and points to directions for further work, including generalizations to larger peer groups and other software domains.
+
+ § 2 RELATED WORK
+
+ This work is related to prior research on software learning and tutorial systems, cooperative learning and distributed teamwork, and cooperative interfaces for multiplayer games. We review each of these areas below.
+
+ § 2.1 SOFTWARE LEARNING AND TUTORIAL SYSTEMS
+
+ Early research on software learning established a tendency for learners to abandon printed manuals and other learning materials that take time away from their primary task [8, 23, 36]. This has led to a rich body of research on systems and tools to support learning of software applications [13, 25, 78]. In particular, prior work has demonstrated the benefits of step-by-step tutorials and gamified tutorial systems [23, 59], as well as systems that allow users to learn in the context of realistic tasks [28, 35, 78].
+
+ A number of research projects have explored methods for harnessing community-created content, or improvements to learning content contributed by other learners, such as improved workflows [11, 37, 58], multimedia demonstrations of tutorial steps [13], or comments on tutorial content [7]. Recent research has also proposed new approaches for how groups learn 3D design [25, 48]. For example, Maestro [25] enables facilitators of 3D modelling workshops to track the progress of their classrooms in real-time, and provides simple mechanisms for providing help to students when needed.
+
+ While the above approaches appear to be valuable, research in the education community has demonstrated a range of benefits to active cooperative learning approaches, in which learners are able to directly interact with one another [14, 15, 52] (discussed in detail in the next section). To provide such an active learning experience, some work on software tutorial systems has integrated elements from games [21, 57, 59]. For example, CADament [59] enables users to learn 3D design software skills by observing the workflows of opponents in a competitive multiplayer learning game. The system enables competitors to engage in "over the shoulder" learning, but it is not focused on creating an environment where learners can work on tasks together to help and benefit from each other. Currently, no step-by-step tutorial systems exist that explicitly support remote cooperation. To fill this gap, we build on this body of prior work but focus on supporting active cooperative approaches for learning 3D design software with other users, and propose the first known step-by-step tutorial system specifically designed to support remote peer learning.
+
+ Figure 1: The TwoTorials system.
+
+ § 2.2 REMOTE AND COOPERATIVE LEARNING
+
+ Remote learning is becoming increasingly prevalent in our world. The global coronavirus (COVID-19)¹ pandemic has demonstrated the need to establish effective remote learning environments [3, 10, 70], and is forcing educators and learners to rethink pedagogical methods and approaches [6, 20, 81]. A key question raised in this context is how to enable remote learning that preserves the social aspects of in-person learning, giving learners opportunities to engage and interact with each other, which has been shown to aid in motivation and creating positive learning experiences [16, 31, 47, 76, 91]. In the present work, we aim to support remote synchronous coordinated learning for the specific learning resource of step-by-step tutorials.
+
+ Collaborative learning is an educational approach in which learners work together to solve a problem or complete a task, recognizing that learning is a naturally social activity [22, 65]. There are many approaches to foster collaborative learning, and significant literature showing its effectiveness both in co-located and remote learning environments [22, 44, 72]. Cooperative learning is a particular type of collaborative learning in which a set of processes help people interact together to accomplish specific goals, helping themselves and others to learn [64, 65, 85]. In terms of specific benefits, social activities have been shown to increase the motivation to learn from others, and to result in effective knowledge gain and transfer [16, 18, 75]. Critically, this kind of social learning does not need to consist of continuous interaction between the learners [64, 73] - simply being able to work together in a social environment provides the opportunity for both passive and active learning. For example, "over-the-shoulder learning" can occur from observing another learner while they are completing a task, or by actively engaging in completing the task together [18, 88]. When compared to individual and competitive learning, cooperative learning has been demonstrated to be particularly effective for sustaining learner motivation [63, 71].
+
+ In terms of particular mechanisms for enabling remote cooperative learning, prior work has shown that cooperative learning is most effective when learners organize their activities, synchronize their effort, and maintain shared situational awareness [29, 50, 82]. Research on distributed teams has also shown the importance of awareness mechanisms, to enable team members to inform one another of their status [40, 92]. Although team or group awareness can be easily maintained in co-located collaborative environments, it can be difficult in remote collaboration [40]. Thus, groupware research has focused on interfaces and techniques that facilitate communication, increase group awareness, and enable cooperation, such as capturing eye gaze [18, 19] and other awareness cues [12, 92]. A full literature review of groupware research is beyond the scope of this paper but we point readers to existing surveys on the topic [26, 40, 79].
+
+ § 2.3 COOPERATIVE GAMING
+
+ Games are a prominent example of systems that cultivate cooperative behavior [68, 84, 87]. Cooperative games provide different interfaces and mechanics to facilitate multiplayer interaction [1, 45, 55]. In cooperative games, mutual understanding of the objectives between the players is essential to their success. Players must maintain awareness of each other and establish a common ground for communication [9]. Cooperative games also provide players with a variety of explicit and implicit communication mechanics, including awareness cues and cooperative communication mechanics [2, 12, 67, 87, 92]. These mechanics help teams communicate with each other and maintain a high level of awareness. In this work, we draw on the body of past work on multiplayer games and gamification to develop interfaces to foster effective cooperative learning for a qualitatively different domain - step-by-step tutorials for 3D design software.
+
+ In summary, our current work contributes to the understanding of cooperative learning for the domain of step-by-step tutorials for 3D design software, and the TwoTorials prototype system provides specific mechanisms to enable shared awareness and support cooperative learning in this domain. To the best of our knowledge, this work represents the first application of a cooperative active learning approach to step-by-step software tutorial systems.
+
+ ¹ The 2019/2020 global novel coronavirus (COVID-19) pandemic [10]
+
+ § 3 OBSERVATIONAL LAB STUDY
+
+ To inform the design of our cooperative step-by-step tutorial system, we conducted an observational study with pairs of participants. Our main goal was to understand how peers cooperate with each other to complete this type of tutorial, the challenges they face, how they synchronize their work, and how they encourage and support each other.
+
+ § 3.1 STUDY PROCEDURE
+
+ Each pair of participants completed two step-by-step tutorials for Tinkercad drawn from those provided in-product in the software (Balloon Powered Car and Roman Dome), each lasting ~30 minutes, followed by an individual survey and a short semi-structured interview.
+
+ We intentionally tested a range of different cooperative tutorial setups (Figure 2), to gain a broad set of insights into the benefits and challenges that arise in different forms of collaboration. These included co-located vs. distributed setups (simulated by a partition between the participants, which permitted them to talk to each other, but still required the use of screen sharing to view each other's workspaces), and separate vs. shared workspace setups (i.e., whether both participants were working on one project together, or working on the same project in parallel). Our decision to use a partition to simulate a distributed condition was designed to reduce the complexity of the study setup, and is an approach that has been used in prior work [18]. In this study, each dyad of participants completed two tutorials across one of the axes of the four setups shown in Figure 2, enabling them to comment in greater detail on the effect of that axis.
+
+ The experimenter took observations and provided participants with assistance with technical difficulties but did not help the participants with completing the tutorial instructions. Video and audio recordings of the study and post-study interviews were transcribed and analyzed for common themes. Each study session lasted ~60 minutes total.
+
+ Figure 2: The four cooperative tutorial setups that were tested.
+
+ § 3.2 ANALYSIS
+
+ Interview transcripts and observations were analyzed using methods drawn from grounded theory [33]. Specifically, open coding was used to label transcript data, and emerging themes and patterns were identified by the first author and then shared and discussed with the broader research team. The themes that emerged relate to the potential benefits to users from cooperative learning of 3D design software, challenges experienced by peers when learning 3D design software together, and common strategies used to cooperatively learn.
+
+ § 3.3 PARTICIPANTS
+
+ Four dyads (8 participants total; 6 male, 2 female; mean age 38.4 years, SD 10.3) were recruited via an email to employees of a large software company. As dyads volunteered together, they are best considered as coworkers or friends. Two dyads were all male, and two were mixed (one male, one female). All participants reported having completed a bachelor's degree. All participants were screened for prior experience with 3D modeling software; 1/8 participants had no experience, 3/8 participants had minimal experience, 3/8 participants had some experience, and 1/8 participants had extensive experience. The most common 3D modeling applications used previously by participants were Maya, Blender, and Alias. Only one participant had prior experience with Tinkercad. Each participant received a $25 gift card as compensation for their participation.
+
+ § 3.4 RESULTS
+
+ We begin by discussing our observations of the tradeoffs of separate vs. shared workspaces and co-located vs. distributed workspaces, and then discuss our findings on the benefits, challenges, and strategies used by participants to cooperatively complete step-by-step tutorials.
+
+ § 3.4.1 SEPARATE VS. SHARED WORKSPACE
+
+ Neither the separate nor shared workspace setups were revealed to be clearly superior for enabling cooperative learning, with both showing advantages and disadvantages. Having a shared workspace forced the peers to collaborate, which was beneficial, but created a situation where the participant that was not 'driving' the system could become frustrated, and feel like they were missing out on learning:
+
+ When I was just watching it was frustrating to not be able to take actions myself. We were trying to figure out how the interface works, and I want to be able to create my own objects to explore the manipulators and what is possible. (P5)
+
+ Conversely, participants reported that working on separate workspaces created a feeling of working in parallel on separate tasks:
+
+ We had the video sharing, but we were both doing our own thing, so we only looked at each other's views to make sure our work looked somewhat similar. It didn't really seem like a cooperative effort [in the distributed separate workspace condition], more like we were just doing the same thing at the same time. (P3)
+
+ This observation is consistent with prior work on personalizable groupware that can support individual and group activities [34, 55]. A design approach that emerged from this observation was the idea of a hybrid system, which would allow each of the peers to benefit from actively working on the tutorial individually, while encouraging cooperation and peer help.
+
+ § 3.4.2 CO-LOCATED VS. DISTRIBUTED
+
+ Contrasting the co-located and distributed setups revealed a range of challenges to coordinating effort when participants were distributed. When co-located, it was much easier for participants to make spatial references to parts of the 3D environment and to assist one another by looking at each other's screens, pointing at parts of their peer's screen, or even taking over the mouse of their peer to rotate the camera or make simple changes to 3D objects. Consistent with prior work on collaborative remote physical tasks [29, 54], cooperative help-giving and receiving in a 3D design tutorial was much more difficult when participants were distributed. Participants were not always able to clearly understand each other due to a mismatch in their respective views of the 3D environment, and providing verbal instructions became complex without the ability to ground the instructions in spatial references, or to make direct changes:
+
+ Explaining how I want my partner to try using the manipulator with words is much slower than just being able to do it myself. (P4)
+
+ These observations suggest that additional coordination mechanisms are needed to enable productive cooperative learning in distributed setups.
+
+ § 3.4.3 BENEFITS OF COOPERATIVE LEARNING OF 3D DESIGN SOFTWARE
+
+ In terms of the benefits to cooperative learning of 3D design software, participants reported having an overall positive experience, and suggested that this approach allowed them to gain additional insights beyond the tutorial content:
+
+ When we both were doing the tutorial, it just felt that, wow, that moved on very quickly, and [I] actually still learned something, and it might not [have] been what was intended to be learned through the steps, but, like, the other person's insights. (P2)
+
+ Participants also pointed out the benefit of being able to quickly detect errors and identify if they were misunderstanding the tutorial instructions:
+
+ I think working together has a lot of advantages. You can detect errors very quickly and keep on making progress. (P5)
+
+ Participants also indicated that cooperatively working on a tutorial helped them to accelerate their learning and sustain motivation to learn the tutorial content:
+
+ It accelerated the learning since it was a shared experience and we could communicate what our successes and failures were to each other. (P8)
+
+ Overall, these observations point to several potential benefits of cooperative learning of 3D design software, which are worthy of further investigation.
+
+ § 3.4.4 STRATEGIES FOR COOPERATION
+
+ Our observations and interviews indicated several common strategies that peers used to cooperate with one another. During help-seeking instances, we observed that participants in distributed setups started by establishing common ground and shared 3D context with their peer as a first step when providing assistance. For example, they would ask questions such as, "which view are you on - top, side, bottom?" And then they would change to that view and proceed to make recommendations and provide help:
+
+ Got to first understand the language and perspective and then give feedback after. (P4)
+
+ This strategy was more common when participants were distributed. Related to this theme, we observed that peers would frequently communicate which step in the tutorial they were on, or signal to their peer that they are moving on to the next step, as suggested by the following quote in response to a question on what techniques or practices P4 was using to work together and synchronize their efforts with their peer:
+
+ Make sure to communicate that we were on the same step and sub-step. (P4)
+
+ This strategy suggests that mechanisms for establishing shared context between learners could be beneficial, particularly if they can support the sharing of 3D viewpoints and the step of the tutorial a learner is currently on.
+
+ A final beneficial strategy we observed was that looking at their peer's workspace provided learners with insights into their own work and how it could be improved. This over-the-shoulder learning [88] was observed in multiple instances where peers would spend time observing each other completing a step of the tutorial and then attempt it themselves.
+
+ I can look at the other person's work and say, mine doesn't look anything like that, and then you know there is a problem. (P1)
+
+ In terms of design guidance, providing explicit support for this learning strategy could be valuable.
+
+ § 3.4.5 CHALLENGES IN COOPERATIVE LEARNING OF 3D DESIGN SOFTWARE
+
+ Finally, we observed several challenges that can come from working cooperatively on step-by-step tutorials for 3D design software. For example, participants struggled to maintain awareness of each other's activities, due to a lack of shared context:
+
+ Sometimes it is difficult when you can not see what I'm seeing, you would be like, oh it is like this, and I'm like no it is not, we both are seeing different things, and we are arguing about nothing, that was frustrating. (P5)
+
+ While maintaining awareness is a known issue for groupware applications [27, 40-42, 92], these challenges appeared to be exacerbated by working in a 3D environment, where each peer had a different camera orientation on the scene, making it difficult to establish shared context when help is needed:
+
+ At some times, we both were oriented in different ways, and I'm like something is wrong here, and she is like, oh no we just need to adjust the orientations to match. (P1)
+
+ Another challenge reported by participants was that it was difficult to determine when feedback or help was needed or would be welcomed by their peer. This was especially prominent in the distributed setups, where awareness of activities between peers was less strong. We believe that the domain of 3D design software exacerbates this problem, because the editing history of a model, or any mistakes made on previous steps, are not obvious to a peer observing the model "over the shoulder" as the user works on it.
+
+ Finally, while not necessarily a challenge specific to 3D design software, peers found it difficult to synchronize their progress in the step-by-step tutorials. In the distributed setups, participants would verbally share which step they were on in the tutorial, to help each other maintain constant awareness of each other's progress, and to signal if one of them was falling behind and might need help. This was less of a problem in the co-located setups, where participants could glance at their peer's screen to get the same information:
+
+ I found that my partner jumped to the next step. I needed to confirm what step he was on so that I could help with the next step. (P4)
+
+ Past work has suggested that this kind of communication overhead can be distracting [62], which can detract from the learning process. Thus, it may be beneficial to design features to reduce this "orienting communication" overhead.
+
+ § 4 DESIGN PRINCIPLES
+
+ The results of our observational study complement prior literature and provide an understanding of the main challenges and breakdowns faced by peers learning 3D design software through step-by-step tutorials. In particular, our findings are consistent with known issues surrounding control ownership and co-located use of groupware systems, but also reveal important and unique insights specifically related to both cooperative use of step-by-step tutorials, and learning challenges for 3D software. Pulling together the observations and insights from the study, we suggest a set of five design principles for cooperative step-by-step tutorial systems for 3D design software:
+
+ § 4.1 HELP ESTABLISH SHARED 3D CONTEXT (D1)
+
+ The system should assist with establishing and maintaining shared 3D context between peers, to make giving and receiving help easier. The need to establish and maintain shared 3D context has been identified in prior work [24, 30, 90], but it presents a particular challenge for step-by-step tutorials, where each user may have a different camera position and orientation on a separate 3D workspace, whose 3D content may be at a different stage of the tutorial than their peer's. This makes it difficult to establish the context necessary to reference 3D objects or meaningfully discuss their orientation.
+
+ § 4.2 BALANCE INDEPENDENT ACTION WITH ENCOURAGING COLLABORATION (D2)
+
+ Two competing challenges we observed were peers becoming frustrated with not being able to 'drive' in the shared workspace condition, and peers not engaging with each other in the separate workspace condition. Prior work has shown that giving users power over navigation, manipulation, and representation within shared workspaces supports collaboration, but has its tradeoffs [39]. The system should balance the need for independence, while also encouraging collaboration between the peers, to create a beneficial cooperative learning experience where both users are engaged with the tutorial task.
+
+ § 4.3 HIGH-LEVEL AWARENESS OF PROGRESS (D3)
+
+ The system should provide learners with high-level awareness of where their peer is in the tutorial steps and in the 3D workspace, and help make learners aware of any challenges or setbacks faced by their peer. This is particularly relevant for 3D environments, where it is more difficult to maintain awareness and establish mutual orientation and view between peers [24, 30].
+
+ § 4.4 NON-DISRUPTIVE COMMUNICATION MECHANISMS (D4)
+
+ The system should provide non-disruptive communication modalities that simplify and complement the beneficial cooperative learning practices that we observed. Prior work suggests that communication can be less disruptive when the timing and communication method are selected appropriately [17].
+
+ § 4.5 SYNCHRONIZE PROGRESS (D5)
+
+ To increase the likelihood of beneficial cooperation, and to avoid the situation where one learner quickly finishes the tutorial and becomes bored, the system should encourage peers to work together and synchronize their progress through the tutorial steps.
+
+ Guided by the principles above and previous research in this area, we developed TwoTorials, a cooperative step-by-step tutorial system designed for pairs of learners, which we present next.
+
+ § 5 THE TWOTORIALS SYSTEM
+
+ TwoTorials offers a cooperative learning environment for two distributed users. The pair of users work cooperatively to learn 3D design software by each completing the same step-by-step tutorial in parallel (Figure 1). The system includes a set of features to support coordination and establish shared context within the tutorial. In the current work, we designed the system to work with Tinkercad, a popular web-based 3D solid modeling tool [4]. In this section, we start with a high-level overview of TwoTorials, then we highlight the main features, noting in parentheses the relevant design principles each of these features is intended to address, and citing any prior work that influenced the designed features.
+
+ § 5.1 SYSTEM OVERVIEW
+
+ Each remote peer gets an individual workspace, as well as access to a constantly-updating and editable view of their peer's workspace, enabling users to observe their peers and actively assist them if needed. Peers can communicate, help, and encourage each other using both verbal and non-verbal communication modalities. The system also provides implicit awareness cues to help learners maintain awareness of their own progress in the tutorial and whether their peers are falling behind and may need help; and can enforce a level of interdependence between the learners as a means to encourage them to work together and help each other out. Finally, as a user completes each step, their peer is provided with a screen recording of their efforts on that step, providing further material for the peers to reference when helping one another. The sections that follow describe the above features in detail.
+
+ Figure 3: User and peer workspaces. (A) the user's workspace; (B) a constantly-updating view of the peer's screen; (C) an expanded view of the peer's screen (accessible by clicking the small view).
+
+ § 5.2 SEAMLESS VIEW, TRANSITION, AND EDITING BETWEEN WORKSPACES
+
+ In TwoTorials, each user gets a small, constantly-updating view of their peer's screen (Figure 3B) displayed above their own workspace (Figure 3A). Clicking the small view of their peer's screen expands it to full screen (Figure 3C) and allows the user to directly edit their peer's workspace. Through these mechanisms, learners are able to constantly monitor their peer's progress, enabling over-the-shoulder learning [88], and helping to establish shared context (D1). The ability to make changes to their peer's workspace in the expanded view enables a user to directly provide assistance or demonstrate editing operations on the peer's 3D model (D2). Prior work has shown these kinds of seamless transition mechanisms from individual to shared spaces to be important for facilitating collaboration [29, 32, 53, 83].
+
+ Figure 4: Drawing Annotations enable the user to draw over the 3D workspace, supporting cooperation and conversational grounding between peers [2, 44].
202
+
203
+ § 5.3 VERBAL AND NON-VERBAL COMMUNICATION FEATURES
204
+
205
+ The system provides in-tutorial voice and text chat, allowing peers to verbally communicate or send text messages to each other (D4). To complement these communication methods, the user can also create free-hand drawing annotations on top of both workspaces in the form of free-hand lines and simple shapes (Figure 4) [2, 74]. These non-verbal communication help users to ground their conversations or direct their peer's attention to particular parts of the UI or 3D workspace (D1). Prior work has shown such mechanisms to be effective for keeping users engaged in collaborative activities in games $\left\lbrack {2,{89}}\right\rbrack$ and online courses [44].
206
+
207
+ Users can also send peer pings, a set of predefined visual messages that provide simple, non-disruptive communication between users. Clicking on one of these pings (Figure 5), sends a visual message to their peers that lasts for a couple of seconds, displayed on top of their peer's workspace. Each of these pings is designed to indicate a situation where cooperation is needed, such as having a question, being stuck, or expressing the need to move faster (D5). Peer pings are also supported to celebrate success or provide encouragement, such as sending fireworks, high-fives, and thumbs ups (D2). This type of lightweight communication mechanism has been shown to be effective in encouraging participation in gaming and live streaming contexts [43, 44, 69].
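+
+ To make the ping mechanism concrete, the sketch below shows one plausible way to model and deliver peer pings over a generic real-time channel. This is an illustration only: TwoTorials itself is a Unity application, and all names here (PingKind, PeerPing, sendPing, listenForPings, the Channel interface) are our own assumptions rather than the system's actual code.
+
+ ```typescript
+ // Minimal sketch of a peer-ping protocol (illustrative; not the actual
+ // TwoTorials implementation, which is built on Unity networking).
+
+ type PingKind = "question" | "stuck" | "speed-up" | "fireworks" | "high-five" | "thumbs-up";
+
+ interface PeerPing {
+   kind: PingKind;
+   fromUser: string;
+   sentAt: number; // epoch milliseconds
+ }
+
+ // Assumed transport: any reliable real-time channel (e.g., a WebSocket).
+ interface Channel {
+   send(data: string): void;
+   onMessage(handler: (data: string) => void): void;
+ }
+
+ const PING_DISPLAY_MS = 2000; // pings are transient overlays, not chat history
+
+ function sendPing(channel: Channel, fromUser: string, kind: PingKind): void {
+   const ping: PeerPing = { kind, fromUser, sentAt: Date.now() };
+   channel.send(JSON.stringify({ type: "peer-ping", ping }));
+ }
+
+ function listenForPings(
+   channel: Channel,
+   show: (ping: PeerPing) => void,
+   hide: (ping: PeerPing) => void
+ ): void {
+   channel.onMessage((data) => {
+     const msg = JSON.parse(data);
+     if (msg.type !== "peer-ping") return;
+     const ping = msg.ping as PeerPing;
+     show(ping);                                    // overlay on the peer-view
+     setTimeout(() => hide(ping), PING_DISPLAY_MS); // auto-dismiss after ~2 s
+   });
+ }
+ ```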
+
+ Figure 5: Peer Pings are predefined visual messages that provide lightweight communication between peers.
+
+ § 5.4 IMPLICIT AWARENESS CUES
+
+ To enable users to maintain awareness of their peer's progress (D3), each user has a simple avatar that moves through the list of tutorial steps as they proceed through the tutorial content (Figure 6). A timer indicates the amount of time the user has spent on the current step, further fostering peer awareness. Finally, to provide spatial awareness [41, 42] of a peer's activities within a step, the system displays the peer's mouse cursor in the user's workspace.
+
+ The non-verbal awareness cues described above allow users to maintain awareness of each other's activities in a lightweight manner, without the need to constantly communicate their status explicitly (D5). Similar mechanisms have been shown to be important for enabling users to maintain shared awareness [40, 92], especially in distributed environments, which lack the sensory cues that ease collaboration in co-located settings [12].
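+
+ One lightweight way to realize these cues is for each client to periodically broadcast a small status record that the peer's UI renders as the avatar position, the time-on-step timer, and the remote cursor. The sketch below is a hypothetical illustration under that assumption; the field and function names are ours, not TwoTorials'.
+
+ ```typescript
+ // Hypothetical periodic status broadcast for the implicit awareness cues
+ // (avatar position in the step list, time-on-step, and remote cursor).
+
+ interface AwarenessStatus {
+   user: string;
+   step: number;                     // current tutorial step (drives the avatar)
+   secondsOnStep: number;            // drives the "time spent on this step" timer
+   cursor: { x: number; y: number }; // normalized 0..1 workspace coordinates
+ }
+
+ interface Channel {
+   send(data: string): void;
+ }
+
+ const BROADCAST_INTERVAL_MS = 250; // frequent enough for a smooth remote cursor
+
+ function startAwarenessBroadcast(
+   channel: Channel,
+   user: string,
+   current: () => { step: number; stepEnteredAt: number; cursor: { x: number; y: number } }
+ ): () => void {
+   const timer = setInterval(() => {
+     const c = current();
+     const status: AwarenessStatus = {
+       user,
+       step: c.step,
+       secondsOnStep: Math.floor((Date.now() - c.stepEnteredAt) / 1000),
+       cursor: c.cursor,
+     };
+     channel.send(JSON.stringify({ type: "awareness", status }));
+   }, BROADCAST_INTERVAL_MS);
+   return () => clearInterval(timer); // call to stop broadcasting
+ }
+ ```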
+
+ Figure 6: Tutorial-progress awareness cues, including the user avatar and an indicator of time spent on the current step.
+
+ § 5.5 PROGRESS CONTROL MECHANISM
+
+ Before starting a tutorial, users can select one of three levels of step-synchronization, which affect how much the system enforces synchronization of activities between the peers (Figure 7). At the most extreme, the Strict setting prevents each user from moving on to the next step until both have finished the current step (indicated by clicking a button). The Moderate setting enables a user to move one step ahead of their peer, and if they try to move any further they are prompted to wait. Finally, the Free setting puts no restrictions on movement through the steps. These mechanisms provide system-imposed synchronization of progress (D5), primarily motivated by our observational study results.
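+
+ The gating logic implied by these three settings can be captured in a few lines. The following sketch is our own formulation with hypothetical names; it assumes each user reports the step they are viewing and the highest step they have marked finished.
+
+ ```typescript
+ // Sketch of the three step-synchronization policies described above.
+
+ type SyncMode = "strict" | "moderate" | "free";
+
+ interface Progress {
+   currentStep: number;   // the step the user is viewing (1-based)
+   completedUpTo: number; // highest step the user has clicked "done" on
+ }
+
+ // May `me` advance from me.currentStep to me.currentStep + 1?
+ function canAdvance(me: Progress, peer: Progress, mode: SyncMode): boolean {
+   switch (mode) {
+     case "strict":
+       // Neither user moves on until both have finished the current step.
+       return peer.completedUpTo >= me.currentStep;
+     case "moderate": {
+       // A user may run at most one step ahead of their peer;
+       // beyond that the UI prompts them to wait.
+       const stepAfterAdvance = me.currentStep + 1;
+       return stepAfterAdvance - peer.currentStep <= 1;
+     }
+     case "free":
+       return true; // no restrictions on movement through the steps
+   }
+ }
+ ```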
226
+
227
228
+
229
+ Figure 7: The progress control mechanism enables users to control the progress of each peer in the tutorial.
230
+
231
+ § 5.6 WORKFLOW REPLAY
232
+
233
+ The system records a video of each user's screen as they work on a step and makes this recording available to their peer upon proceeding to the next step. By clicking a replay icon, the peer can view a video showing the exact steps the user took to complete that step (Figure 8). Past work has shown that this kind of short demonstration video can be particularly valuable for learning design software [37, 59], and this feature also frees a user from having to explain the exact process they followed - they can simply prompt their peer to check the recorded video (D2).
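+
+ As a rough illustration of this feature, per-step recordings could be managed along the following lines (a hypothetical sketch around an assumed `Recorder` interface; the actual system uses a Unity screen-capture plugin, described in Section 5.8):
+
+ ```typescript
+ // Hypothetical sketch: tie screen recording to tutorial-step transitions.
+ interface Recorder {
+   start(): void;
+   stop(): Promise<Blob>; // resolves with the finished clip for the step
+ }
+
+ class StepReplayManager {
+   private clips = new Map<number, Blob>(); // step index -> recorded clip
+
+   constructor(private recorder: Recorder) {
+     this.recorder.start(); // begin recording the first step
+   }
+
+   async onStepCompleted(
+     step: number,
+     shareWithPeer: (step: number, clip: Blob) => void
+   ): Promise<void> {
+     const clip = await this.recorder.stop();
+     this.clips.set(step, clip);
+     shareWithPeer(step, clip); // peer's replay icon becomes active for this step
+     this.recorder.start();     // begin recording the next step
+   }
+
+   getReplay(step: number): Blob | undefined {
+     return this.clips.get(step); // played back when the replay icon is clicked
+   }
+ }
+ ```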
234
+
235
236
+
237
+ Figure 8: Video replay window.
238
+
239
+ § 5.7 ACCESS TO ONLINE HELP RESOURCES
240
+
241
+ The system provides quick in-application access to online and community-based help resources (e.g., the Tinkercad help center). This enables users to access help without disengaging from the tutorial experience (D2).
242
+
243
+ § 5.8 SYSTEM IMPLEMENTATION
244
+
245
+ TwoTorials was implemented in two parts. First, the step-by-step tutorial system was built as a Unity application. This enabled us to quickly build a multi-user system by taking advantage of Unity's networking capabilities to provide a reliable, low-latency connection between the peers for sending media streams, including voice and text chat, user progress data, shared annotations, and peer pings. Screen recording and playback were implemented using a Unity plugin that enables real-time video and audio capture and streaming. The Tinkercad application was embedded using a Unity web-browser component, which mirrored a locally-running version of Tinkercad. The second part of the system consisted of a modified version of the Tinkercad application that added the required concurrent editing features.
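+
+ For concreteness, the kinds of peer-to-peer messages described in this section could be modeled as a discriminated union along these lines (a hypothetical sketch, not the system's actual wire format):
+
+ ```typescript
+ // Hypothetical message types exchanged between peers in a system like TwoTorials.
+ type PeerMessage =
+   | { kind: "textChat"; text: string }
+   | { kind: "ping"; ping: "question" | "stuck" | "faster" | "fireworks" | "highFive" | "thumbsUp" }
+   | { kind: "progress"; step: number; secondsOnStep: number }
+   | { kind: "cursor"; x: number; y: number }
+   | { kind: "annotation"; shape: "freehand" | "line" | "ellipse"; points: Array<{ x: number; y: number }> };
+
+ function handleMessage(msg: PeerMessage): void {
+   switch (msg.kind) {
+     case "textChat":   /* append to the chat panel */ break;
+     case "ping":       /* show a transient overlay for a few seconds */ break;
+     case "progress":   /* move the peer avatar along the step list */ break;
+     case "cursor":     /* draw the peer cursor in the workspace view */ break;
+     case "annotation": /* render the drawing on top of the workspace */ break;
+   }
+ }
+ ```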
246
+
247
+ § 5.9 TUTORIAL FORMAT, AUTHORING, AND PROGRESS TRACKING
248
+
249
+ Each tutorial step consists of text and images (Figure 1, left). We adopted this format to match, as closely as possible, the in-product tutorials available in Tinkercad, which we used for the baseline condition in our evaluation study, described in the next section. In terms of tutorial authoring, text was manually entered, and figures were added to a folder that was read by the system. TwoTorials tracks progress solely based on navigation through the tutorial steps (users explicitly clicking "next step"). More sophisticated tracking of Tinkercad tool usage or of the 3D content being created is an interesting avenue for future work.
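+
+ The tutorial format described here implies a very simple step schema, roughly like the following (hypothetical field names):
+
+ ```typescript
+ // Hypothetical representation of an authored tutorial.
+ interface TutorialStep {
+   title: string;
+   text: string;     // manually entered instruction text
+   images: string[]; // image filenames read from the tutorial's folder
+ }
+
+ interface Tutorial {
+   name: string;
+   steps: TutorialStep[];
+ }
+
+ // Progress is tracked purely by explicit navigation: clicking "next step".
+ function advance(tutorial: Tutorial, currentStep: number): number {
+   return Math.min(currentStep + 1, tutorial.steps.length - 1);
+ }
+ ```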
250
+
251
+ § 6 EVALUATION
252
+
253
+ We conducted a user study to understand users' reactions to the TwoTorials system and its cooperative features, and to gain further insights into the cooperative experience of step-by-step tutorials.
254
+
255
+ § 6.1 STUDY PROCEDURE AND DESIGN
256
+
257
+ The study followed a within-subjects mixed-methods design, with each dyad of participants completing two step-by-step tutorials, one using TwoTorials and the other using Tinkercad's built-in tutorial interface. These tutorials were the same as those used in the previous observational study, which had revealed them to be of about the same level of difficulty. For the TwoTorials condition, the progress control setting was set to 'Free'. Although in-application voice chat was implemented in the system, the study setup meant it was not needed: participants were simply instructed to talk with each other over the divider (similar to our observational study and to methods used in prior work [18]).
258
+
259
+ In the baseline condition, participants used the Tinkercad tutorial along with a live screencast of their workspace, shared with their peer through Google Hangouts. We provided this capability in the baseline condition because it seemed unrealistic for users to collaborate with no view of their peer's workspace whatsoever. Participants in this condition were also able to talk with each other over the divider.
260
+
261
+ To control for ordering and learning effects, condition order and the mapping of tutorials to conditions were fully counterbalanced.
262
+
263
+ At the start of the study, participants provided informed consent and completed a questionnaire on demographics and prior 3D design software experience. Next, the experimenter introduced the study system and the available cooperative features before allowing the participants to work on the tutorial. The experimenter did not help participants work through the tutorial instructions but did provide limited assistance in response to technical difficulties with the study system. After completing each condition, a set of Likert-style questions was administered on the overall experience, ease of following the tutorial, learning, and usefulness of the cooperative features. The NASA-TLX questionnaire was also administered to assess workload [45, 46]. At the end of the study session, a post-study open-ended questionnaire was administered. The study took ~60 minutes total to complete.
264
+
265
+ § 6.2 PARTICIPANTS
266
+
267
+ Six dyads (12 participants: 10 male, 2 female; mean age 35.8 years, SD 7.9) were recruited via an email to employees of a large software company. Each dyad was either friends or coworkers, with 1 dyad all female and 5 all male. All participants were screened for prior experience with 3D modeling software: 1/12 participants had no experience, 4/12 had minimal experience, 5/12 had some experience, and 2/12 had extensive experience. The most common 3D modeling applications previously used by participants were Fusion 360, Maya, and SolidWorks. Only two participants had prior experience with Tinkercad. Each participant received a $25 gift card as compensation for their participation.
268
+
269
+ § 6.3 RESULTS
270
+
271
+ We begin by presenting the main quantitative findings, comparing TwoTorials to the baseline. We then present results from the post-condition questionnaire, and the usage and subjective ratings for TwoTorials features. Finally, we discuss our qualitative and semi-structured interview findings.
272
+
273
274
+
275
+ Figure 9: Completion time in seconds and NASA-TLX results (lower is better).
276
+
277
+ § 6.3.1 PERFORMANCE RESULTS - TWOTORIALS VS. BASELINE
278
+
279
+ A Wilcoxon Signed-Rank Test showed that dyads took significantly less time to complete the tutorial together using TwoTorials (M = 18.5) than with the Baseline (M = 23) (z = 2.831, p < .05) (Figure 9). These findings provide evidence that the features of TwoTorials helped participants to complete the tutorial together more quickly.
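+
+ For readers unfamiliar with the test, the following is a minimal sketch of a Wilcoxon signed-rank computation using the normal (z) approximation; this is a generic illustration (omitting tie and continuity corrections), not the authors' analysis code:
+
+ ```typescript
+ // Generic Wilcoxon signed-rank test for paired samples a and b.
+ function wilcoxonSignedRank(a: number[], b: number[]): { W: number; z: number } {
+   // Paired differences; zeros are discarded, per the standard procedure.
+   const diffs = a.map((v, i) => v - b[i]).filter(d => d !== 0);
+   const n = diffs.length;
+   // Rank differences by absolute value, averaging ranks for ties.
+   const sorted = diffs.map(d => ({ d, abs: Math.abs(d) })).sort((x, y) => x.abs - y.abs);
+   const ranks: number[] = new Array(n).fill(0);
+   let i = 0;
+   while (i < n) {
+     let j = i;
+     while (j + 1 < n && sorted[j + 1].abs === sorted[i].abs) j++;
+     const avgRank = (i + j + 2) / 2; // ranks are 1-based
+     for (let k = i; k <= j; k++) ranks[k] = avgRank;
+     i = j + 1;
+   }
+   // W+: sum of the ranks of the positive differences.
+   const W = sorted.reduce((sum, s, k) => sum + (s.d > 0 ? ranks[k] : 0), 0);
+   // Normal approximation for the z statistic (no tie correction).
+   const mean = (n * (n + 1)) / 4;
+   const sd = Math.sqrt((n * (n + 1) * (2 * n + 1)) / 24);
+   return { W, z: (W - mean) / sd };
+ }
+ ```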
280
+
281
+ § 6.3.2 COGNITIVE LOAD RESULTS - TWOTORIALS VS. BASELINE
282
+
283
+ For the cognitive load results (Figure 9), a Wilcoxon Signed-Rank Test showed significantly lower ratings for effort (z = 2.668, p < .01), mental demand (z = 2.201, p < .05), and frustration (z = 2.254, p < .05) in the TwoTorials condition as compared to the baseline condition. These findings provide compelling evidence that the features of TwoTorials helped reduce the cognitive load on participants. For the rest of the TLX subscales, we found no significant difference.
284
+
285
+ Figure 10: Ratings on the learning experience questionnaire (higher is better; error bars show standard error). Statements: A = I learned something from this tutorial; B = Co-operatively working with my peer helped me to learn the tutorial content; C = I learned something new from my peer, beyond what was in the tutorial itself; D = I helped my peer to learn the tutorial content; E = Working on this tutorial cooperatively was an enjoyable experience.
296
+
297
+ § 6.3.3 QUESTIONNAIRE RESULTS - TWOTORIALS VS. BASELINE
298
+
299
+ When asked which of the two conditions they preferred overall, TwoTorials was rated higher by 5/12 participants compared to the in-application tutorial, with 6/12 participants expressing no preference and 1/12 preferring the baseline condition. While this suggests a preference for the TwoTorials system, a Wilcoxon signed-rank test did not show this difference to be statistically significant.
300
+
301
+ For each condition, we asked participants a set of questions on what they learned from the tutorial experience (Figure 10). For most of the questions we found no significant difference, but a Wilcoxon Signed-Rank Test showed a significant difference in medians for the statement "I learned something from this tutorial", favoring the TwoTorials condition over the baseline condition (z = 2.000, p < .05).
302
+
303
+ We also asked participants a set of questions on various other aspects of the tutorial-following experience (Figure 11). A Wilcoxon Signed-Rank Test determined that there was a significantly higher median for the TwoTorials system for "maintaining awareness of your peer's activities" as compared to the baseline (z = 2.197, p < .05). We did not find a significant difference for the other questions in this group.
304
+
305
+ Figure 11: Ratings of the tutorial systems for various statements (A = Following the tutorial instructions; B = Helping your peer with completing parts of the tutorial; C = Receiving help from your peer on parts of the tutorial; D = Communicating with your peer; E = Maintaining awareness of your peer's activities; F = Using the tutorial system). Error bars show standard error.
314
+
315
+ § 6.3.4 TWOTORIALS FEATURES
316
+
317
+ For the TwoTorials condition, we analyzed how many times each feature was used by participants, and asked participants to rate the usefulness of the individual features. In terms of usage, participants switched to their peer's workspace an average of 4.8 times (SD = 1.94) and edited their peer's workspace directly 2.2 times (SD = 0.75). Participants annotated each other's workspaces 2.3 times (SD = 1.03) and sent 4.6 peer pings (SD = 2.42). Considering that dyads in the TwoTorials condition took less than 25 minutes to complete the tutorial, these numbers suggest that the features of TwoTorials were used frequently by participants.
318
+
319
+ The ratings of usefulness for the individual features of TwoTorials are shown in Figure 12. Participants generally reported the features to be useful. There was strong support for the voice chat, the ability to view the peer's workspace, and the ability to directly edit the peer's workspace. The only feature to receive a strong negative rating for usefulness was the text chat, which is likely because the voice chat provided a much richer and more convenient communication medium.
320
+
321
322
+
323
+ Figure 12: Rating of individual TwoTorials features.
324
+
325
+ § 6.4 PARTICIPANT FEEDBACK AND OBSERVATIONS
326
+
327
+ At the end of the study session, we asked participants to contrast the experience of working with TwoTorials and the baseline tutorial system. Qualitative data were analyzed using methods drawn from grounded theory [33]. Specifically, open coding was used to label the data and emerging themes were identified by the first author and then shared and discussed with the broader research team.
328
+
329
+ § 6.4.1 IMPROVED COMMUNICATION, AWARENESS, AND COORDINATION
330
+
331
+ Participants reported being able to coordinate with each other more effectively using the TwoTorials system, with smoother information flow between peers. Participants noted that having a constant view into their peer's workspace helped them solve problems more effectively without breaking the flow of working on the tutorial:
332
+
333
+ Having the constant visual of my peer helped quite a bit to solve common problems on my workflow instead of having to stop the flow to find the assistance. (P11)
334
+
335
+ Participants also appreciated the ease with which they could switch from viewing their own workspace to that of their peer:
336
+
337
+ The live view of your companion was a big plus. Easily being able to switch to their view and affect their workspace is a big plus as well. (P12)
338
+
339
+ Participants reported that TwoTorials helped them to maintain an ongoing awareness of the other user, and this helped to encourage dialog:
340
+
341
+ The first system [TwoTorials] reminded me to think about discussing, because the view of the other screen was always present [...] it helped slightly by encouraging dialog. (P9)
342
+
343
+ Participants also described using the shared awareness features to ground their discussions with their peer:
344
+
345
+ It helped to see where the person was so we could say "look at my screen this is what you're supposed to have." (P6)
346
+
347
+ § 6.4.2 A COOPERATIVE LEARNING ENVIRONMENT
348
+
349
+ A second common theme was that the TwoTorials features created an environment where cooperative learning was supported. Along these lines, one feature cited by participants was the ability to directly edit their peer's workspace. We observed several occasions where one peer would provide help by directly making changes in the workspace of their peer. Participants reported that this was an efficient way to help each other:
350
+
351
+ The fact that I could work directly on my peer's workspace in [TwoTorials], let me help him more efficiently. (P7)
352
+
353
+ Participants also expressed appreciation for the annotation features, and highlighted how it created more of a "lesson experience" than a tutorial:
354
+
355
+ In [TwoTorials], the fact that my peer could chime in and add his notes in real time made it more of a lesson experience than a tutorial - the chance to clarify and question each other as we followed the steps was a very useful addition. (P11)
356
+
357
+ This quote is particularly encouraging because it suggests the features of the TwoTorials system were able to change the experience to one where cooperation and helping each other was more natural. Along similar lines, P8 suggested that TwoTorials could be used in formal educational settings to enable teacher-student interactions:
358
+
359
+ In [TwoTorials], getting help was much easier. I would imagine a TA or teacher helping students through that system. (P8)
360
+
361
+ Participants also commented that they took advantage of the expertise of their peer less in the baseline condition:
362
+
363
+ If I got stuck, the person knew exactly where I was (they were there too or had just been there) and most likely had the same problems. I used the person less [in the baseline condition]. (P6)
364
+
365
+ Overall, this feedback provides validation that the TwoTorials features encouraged cooperation and helped to create an environment that supports cooperative learning.
366
+
367
+ § 6.4.3 MOTIVATING AND ENJOYABLE EXPERIENCE
368
+
369
+ Finally, participants reported enjoying the cooperative tutorial experience (in both conditions), and found it to be engaging:
370
+
371
+ Working cooperatively was fun and kept me engaged. Also, I learned some tips from the other person. (P9)
372
+
373
+ While participants reported enjoying the experience of cooperating in both tutorials, some participants noted that TwoTorials enhanced this aspect of the experience:
374
+
375
+ In [TwoTorials], the second layer of interaction added a different [kind] of enjoyment, where we could interact and made the experience more fun. (P11)
376
+
377
+ A specific feature cited as creating an enjoyable experience was the peer pings. Four participants stated that they felt the peer pings were fun and helped encourage them to cooperate:
378
+
379
+ "chat icons" were a nice touch to encourage each other. (P11)
380
+
381
+ There was a sense of competition that reduced co-operative work in both tutorials. This was less so in [TwoTorials] because of the added features like thumbs up etc. (P8)
382
+
383
+ This final quote is particularly encouraging: it suggests that peer pings were able to reduce the sense of competition between the peers, which could stand in the way of the cooperative experience the system is designed to foster.
384
+
385
+ § 6.5 CHALLENGES ENCOUNTERED
386
+
387
+ While participants were generally supportive of the features of TwoTorials, some features elicited mixed feelings. Specifically, the ability to directly modify content in a peer's workspace was cited as undesirable by some participants:
388
+
389
+ I do not want to interfere with my partner's screen. Annotation can be helpful though, and stickers [peer pings] make it more fun, but not direct interaction. (P1)
390
+
391
+ I did not feel comfortable editing my partner's workspace. (P9)
392
+
393
+ As we discuss in the next section, we believe this indicates the need for better social mechanisms to be built around these features, to ensure that they can only be used to provide help or edit a peer's workspace when that help is welcome, as suggested by prior work on collaboration boundaries [83].
394
+
395
+ More broadly than any individual feature, one of the participants expressed that he would prefer to work on his own, because he did not like being observed while he worked:
396
+
397
+ I personally like working on a tutorial alone and having others watching my work is kind of irritating. (P8)
398
+
399
+ This is important feedback, but in practice we believe that those who are interested in cooperative learning will choose to use TwoTorials or other systems like it, while those who are not can continue to use the many resources currently available to support individual learning.
400
+
401
+ § 7 DISCUSSION AND FUTURE WORK
402
+
403
+ Overall, our evaluation indicated that TwoTorials helped participants to engage in cooperative learning, improved their performance, reduced effort and mental demand, and helped participants to maintain awareness of each other's progress in the tutorial. Feedback from participants also suggests that the system's features helped to create a supportive environment for cooperative learning, helped keep learner motivation high, and helped foster a feeling of cooperation rather than competition between the learners. These are promising findings for applying the cooperative software learning approach to step-by-step 3D design software tutorials.
404
+
405
+ While our study results are generally encouraging, we found that some participants did not appreciate allowing peers to directly edit one another's workspaces. This is important feedback, particularly because this study was conducted with peers who knew each other as friends or colleagues - it seems likely that learners would be more hesitant about this feature if they were working with peers with whom they do not have an existing relationship. To overcome this challenge, we believe that simple permission mechanisms could be put in place. For example, a user could be prevented from editing their peer's workspace unless that peer explicitly asks for help and provides editing permission. Editing permission could also be limited to a short period of time, or to a selected subset of objects in the workspace. This approach would fit with prior research on groupware and MOOCs, which suggests that each user should have their own territory [44], with permission and role mechanisms to enable users to control who can view and edit [77, 83]. Alternatively, the system could enable a "forked demonstrations" paradigm, where a user could get a copy of their peer's current workspace that they could edit to demonstrate an operation, without making any lasting change to the peer's workspace itself.
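+
+ A minimal sketch of the time- and scope-limited permission idea suggested above (a design suggestion, not an implemented feature; all names are hypothetical):
+
+ ```typescript
+ // Hypothetical sketch of short-lived, scoped editing permission.
+ interface EditGrant {
+   expiresAt: number;       // epoch ms; grants are short-lived
+   objectIds?: Set<string>; // optional: restrict editing to a subset of objects
+ }
+
+ class PermissionGate {
+   private grants = new Map<string, EditGrant>(); // peerId -> active grant
+
+   // Called when a user explicitly asks their peer for help.
+   requestHelp(peerId: string, durationMs: number, objectIds?: string[]): void {
+     this.grants.set(peerId, {
+       expiresAt: Date.now() + durationMs,
+       objectIds: objectIds ? new Set(objectIds) : undefined,
+     });
+   }
+
+   // Checked before applying a peer's edit to this user's workspace.
+   canEdit(peerId: string, objectId: string): boolean {
+     const grant = this.grants.get(peerId);
+     if (!grant || Date.now() > grant.expiresAt) return false;
+     return !grant.objectIds || grant.objectIds.has(objectId);
+   }
+ }
+ ```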
406
+
407
+ § 7.1 MATCHING WITH REMOTE PEERS
408
+
409
+ In this paper we focused on investigating features that could enable a cooperative learning experience for distributed pairs of users working on step-by-step software tutorials. Having established the benefits of this approach, a next important question is how to match pairs of remote users to work together on tutorials. There are several interesting possibilities here. The results of our observational study suggest that it may not be a good idea to match users with large differences in overall experience and expertise, which could result in the more experienced user becoming bored. Instead, the system could try to match users who are at similar levels of experience but have complementary skill sets. It would be particularly interesting if the system could consider both the skills of the learners and the required skills for the tutorial, to create an experience where peers would need to work together and help one another to reach the goal. These skill-based matchmaking mechanics could be designed in a similar way to those available in multiplayer games [1, 66].
410
+
411
+ § 7.2 ADDITIONAL PEERS
412
+
413
+ Another interesting area for future work would be to consider how the cooperative software tutorial approach could accommodate more than two learners. An advantage of the approach we have adopted, where each user works in parallel on the tutorial, is that it could naturally support additional peers - in contrast, if more than two people were working on one shared workspace, it could quickly become unwieldy. The advantage of adding more peers is more collective expertise, which could help the group get unstuck when facing challenges. However, this could also create additional conflicts between users, or situations where certain users pair off, leaving others out. These challenges make this an interesting area for investigation, and we see the potential for a scaled-up system to be used as a component of interactive 3D design MOOCs [38, 44].
414
+
415
+ § 7.3 BEYOND 3D DESIGN SOFTWARE
416
+
417
+ Although we focused on step-by-step tutorials for 3D design software, we believe that the features of TwoTorials could be easily adapted to work in other software domains with a strong visual element, such as photo editing or the creation of games using game engines (e.g., Unity). From a technical standpoint, our system could be used with minimal modifications with any web-based software application.
418
+
419
+ § 7.4 LIMITATIONS
420
+
421
+ This work adds to a growing body of research on software learning (e.g., [13, 36, 51, 59, 78]) and provides insights into how step-by-step tutorial systems can be adapted to support remote cooperative learning. However, there are several limitations to this work which should be addressed in future research. First, our study was conducted with a small, specific sample (employees of a software company), which may limit the generalizability of the findings. A good next step would be to deploy TwoTorials in an online 3D design course with remote students. Second, TwoTorials was compared against a baseline that offered minimal coordination features. This was intentional, in order to reveal which of TwoTorials' features were most useful for supporting collaboration, but future work should compare these features to those offered in state-of-the-art online collaborative learning solutions, such as free-form web curation tools [44, 61]. Third, prior research has shown that ethnocultural norms and backgrounds can influence the effectiveness of cooperative learning [49, 60, 86], so it is important to expand the evaluation of this type of system to a much larger and more diverse set of participants. Finally, we did not collect data on the long-term effects or value of our system in sustaining learner motivation or encouraging more extensive learning of a domain, which would be an interesting avenue for future work.
422
+
423
+ § 8 CONCLUSION
424
+
425
+ This work has demonstrated an approach and a set of features for creating cooperative remote software tutorial systems. Our findings indicate that participants enjoy the cooperative learning experience that this approach enables. Overall, we see this work as a first step toward a future where anyone, anywhere can gain the learning benefits of working alongside peers on interesting and engaging projects.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/H3GlkWt46f9/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,569 @@
1
+ # Accidental Landmarks: How Showing (and Removing) Emphasis in a 2D Visualization Affected Retrieval and Revisitation
2
+
3
+ Category: Research
4
+
5
+ ## Abstract
6
+
7
+ Many visualizations display large datasets in which it can be difficult for users to find (and re-find) specific items. In systems that provide highlighting tools (e.g., filtering or brushing), emphasized points can become "accidental landmarks" - visual anchors that help users remember locations that are near the emphasized points. Accidental landmarks could be useful (by aiding revisitation), but if users become dependent on them, removing or changing the highlighting could cause problems. We provide designers with information about these issues through two crowdsourced studies in which people learned a set of item locations (in visualizations with or without emphasized points); we then removed or changed the highlighting to see if performance suffered. In the first study, which used a simple grid of points, results showed that changing or removing emphasized points significantly impeded users' ability to re-find targets, but the highlighting did not improve performance during training. In the second study, which used a more complex scatterplot, we found that highlighting significantly improved performance during training, but that removing or changing the emphasis points only reduced re-finding performance for a few target types. Our work demonstrates that visualization designers need to consider how transient visual effects such as emphasis can affect spatial learning and revisitation, and provides new knowledge about how visual features can affect performance.
8
+
9
+ Index Terms: Human-centered computing-Visualization-Visualization techniques; Human-centered computing-Visualization-Visualization design and evaluation methods
10
+
11
+ ## 1 INTRODUCTION
12
+
13
+ A ubiquitous task in large visualizations is finding and re-finding specific items - to inspect values during exploration or compare results to look for insights [28]. Finding and re-finding can be difficult, however, when objects in visualizations are visually undifferentiated (e.g., dots in a scatterplot), and names or labels are only available through inspection (e.g., hovering over a dot); in many visualizations, finding items for the first time can involve laborious visual search. Once the user finds an item, the problem changes to one of revisitation - i.e., finding items that have already been visited. Revisitation can be much faster than visual search if the user can remember where the item was [39, 57]; however, the undifferentiated nature of data items in many visualizations provides little support for users' spatial memory.
14
+
15
+ One way to support the development of spatial memory - and thus support revisitation - is to include landmarks in the visual presentation. Landmarks are obvious visual features that are noticeably different from their surroundings, and that can provide a frame of reference in which users can remember nearby locations based on their relative position to the landmark. Structural elements such as corners can be strong landmarks [55], and previous research has also shown that adding artificial landmarks such as coloured blocks can provide valuable anchors for spatial learning when there are a large number of items in the dataset [56].
16
+
17
+ Information visualizations often add visual features such as colour to a set of items in the presentation (through actions such as highlighting a subset of the dataset) and can contain clusters of data that serve as spatial landmarks - but the reason for these features is almost never to add landmarks. Instead, visual highlighting is typically the result of a user operation such as filtering or brushing: for example, the user might set a filter threshold on a third variable to emphasize datapoints in a scatterplot that are above that threshold (see Figure 1), or utilize dynamic queries [61] to hide/show items. We note that in some visualizations all datapoints are coloured or augmented based on an attribute variable, but here we consider representations that only provide a standard glyph for each datapoint.
18
+
19
+ ![01963e5b-408a-73d7-a8ef-8dee81eeec96_0_944_406_681_852_0.jpg](images/01963e5b-408a-73d7-a8ef-8dee81eeec96_0_944_406_681_852_0.jpg)
20
+
21
+ Figure 1: Top: Screen capture of the Tableau visualization tool. Users highlight data points through the "Marks Card" that allows specification of highlights and colours during exploratory data analysis. Bottom: Screen capture of a document explorer tool, highlighting document positions based on filters (from https://bit.ly/3BQZue2).
22
+
23
+ When a visualization has a subset of datapoints that are visually emphasized, the highlighted points can become "accidental landmarks" - items that have the visual characteristics of landmarks, even though this is unintended by the designer. When users find and re-find items in a visualization that has some items highlighted, they may start to use the accidental landmarks as anchors for finding nearby items (e.g., "the item I need to remember in the scatterplot is just below the red item").
24
+
25
+ These accidental landmarks can be useful by providing anchors for revisitation, but they could also cause problems if users become dependent on them, because the highlights could disappear or change (e.g., when a user selects a different subset to emphasize). If a user comes to rely on the visual landmarks, when they eventually need to revisit data points without this aid, they will have difficulty because the aid is missing or different. This phenomenon of users becoming dependent on external aid or feedback is known as the guidance hypothesis [44, 47], which suggests that a reduction in effort provided by guidance during training will lead to poorer retention [17]. However, research contrasting with the guidance hypothesis suggests that spatial knowledge can also be gained through incidental learning [3, 26], which occurs by simply interacting with an environment in a spatial fashion.
26
+
27
+ These competing hypotheses mean that it is difficult to predict what will happen to spatial learning and revisitation when accidental landmarks occur in visualizations. To determine both the potential benefits and risks of visual emphasis that could be used as landmarks, we carried out two between-participants crowdsourced studies (N = 180) to test the effects of highlighting points in scatterplots, and then removing or changing the emphasized items.
28
+
29
+ In our first study, we asked participants to find and re-find several targets in a simple grid visualization that did not provide strong structural or layout landmarks (other than corners and edges). We tested three conditions: a baseline version with no emphasis, a version with emphasis that was removed after training, and a version with emphasis that was changed to a different subset after training. We measured people's performance during three training blocks where any emphasis effects were present, and in a fourth block where the emphasis was removed or changed.
30
+
31
+ Results of the first study showed that accidental landmarks did not improve search time or number of hovers during the three training blocks, but did have an effect on performance when removed or changed - in the fourth block, both search time and hovers increased substantially when compared to the no-landmarks condition. In addition, the results were stronger for some targets (e.g., for the target that was emphasized during training, there was a larger detriment to removing / changing the highlighting). Subjective results showed that participants felt that finding targets was more difficult when the highlights were removed or changed.
32
+
33
+ Our second study tested the same experimental conditions, but in a more-complex scatterplot based on a real-world Gapminder dataset [43]; this visualization had substantially more internal structure that provided additional landmarks (such as clusters of points, edges, and areas of white space). Results of the second study showed that search time and hovers during the learning blocks were both lower with the accidental-landmarks conditions, but there was no significant decrease in performance when the highlighting was changed or removed. We attribute the change in results seen with Study 2 to the additional structural landmarks that were available in the more-complex scatterplot.
34
+
35
+ Our two studies provide new understanding of how 'accidental' visual features affect visual search, spatial learning, and revisitation in information visualizations. Our findings suggest that in visualizations without extensive structural or layout-based landmarks, participants may become overly dependent on visual emphasis that arises from filtering or brushing. In more complex visualizations, the value of accidental landmarks increases during early use, but the additional landmarks provided by structure and layout appear to mitigate any over-reliance on the highlighting. Our work makes three main contributions. First, we identify a phenomenon - emphasis that provides accidental landmarks in visualization - that has not been considered previously. Second, we provide empirical evidence that emphasis-based landmarks can provide a benefit for visual search (depending on the visualization), but can also cause problems when they are taken away or changed. Third, we provide new knowledge that can guide designers' choices about what emphasis and potential aids to use to support spatial awareness, and possible design improvements for emphasis effects that address some of the issues seen in our study.
36
+
37
+ ## 2 RELATED WORK
38
+
39
+ ### 2.1 Learning and Retrieval
40
+
41
+ A wide variety of research has been carried out to investigate how humans acquire knowledge and skills. Prior work in psychology has extensively studied human memory [5, 6, 13, 15], how the skills necessary for learning and retrieval are developed [2, 41], the development of learning abilities in children [24], and how sex differences may affect navigation and spatial orientation [35].
42
+
43
+ Anderson [2] and Fitts et al. [18] suggest that skill development occurs in three main stages: cognitive, associative and autonomous. When applied to 2D visual displays, users in the cognitive phase learn items through slow visual search and visual inspection (e.g., finding icons in a toolbar or files in a file browser [17]). In the associative stage, users understand the general contents of the dataset and begin to remember items and locations, allowing faster revisitation for some items. In this stage, however, users still typically perform visual search within a local area after reaching the vicinity of an object of interest. Finally, users in the autonomous stage have memorized item locations, and can recall and revisit an object's location without needing any visual search.
44
+
45
+ People learn object locations in 2D visualizations as a side effect of interacting with them, and the rate at which locations are learned follows a power law of practice [10]. In previous HCI research, several interfaces have shown the utility of spatial memory for improving performance. For example, Robertson et al.'s initial Data Mountain study, and a subsequent study by Jansen et al. that evaluated Data Mountain on a wall display, showed how the spatial arrangement of thumbnails in a spatial environment allows faster retrieval times than standard bookmarking systems [30, 42]. Similar benefits have also been found in tasks such as list revisitation [21] and command selection in interfaces [54, 58].
46
+
47
+ ### 2.2 Supporting Spatial Learning
48
+
49
+ Knowledge of the location of an item (be it in a natural environment or digital space) is often relative to other objects or items. People learn, organize, and communicate spatial knowledge by reorganizing the spatial relations among items in an environment [40]. Mou et al. suggested that human memory systems use frames of reference to specify the remembered locations of objects [40]: for example, Scarr et al. stated that "explicit rectangular boundaries, such as the walls of a room or the edges of a table, can generate a frame of reference" and added that a grid-based item layout can also support spatial knowledge by creating an implicit axis of reference [12].
50
+
51
+ Previous work on supporting spatial learning has considered two main strategies: spatially stable layouts, and landmarks. Researchers have demonstrated the benefits of laying out interfaces in ways that are spatially stable [46, 56]: for example, Gutwin et al. and later work by Cockburn et al. showed that a stable layout of commands in an interface can improve recall efficiency compared to hierarchical ribbons or menus [11, 21]. Similarly, Scarr et al.'s CommandMap showed that a spatially stable icon design on a desktop interface improved the recall of icons in real tasks [45, 46]. The benefits of spatial stability have also been shown in other interfaces such as smartphones [65], tablets [20], smartwatches [33], and virtual environments [19].
52
+
53
+ Landmarks are a second strategy for improving navigation performance. Landmarks are easily identifiable objects that have distinct spatial features (such as shape, colour, or semantic value [53]) that can provide a frame of reference for nearby objects. Similar to the benefits of landmarks in real life (e.g., using a prominent building when navigating a city), landmarks have exhibited potential in digital workspaces. Several types of landmark have been considered, such as the corners of a screen or the bezel on a device [20, 48], which can provide a strong reference for nearby objects. However, since these landmarks may not naturally occur in larger workspaces (e.g., there are no corners or edges in the middle of a display), researchers have also examined the use of hands [58] and the idea of adding artificial landmarks (e.g., a background picture, or simple coloured shapes) [56] to assist users in remembering the locations of objects in the visual field.
54
+
55
+ ### 2.3 Emphasis and Attention in Infovis
56
+
57
+ The goal of emphasis is to manipulate the visual features of a chosen data element to make it visually prominent so that a viewer's bottom-up attention is directed to an element of interest [23]. Many theories have been developed over time to explain how emphasis can guide a viewer's attention. For example, similarity theory, developed by Duncan and Humphreys, shows that the efficacy of emphasis decreases with increased target/non-target similarity and with decreased similarity between the non-targets [16]. Similarly, Guided Search theory by Wolfe describes a two-stage process for attention, first guided by visual salience (bottom-up attention), but adds that attention can be biased toward targets of interest (e.g., when a user is looking for a red circle) by weighting items that match the user's goal: for example, assigning a higher weight to red items [63].
58
+
59
+ Another theory, the relational account of attention, also builds on the premise that if users are given a specific task or have a feature they are interested in (e.g., searching for a red circle), attention will be guided to the mark that differs from the other marks in the given direction (in this case, attention will be guided to the reddest circle among all circles displayed) [16, 60].
60
+
61
+ Similarly, a recently proposed model suggests three main processes for how attention is guided when viewing a visualization: current goals, selection history and physical salience (bottom-up attention) [4]. This model suggests that there is an inherent bias to prioritize items that have been previously selected, which may differ from current goals, and as such, selection history, goal-driven selection and visual salience are competing processes, affecting the effectiveness of emphasis to serve as landmarks.
62
+
63
+ Consistency is a fundamental guideline in HCI for supporting spatial awareness and memory/recall capabilities [17, 45, 46]. Landmarks are known to supplement the capabilities of an interface by providing anchors around which users can build better spatial awareness. In the absence of a consistent interface - such as an interactive visualization that may change depending on actions such as filtering or changes in the underlying dataset - landmarks can provide a means of spatial learning within this uncertainty. However, landmarks in visualization remain relatively unexplored, and open questions include whether removing a landmark (such as when a user removes a highlighting feature in a visualization) or changing the landmarks (e.g., selecting a different set of objects to highlight) affects the spatial memory of previously learned objects. In addition, other factors such as the visual salience of these landmarks, current tasks, and previous selections may affect how users perform revisitation tasks in a visualization. In the following studies, we set out to determine the effects of using emphasis as a landmark in visualization for spatial awareness, and test the limits of emphasis by re-creating common tasks such as removing and changing emphasized objects.
64
+
65
+ ## 3 STUDY 1: EFFECTS OF ACCIDENTAL LAND- MARKS IN A SIMPLE GRID VISUALIZATION
66
+
67
+ We conducted an online experiment to explore whether accidental landmarks in a simple grid visualization would affect spatial location learning and performance, both when the assistance was present and after it was removed or changed. The study asked participants to repeatedly find a set of seven targets in an $8 \times 8$ grid that had few structural or layout-based landmarks, other than corners and edges; we recorded search time, hovers required to find a target, and errors.
68
+
69
+ ![01963e5b-408a-73d7-a8ef-8dee81eeec96_2_932_151_746_537_0.jpg](images/01963e5b-408a-73d7-a8ef-8dee81eeec96_2_932_151_746_537_0.jpg)
70
+
71
+ Figure 2: Example of the study system interface that participants would see when completing a trial. In the No-Landmarks condition, there were no red highlighted circles.
72
+
73
+ ### 3.1 S1 Study System
74
+
75
+ A web-based application was developed using HTML, CSS, and JavaScript (D3.js [7]) to display an 8x8 grid of circles that contained targets and distractors (some of which were also accidental landmarks). The interface presented the name of the target, and the user had to click on the target item to confirm a selection. Item names were not permanently visible, but could be shown in a tooltip by hovering the mouse over any item (names were taken from an existing plant-breeding dataset). Hover feedback was immediate (similar to commercial visualization systems); however, we only considered hovers with a duration of at least 300 ms for analysis, to remove hovers that were simply due to traversing over the items. An example of the study interface is shown in Figure 2.
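+
+ A minimal sketch of this hover-duration filter, using generic DOM event handling (not the authors' study code):
+
+ ```typescript
+ // Log a hover only if the cursor rests on an item for at least 300 ms.
+ const HOVER_THRESHOLD_MS = 300;
+
+ function instrumentItem(item: Element, log: (id: string) => void): void {
+   let timer: number | undefined;
+   item.addEventListener("mouseenter", () => {
+     // The tooltip shows immediately; the 300 ms threshold applies only to logging.
+     timer = window.setTimeout(() => log(item.id), HOVER_THRESHOLD_MS);
+   });
+   item.addEventListener("mouseleave", () => {
+     // Cancelled before the threshold: the cursor was just passing over.
+     if (timer !== undefined) window.clearTimeout(timer);
+   });
+ }
+ ```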
76
+
77
+ To ensure that targets and emphasis were compared fairly for each task type, we used a grid-style visualization. We used a simple grid for our first study in order to control the number of structural or layout-based landmarks in the visual presentation, and to control the distance between targets and landmarks. Although this style of visualization is less common than other types such as scatterplots, there are still many examples of grid-based visual layouts: for example, a visualization of a plant-breeding field trial would typically use a grid to match the physical layout of plots in the field; similarly, the document map shown in Figure 1 organizes items into rows and columns. We also used the same combination of targets and accidental landmarks for all study conditions to ensure equal difficulty. This required that we use a between-participants design for the study.
78
+
79
+ The study had three conditions that differed in terms of how accidental landmarks were used:
80
+
81
+ - No Landmarks: this condition provided no accidental landmarks - participants saw the plain grid of items, with no red highlights.
82
+
83
+ - Landmarks-Removed: in this condition, participants saw the same grid of items, but with six items coloured red (simulating a previous filtering operation that had highlighted these items as accidental landmarks). The red highlights were removed in the final block.
84
+
85
+ - Landmarks-Changing: this condition provided the same grid and red highlights as above during the training blocks, but in the fourth block the highlights were moved to a different set of items (rather than being removed altogether).
86
+
87
+ #### 3.1.1 S1 Targets
88
+
89
+ For the study, seven of the 64 items were used as targets, and six of the 64 were coloured red as accidental landmarks. One of the items was both a target and a landmark. Target positions were sampled from three areas of the grid [56]: three from the corner regions, two from the edges and two from the centre region. Targets and their locations are shown in Figure 3.
90
+
91
+ ![01963e5b-408a-73d7-a8ef-8dee81eeec96_3_158_486_734_434_0.jpg](images/01963e5b-408a-73d7-a8ef-8dee81eeec96_3_158_486_734_434_0.jpg)
92
+
93
+ Figure 3: Locations of the targets (shown here in blue, not shown in the study) in relation to the landmarks. Target NAM-63 (shown with a blue square) was both a target and a landmark. In the No-Landmarks condition, there were no red highlights.
94
+
95
+ ### 3.2 S1 Procedure
96
+
97
+ Each condition in the experiment followed seven phases: (1) informed consent, (2) demographics questionnaire, (3) vision test, (4) guided tour, (5) study tasks, (6) post-study questionnaires, and (7) debriefing. The specific questions and tasks for each condition are described in each condition's section below. Participants first completed informed consent and demographics forms, and were then asked to complete an Ishihara test and questionnaire to screen for colour vision deficiencies [29]. Participants then completed a guided tour through all the targets, after which they could proceed to the study.
98
+
99
+ #### 3.2.1 Guided Tour
100
+
101
+ Participants were first randomly assigned to one of the three study conditions. In the guided tour phase, the experimental system showed the grid (including red highlights if the condition included them). The system then took the participants on a "guided tour" of the seven targets, with each target shown one at a time, highlighted in blue. Participants had to click on the target to proceed to the next target. After all targets were presented, the interface automatically proceeded to the study.
102
+
103
+ #### 3.2.2 Study Phase
104
+
105
+ After the guided tour, participants completed the study trials. Every trial began by displaying the name of a target at the top of the screen (the name remained visible for the duration of the trial), and participants were asked to find and select the corresponding target item from the grid. Targets were presented in random order (sampling without replacement); locations of targets (and landmarks, if shown) were the same in all conditions. Participants could see item names immediately upon hovering over an item with the mouse. After each correct selection, the screen was blanked for 0.5 s to prevent contrast effects between trials. The study consisted of three training blocks in which landmarks (if part of the condition) were shown, and a fourth block in which any landmarks were either removed or changed. Because the Landmarks-Changing and Landmarks-Removed conditions used the same landmarks, these conditions were identical for the first three blocks. In the No-Landmarks condition, no landmarks were shown at any point. After completing all blocks, participants were asked to fill out post-study questionnaires, were shown debriefing information, and were compensated for their participation.
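+
+ The trial sequencing described here can be summarized in a short sketch (hypothetical structure, not the authors' code):
+
+ ```typescript
+ // Hypothetical trial generator: four blocks of seven targets, each block
+ // presented in a fresh random order (sampling without replacement).
+ function shuffled<T>(items: T[]): T[] {
+   const a = [...items];
+   for (let i = a.length - 1; i > 0; i--) {
+     const j = Math.floor(Math.random() * (i + 1));
+     [a[i], a[j]] = [a[j], a[i]];
+   }
+   return a;
+ }
+
+ function buildTrials(targets: string[], blocks = 4): { block: number; target: string }[] {
+   const trials: { block: number; target: string }[] = [];
+   for (let b = 1; b <= blocks; b++) {
+     // Landmarks (if any) are shown in blocks 1-3 and removed/changed in block 4.
+     for (const target of shuffled(targets)) trials.push({ block: b, target });
+   }
+   return trials;
+ }
+ ```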
106
+
107
+ ### 3.3 S1 Participant Recruitment
108
+
109
+ We recruited 90 participants ($\mu_{age} = 33.15$, $\sigma_{age} = 10.84$; 55 men, 33 women, 2 non-binary) across the three conditions (30 per condition) using Amazon's Mechanical Turk (MTurk), and gathered data through a custom browser-based experiment tool [31]. MTurk is an online platform where requesters can post tasks that workers can opt in to complete. Data collected from MTurk has previously been used in a variety of human-computer interaction studies [14, 32, 34, 51] and to model perception in visualization [27, 52], including assessing separability of variables [50], measuring colormaps [37], and effectively detecting motion [59]. Using MTurk, however, requires special care to ensure the integrity of the data, as bots or negligent workers must be filtered out. Our study required workers to have over a 90% HIT acceptance rate (a measure of the quality of a worker's previous tasks). We also checked the questionnaire responses to ensure that the same answer was not used for all of the questions, as well as whether the study was completed too quickly or too slowly.
110
+
111
+ All participants were paid $3 for completing the study, which took approximately 15 minutes. Self-reported estimates of monthly visualization usage among participants averaged 33 hours (SD = 66.14), with pie charts, line charts, bar graphs and maps/weather charts as the most commonly used or viewed charts.
112
+
113
+ ### 3.4 S1 Study Design
114
+
115
+ Our goal was to understand the effects of landmarks on spatial learning and revisitation in visualizations. Our main research questions (RQ) for this study were:
116
+
117
+ - RQ-1: Do accidental landmarks improve finding and re-finding when they are present (i.e., decreased search time, number of hovers, and error rate)?
118
+
119
+ - RQ-2: Does removing or changing landmarks after a learning period affect re-finding (i.e., increased search time, hover counts, and error rate)?
120
+
121
+ To investigate these questions, the study used a mixed factorial design with three factors:
122
+
123
+ - Condition (between-subjects): No-Landmarks, Landmarks-Removed, Landmarks-Changing
124
+
125
+ - Target Locations (within subjects): seven target locations (see Figure 3)
126
+
127
+ - Blocks (within subjects): 1-4 (blocks 1-3 are training; block 4 removes/changes any landmarks).
128
+
129
+ Our primary dependent variables were search time, hover counts (only included if longer than 300 ms), error counts (i.e., incorrect clicks), and subjective ratings of difficulty and effort from post-session questionnaires. Targets were the same for all participants.
130
+
131
+ ## 4 S1 STUDY RESULTS
132
+
133
+ We report effect sizes for significant ANOVA results as generalized eta-squared $\eta^2$ (considering .01 small, .06 medium, and > .14 large [36]). Outliers were determined as any trial with a search time greater than 3 SDs above the block's mean; 73 of the 2520 trials were removed from the analysis. All pairwise t-tests were corrected using the Holm-Bonferroni method.
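+
+ A generic sketch of these two analysis steps - the 3-SD outlier filter and the Holm-Bonferroni correction - is shown below (illustrative only, not the authors' analysis scripts):
+
+ ```typescript
+ // Remove trials more than 3 SDs above their block's mean search time.
+ function filterOutliers(times: number[]): number[] {
+   const mean = times.reduce((s, t) => s + t, 0) / times.length;
+   const sd = Math.sqrt(
+     times.reduce((s, t) => s + (t - mean) ** 2, 0) / (times.length - 1)
+   );
+   return times.filter(t => t <= mean + 3 * sd);
+ }
+
+ // Holm-Bonferroni: sort p-values ascending; compare the k-th smallest
+ // (0-based) against alpha / (m - k), rejecting until the first failure.
+ function holmBonferroni(pValues: number[], alpha = 0.05): boolean[] {
+   const m = pValues.length;
+   const order = pValues.map((p, i) => ({ p, i })).sort((a, b) => a.p - b.p);
+   const reject = new Array<boolean>(m).fill(false);
+   for (let k = 0; k < m; k++) {
+     if (order[k].p <= alpha / (m - k)) reject[order[k].i] = true;
+     else break; // stop at the first non-significant p-value
+   }
+   return reject;
+ }
+ ```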
134
+
135
+ ![01963e5b-408a-73d7-a8ef-8dee81eeec96_4_151_151_747_626_0.jpg](images/01963e5b-408a-73d7-a8ef-8dee81eeec96_4_151_151_747_626_0.jpg)
136
+
137
+ Figure 4: Mean trial search times (± s.e.) across learning blocks (1-3). Block 4 shows the results of removing or changing emphasized objects.
138
+
139
+ ### 4.1 S1 Effects of Landmarks on Learning: Search time, Hovers, and Errors in Learning Blocks
140
+
141
+ #### 4.1.1 S1 Learning Blocks - Search Times:
142
+
143
+ Search time in the test trials was measured from the time a target name appeared on the screen to the time the system registered a correct item selection. Search times across all blocks for the three conditions are shown in Fig 4.
144
+
145
+ For the learning blocks, a 3x3x7 RM-ANOVA (Condition x Block x Target) showed a main effect of Condition ($F_{2,178} = 15.21$, $p < 0.001$, $\eta^2 = 0.02$), Block ($F_{2,174} = 18.00$, $p < 0.001$, $\eta^2 = 0.02$), and Target ($F_{6,522} = 6.16$, $p < 0.001$, $\eta^2 = 0.01$), and an interaction between Condition x Block ($F_{4,174} = 2.44$, $p = 0.004$, $\eta^2 = 0.01$) on search time. Post-hoc pairwise t-tests showed significant differences between No Landmarks and both landmark conditions (both $p < 0.05$).
146
+
147
+ Across all blocks and targets, search time was lowest in the No-Landmarks condition (mean 12268ms); the averaged mean of the landmark conditions was 16685ms (note that both landmark conditions were identical in the learning phase, so any difference between them is due to group differences). To further investigate the Condition x Block interaction and consider the rate at which the different groups improved, Fig 5 shows a version of the data that normalizes the other blocks based on block 1 performance. Fig 5 suggests that there were group differences between the two landmark conditions, but also indicates that participants in both landmark conditions learned less quickly than in the No-Landmarks condition.
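+
+ Our reading of this block-1 normalization amounts to the following (the exact computation is not specified in the text, so this is an assumption):
+
+ ```typescript
+ // Express each block's mean search time relative to block 1 (block 1 = 1.0).
+ function normalizeToBlock1(blockMeans: number[]): number[] {
+   const baseline = blockMeans[0]; // block 1 mean
+   return blockMeans.map(m => m / baseline);
+ }
+
+ // Example: normalizeToBlock1([12000, 9000, 7500, 7000])
+ // -> [1.0, 0.75, 0.625, 0.583...]
+ ```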
148
+
149
+ #### 4.1.2 S1 Learning Blocks - Hovers:
150
+
151
+ We measured the number of hovers as the number of times the participant held the cursor over a target for 300 ms or more to show the name. Hovers are a more sensitive measure of progress through the stages of learning and performance: as a participant moves through the different blocks, there should be a reduction in the number of items that they need to inspect. Mean hovers per trial are shown in Figure 6.
152
+
153
+ For the learning blocks, a similar 3x3x7 RM-ANOVA (Condition x Block x Target) showed a main effect of Block ($F_{2,178} = 11.047$, $p < 0.001$, $\eta^2 = 0.03$) and Target ($F_{6,522} = 23.34$, $p < 0.001$, $\eta^2 = 0.02$) on hover count, and also showed two interactions: Condition x Block ($F_{4,356} = 1.84$, $p < 0.01$, $\eta^2 = 0.01$) and Block x Target ($F_{12,1044} = 1.84$, $p < 0.001$, $\eta^2 = 0.01$). However, the ANOVA found no main effect of Condition (p = 0.37). Across all blocks and targets, hovers were similar in all conditions, with the lowest mean in the No-Landmarks condition (10.12 hovers per correct selection) compared to the averaged mean of the landmark conditions at 10.81 hovers. We again investigated the Condition x Block interaction to consider learning rates. Fig 7 represents the same data, taking block 1 as a baseline and normalizing the following blocks based on block 1 performance. Fig 7 again suggests group differences in the landmark conditions, and shows similar learning rates across all three conditions.
154
+
155
+ ![01963e5b-408a-73d7-a8ef-8dee81eeec96_4_924_153_752_623_0.jpg](images/01963e5b-408a-73d7-a8ef-8dee81eeec96_4_924_153_752_623_0.jpg)
156
+
157
+ Figure 5: Search time results after block 1 normalization
158
+
159
+ #### 4.1.3 S1 Learning Blocks - Errors:
160
+
161
+ We measured errors as the number of incorrect clicks before choosing a correct target. As participants could hover over targets until they found the correct one, errors were low overall, with an average of 0.63 errors per trial across all conditions and blocks. For the learning blocks, a 3x3x7 RM-ANOVA (Condition x Block x Target) showed no main effect of Condition (p=0.42), Block (p=0.08), or Target (p=0.59) on errors.
162
+
163
+ #### 4.1.4 S1 Learning Blocks - Target-by-Target Analysis:
164
+
165
+ As the ANOVA results showed a main effect of Target on search time and hover counts, we looked into the results of each specific target. Overall, as seen in Figures 8 and 9, we found that the targets that required the fewest hover actions and were found fastest were NAM-12 (9964ms) and NAM-9 (11905ms), both of which were located at or near the corners. We also found that the hardest targets were those located in the centre, such as NAM-18 (19295ms) and NAM-2 (18345ms). Although previous research suggests that targets such as NAM-18 and NAM-2 should have been much more difficult in the No-Landmarks condition (where there were no visual features to help users remember these locations), search times actually favoured the No-Landmarks condition. Even with target NAM-63 - which was highlighted in the landmark conditions - we found that participants in the No-Landmarks condition found the target faster (9919ms) than those in the landmark conditions (15103ms).
166
+
167
+ ![01963e5b-408a-73d7-a8ef-8dee81eeec96_5_150_152_748_625_0.jpg](images/01963e5b-408a-73d7-a8ef-8dee81eeec96_5_150_152_748_625_0.jpg)
168
+
169
+ Figure 6: Mean trial hover counts (± s.e.) across learning blocks (1-3). Block 4 shows the results of removing or changing emphasized objects.
170
+
171
+ ### 4.2 S1 Effects of Change/Removal of Landmarks: Search Time, Hovers, and Errors from Block 3 to 4
172
+
173
+ #### 4.2.1 S1 Change/Removal - Search Times:
174
+
175
+ To look for effects of removing/changing the accidental landmarks, we carried out an analysis using only block 3 (the block before the removal/change) and block 4 (the block after the removal/change). The 3x2 RM-ANOVA (Condition x Block) found an interaction between the two factors ($F_{2,178} = 3.43$, $p = 0.03$, $\eta^2 = 0.02$) in terms of search time.
176
+
177
+ Search times increased in Landmarks-Removed from 13472ms in Block 3 to 15034ms in Block 4, and in Landmarks-Changing from 11830ms to 13774ms. By contrast (and as expected from previous literature on learning), search performance continued to improve in the No-Landmarks condition: from 10128ms in Block 3 to 7768ms in Block 4. To check whether each condition changed significantly between blocks 3 and 4, we carried out additional follow-up t-tests; however, no specific differences were found for this per-condition analysis (all $p > 0.05$).
178
+
179
+ Following the analysis on the learning blocks presented above, and the significant interaction between Condition x Block for blocks 3 and 4, we carried out an analysis of the final block using a similar 3x7 RM-ANOVA (Condition x Target). We found a main effect of Condition ($F_{2,178} = 12.61$, $p < 0.001$, $\eta^2 = 0.04$) and Target ($F_{6,522} = 7.84$, $p < 0.001$, $\eta^2 = 0.04$) on search times, and an interaction between Condition x Target ($F_{12,522} = 1.84$, $p < 0.001$, $\eta^2 = 0.02$). Post-hoc pairwise t-tests again showed significant differences between No-Landmarks and both landmark conditions (both $p < 0.05$).
180
+
181
+ #### 4.2.2 S1 Change/Removal - Hovers:
182
+
183
+ To look for effects of removing/changing the landmarks on hover count, we carried out a similar analysis using hover data from block 3 and block 4. The 3x2 RM-ANOVA (Condition x Block) found an interaction between Condition x Block ($F_{2,178} = 3.49$, $p = 0.03$, $\eta^2 = 0.01$). Hovers increased in Landmarks-Removed from 7.75 to 8.97 and in Landmarks-Changing from 6.64 to 7.9. As with search time, performance continued to improve in the No-Landmarks condition: from 7.1 hovers in Block 3 to 5.7 in Block 4. We carried out additional follow-up t-tests to check whether each condition changed significantly between blocks 3 and 4; however, no specific differences were found for this per-condition analysis (all $p > 0.05$).
184
+
185
+ ![01963e5b-408a-73d7-a8ef-8dee81eeec96_5_926_154_751_623_0.jpg](images/01963e5b-408a-73d7-a8ef-8dee81eeec96_5_926_154_751_623_0.jpg)
186
+
187
+ Figure 7: Hover count results after block 1 normalization
188
+
189
+ Following the significant interaction between Condition x Block, we carried out an analysis of the final block using a similar 3x7 RM-ANOVA (Condition x Target). We found main effects of Condition ($F_{2,178} = 3.28$, $p = 0.04$, $\eta^2 = 0.04$) and Target ($F_{6,522} = 7.22$, $p < 0.001$, $\eta^2 = 0.05$) on hovers, and an interaction between Condition x Target ($F_{12,522} = 2.04$, $p < 0.001$, $\eta^2 = 0.02$). Post-hoc pairwise t-tests showed significant differences between No-Landmarks and both landmark conditions (both $p < 0.05$).
190
+
191
+ #### 4.2.3 S1 Target-by-Target Analysis:
192
+
193
+ As the ANOVA results for the final block showed a main effect of Target on search time and hover counts, and also showed interactions between Condition and Target, we again looked into the results of each specific target.
194
+
195
+ The Condition x Target interaction indicates that the effect of Condition on search time and hover count varied by target. Inspecting the target-by-target charts shows that there were no targets for which the landmarks were particularly helpful during training, and that the majority of targets were affected by the removal of the landmark. To explore this further, we repeated an ANOVA for each target in Block 4, to see which targets were affected by Condition. For search times, the following targets showed significant effects of Condition: NAM-9 (p=0.04), NAM-18 (p=0.03), NAM-32 (p=0.01), and NAM-63 (p=0.01). For hover count, only NAM-9 showed a significant effect of Condition (p=0.02). In all these cases, search times and hover counts were significantly better in the No-Landmarks condition.
196
+
197
+ Similar to the learning blocks, we found that targets located near the corners required the fewest hover actions and were found fastest: NAM-12 (8926ms) and NAM-9 (7361ms). The most difficult targets were those located in the centre: NAM-18 (18456ms) and NAM-2 (16945ms). For NAM-63, which was located directly on a landmark during training, we found that participants in the No-Landmarks condition on average found the target twice as fast (6328ms) as those in the landmark conditions (14197ms).
198
+
199
+ ![01963e5b-408a-73d7-a8ef-8dee81eeec96_6_180_153_1430_1170_0.jpg](images/01963e5b-408a-73d7-a8ef-8dee81eeec96_6_180_153_1430_1170_0.jpg)
200
+
201
+ Figure 8: Mean search time (± s.e.) by Target and Block
202
+
203
+ ### 4.3 S1 Subjective Measures
204
+
205
+ #### 4.3.1 S1 Perceived Change in Difficulty
206
+
207
+ For each specific target, we also asked participants in the landmark conditions to rate on a 1-7 scale how much more difficult it was to find the target in block 4 compared to block 3. (We did not ask this question of participants in the No-Landmarks condition, but we can assume that they would not have seen any major difference in difficulty between blocks 3 and 4.) Mean results are shown in Fig 10. Overall, the target that participants felt was least affected by changing or removing the landmarks was NAM-9, located in the bottom right corner of the grid.
208
+
209
+ Using an Aligned Rank Transform on the difficulty ratings [62], one-way ANOVAs were performed for each of the targets using Condition as the factor. These ANOVAs found a significant effect of Condition only for the NAM-40 target (middle of the last column in the grid): for this target, participants in the Landmarks-Removed condition rated the target as more difficult (4.65) than participants in the Landmarks-Changing condition (3.8).
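+
+ For a one-way design like this, the Aligned Rank Transform reduces to ranking the responses and running a standard ANOVA on the ranks, since there are no other effects to align out. A hedged Python sketch of that special case, with hypothetical file and column names:
+
+ ```python
+ import pandas as pd
+ from scipy import stats
+
+ ratings = pd.read_csv("difficulty_ratings.csv")  # assumed columns
+ target = ratings[ratings["target"] == "NAM-40"].copy()
+
+ # Rank-transform the 1-7 ratings, then run a one-way ANOVA on ranks.
+ target["rank"] = target["rating"].rank()
+ groups = [g["rank"].values for _, g in target.groupby("condition")]
+ f_stat, p_value = stats.f_oneway(*groups)
+ print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
+ ```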
210
+
211
+ #### 4.3.2 S1 Perceived Effort
212
+
213
+ Participants' perceived effort was recorded using the NASA-TLX questionnaire [25]. For the Landmarks-Changing and Landmarks-Removed conditions, we specifically asked the effort questions in relation to perceived effort after the landmarks were changed or removed. We used an Aligned Rank Transform on the aggregated responses to perform a one-way ANOVA on each of the TLX questions using Condition as a factor. The mean responses to the TLX questions are shown in Fig 11. Significant effects were found in the responses for perceived success and frustration (both p < 0.05). Holm-corrected post-hoc pairwise t-tests were performed on the questions that had significant effects. For perceived success, the pairwise comparison found a significant difference between Landmarks-Removed and No-Landmarks, with participants reporting greater perceived success with no landmarks than when landmarks were initially presented and then removed altogether.
214
+
215
+ #### 4.3.3 S1 Participant Comments
216
+
217
+ At the end of the study, we asked participants to explain their general process of finding targets and whether some were easier or harder than others. Their responses generally echoed several of the findings from the previous sections. While search times and hover counts did not show a clear improvement in conditions with landmarks, some participants' remarks did point to the benefit of having landmarks. For example, P1 stated "When they [targets] were close to a different colored circle, in the immediate vicinity, it made it easier." Similarly, P3 stated "For some of the targets, I was able to find them easily because they were near a red colored circle." A few other participants also remarked that specific targets were easier; for example, P8 said, "NAM-9 in the corner, [and] it was [easier] around the colored ones."
218
+
219
+ ![01963e5b-408a-73d7-a8ef-8dee81eeec96_7_186_155_1427_1170_0.jpg](images/01963e5b-408a-73d7-a8ef-8dee81eeec96_7_186_155_1427_1170_0.jpg)
220
+
221
+ Figure 9: Mean hover count (± s.e.) by Target and Block (hovers counted after 300ms)
222
+
223
+ [Figure 10 chart: mean score on a 1-7 scale (1 = no increase in difficulty, 4 = medium increase, 7 = large increase) for targets NAM63, NAM9, NAM18, NAM12, NAM2, and NAM32, by condition (Landmarks-Removed, Landmarks-Changing).]
+
+ Figure 10: S1 Perceived Change in Difficulty in Block 4
238
+
239
+ For participants in the No-Landmarks condition, comments suggested that people had to resort to other techniques: for example,
240
+
241
+ ![01963e5b-408a-73d7-a8ef-8dee81eeec96_7_928_1422_746_373_0.jpg](images/01963e5b-408a-73d7-a8ef-8dee81eeec96_7_928_1422_746_373_0.jpg)
242
+
243
+ Figure 11: S1 Mean NASA Task Load Index scores, by condition
244
+
245
+ P10 stated "I used Cartesian indication (from high school; 2 axes: x and y)." P12 said, "32 was in the first column second row which was easy to find, 9 was at the end and 63 was at the third column last raw" and P13 stated "corner, first and last row."
246
+
247
+ ## 5 STUDY 2: EFFECTS OF ACCIDENTAL LANDMARKS IN MORE-COMPLEX SCATTERPLOTS
248
+
249
+ Our second study considers visualizations that are more complex than the grid used in the first study. Visualizations with irregular layout (e.g., typical scatterplots) may contain more structural and layout-based visual features that can act as landmarks - such as clusters of points and white space, in addition to edges and corners. We need to understand how users perceive landmarks in more complex visualizations, so we designed our study task to show a scatterplot visualization based on a real-world dataset from the Gapminder site [43].
250
+
251
+ ### 5.1 S2 Study System and Targets
252
+
253
+ A web-based system was developed using JavaScript and d3 [7] that showed a scatterplot based on Gapminder data [43]. The X axis showed the per-capita income of a country, and the Y axis showed the life expectancy in that country for a single selected year (similar to other online recreations of the dataset). The interface presented the names of target countries, and the user had to click on a target item to confirm a selection. Similar to Study 1, item names were shown in a tooltip by hovering the mouse on an item. Hover feedback was immediate, but as with Study 1 we only considered hovers with a duration of 300ms or more for analysis.
254
+
255
+ To ensure that emphasis was tested fairly across conditions, we used the same scatterplot, dataset, and combination of targets/landmarks (except for the No-Landmarks condition, which did not require landmarks) for each condition.
256
+
257
+ For the study we used seven targets, out of 142 total items on the screen; eight items were initially highlighted. The seven targets were chosen to have different inherent difficulties based on their proximity to landmarks and other potential spatial cues in the visualization (e.g., edges, clusters, and white space). Targets and locations are shown in Figure 12.
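+
+ To make the setup concrete, the sketch below draws a toy version of this kind of display: white circles with a few red-highlighted "accidental landmarks". It uses matplotlib and random data as stand-ins; the actual study system was built in JavaScript/d3 on real Gapminder data.
+
+ ```python
+ import numpy as np
+ import matplotlib.pyplot as plt
+
+ rng = np.random.default_rng(1)
+ x = rng.lognormal(9, 1, 142)   # fake per-capita incomes
+ y = rng.normal(70, 8, 142)     # fake life expectancies
+
+ fig, ax = plt.subplots()
+ ax.scatter(x, y, facecolor="white", edgecolor="gray", s=60)
+
+ # Eight emphasized items, drawn over the top in red.
+ idx = rng.choice(142, size=8, replace=False)
+ ax.scatter(x[idx], y[idx], facecolor="red", edgecolor="darkred", s=60)
+
+ ax.set_xscale("log")
+ ax.set_xlabel("Per-capita income")
+ ax.set_ylabel("Life expectancy")
+ plt.show()
+ ```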
258
+
259
+ ![01963e5b-408a-73d7-a8ef-8dee81eeec96_8_153_1181_708_645_0.jpg](images/01963e5b-408a-73d7-a8ef-8dee81eeec96_8_153_1181_708_645_0.jpg)
260
+
261
+ Figure 12: Locations of the targets (shown here in blue, not shown in the study) in relation to the landmarks. Target Nicaragua (shown with a red dot) was both a target and a landmark. In the No-Landmarks condition, there were no red highlights.
262
+
263
+ ### 5.2 S2 Procedure
264
+
265
+ We followed a similar procedure to Study 1, with seven phases: (1) informed consent, (2) demographics questionnaire, (3) vision test, (4) guided tour, (5) study tasks, (6) post-study questionnaires, and (7) debriefing. Participants similarly first completed a guided tour through all the targets, after which the study proceeded to the test phase.
266
+
267
+ ### 5.3 S2 Participant Recruitment
268
+
269
+ We initially recruited 90 participants across the three conditions (30 per condition) using Amazon's Mechanical Turk (MTurk), and gathered data through a custom browser-based experiment tool [31]. However, 3 participants were removed for having an overall completion time for the study more than 3 SD from the mean, and an additional participant was removed for completing experimental tasks more than once (refreshing the browser causes the tasks to restart). The remaining participants were distributed as follows: 30 in Landmarks-Removed, 28 in Landmarks-Changing, and 28 in No-Landmarks ($\mu_{age} = 35.47$, $\sigma_{age} = 12.11$; 55 men, 30 women, 1 preferred not to answer). Our study required workers to have over a 90% HIT acceptance rate, and we also checked the questionnaire responses to ensure that the same answer was not used for all of the questions, as well as whether the study was completed too quickly or too slowly (which could indicate that participants simply clicked through the study, or were focused on additional tasks).
270
+
271
+ All participants were paid \$3 for completing the study, which took approximately 15 minutes.
272
+
273
+ ### 5.4 S2 Study Design
274
+
275
+ The goal for this study was to understand the effects of landmarks on spatial learning in more-complex visualizations. Our main research questions (RQ) for this study were:
276
+
277
+ - RQ-1: Do landmarks improve finding and re-finding targets in scatterplots when they are present (i.e., decreased search time, number of hovers, and error rate)?
278
+
279
+ - RQ-2: Does removing or changing landmarks after a learning period affect re-finding (i.e., increased search time, hover counts, and error rate)?
280
+
281
+ To investigate these questions, the study used a mixed factorial design with three factors:
282
+
283
+ - Condition (between-subjects): No-Landmarks, Landmarks-Removed, Landmarks-Changing
284
+
285
+ - Target Locations (within-subjects): seven target locations (see Figure 12)
286
+
287
+ - Blocks (within-subjects): 1-4 (blocks 1-3 are training; block 4 removes/changes any landmarks).
288
+
289
+ Similar to Study 1, our dependent variables were search time, hover counts (only included if longer than 300ms), error counts (i.e., incorrect clicks), and subjective ratings of effort from post-session questionnaires. Targets were the same for all participants.
290
+
291
+ ## 6 S2 STUDY RESULTS
292
+
293
+ We again report effect sizes for significant ANOVA results as generalized eta-squared $\eta^2$ (considering .01 small, .06 medium, and >.14 large [36]). Outliers were determined as any trial with a search time greater than 3 SDs above the block's mean; 85 of the 2408 trials were removed from the analysis. All pairwise t-tests were corrected using the Holm-Bonferroni method.
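+
+ A small pandas sketch of this outlier rule (hypothetical file and column names; the 3-SD cutoff is computed per block, as described above):
+
+ ```python
+ import pandas as pd
+
+ trials = pd.read_csv("study2_trials.csv")  # assumed trial log
+
+ # Drop trials whose search time exceeds the block mean by > 3 SD.
+ by_block = trials.groupby("block")["search_time_ms"]
+ cutoff = by_block.transform("mean") + 3 * by_block.transform("std")
+ kept = trials[trials["search_time_ms"] <= cutoff]
+ print(f"removed {len(trials) - len(kept)} of {len(trials)} trials")
+ ```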
294
+
295
+ ![01963e5b-408a-73d7-a8ef-8dee81eeec96_9_150_150_717_596_0.jpg](images/01963e5b-408a-73d7-a8ef-8dee81eeec96_9_150_150_717_596_0.jpg)
296
+
297
+ Figure 13: Scatterplot mean trial search times (± s.e.) across learning blocks (1-3). Block 4 shows the results of removing or changing emphasized objects.
298
+
299
+ ### 6.1 S2 Effects of Landmarks on Learning: Search Time, Hovers, and Errors in Learning Blocks
300
+
301
+ #### 6.1.1 S2 Learning Blocks - Search Times
302
+
303
+ Search time in the test trials was measured from the time a target name appeared on the screen to the time a user correctly found a target. Search times across all blocks for the three conditions are shown in Fig 13.
304
+
305
+ For the learning blocks, a 3x3x7 RM-ANOVA (Condition x Block x Target) showed a main effect of Condition ($F_{2,172} = 4.99$, $p = 0.007$, $\eta^2 = 0.005$), Block ($F_{2,172} = 70.06$, $p < 0.001$, $\eta^2 = 0.07$), and Target ($F_{6,510} = 7.72$, $p < 0.001$, $\eta^2 = 0.02$). Aggregated across all training blocks and targets, participants took 23367ms to find a target in the Landmarks-Removed condition and 21907ms in the Landmarks-Changing condition, compared to an average of 25339ms in the No-Landmarks condition. A post-hoc pairwise t-test showed a significant difference in search times between Landmarks-Changing and No-Landmarks (p = 0.008) for the learning blocks.
306
+
307
+ #### 6.1.2 S2 Learning Blocks - Hovers
308
+
309
+ We again measured the number of hovers as the number of times the participant held the cursor over an element for 300ms or more to show the name. Mean hovers per trial are shown in Figure 14. For the learning blocks, a 3x3x7 RM-ANOVA (Condition x Block x Target) showed a main effect of Condition ($F_{2,172} = 8.92$, $p < 0.001$, $\eta^2 = 0.006$), Block ($F_{3,255} = 52.57$, $p < 0.01$, $\eta^2 = 0.06$), and Target ($F_{6,510} = 12.26$, $p < 0.001$, $\eta^2 = 0.02$). On average (across all learning blocks), it took participants 18.25 hovers to find a target in Landmarks-Removed and 22.24 hovers in Landmarks-Changing, while No-Landmarks required 28.48 hovers for a correct selection. A post-hoc pairwise t-test showed significant differences in hovers between both landmark conditions and No-Landmarks (both $p < 0.01$) for the learning blocks.
310
+
311
+ #### 6.1.3 S2 Learning Blocks - Errors
312
+
313
+ As participants could hover over elements in the scatterplot until they found the correct item, errors were low overall in this study, with an average of 0.69 errors per trial across all conditions and blocks. Errors were measured as the number of incorrect clicks before choosing a correct target. For the learning blocks, a 3x3x7 RM-ANOVA (Condition x Block x Target) showed main effects of Condition ($F_{2,170} = 4.13$, $p = 0.01$, $\eta^2 = 0.003$) and Block ($F_{3,255} = 4.51$, $p = 0.003$, $\eta^2 = 0.006$) on errors, but no interactions between the factors. A post-hoc pairwise t-test showed significant differences between Landmarks-Changing and No-Landmarks (p = 0.01), with participants making fewer errors in the Landmarks-Changing condition overall (0.50 errors per trial) compared to 2.37 errors per trial in the No-Landmarks condition.
314
+
315
+ ![01963e5b-408a-73d7-a8ef-8dee81eeec96_9_924_152_715_594_0.jpg](images/01963e5b-408a-73d7-a8ef-8dee81eeec96_9_924_152_715_594_0.jpg)
316
+
317
+ Figure 14: Scatterplot mean hovers (± s.e.) across learning blocks (1-3). Block 4 shows the results of removing or changing emphasized objects.
318
+
319
+ #### 6.1.4 S2 Learning Blocks - Target-by-Target Analysis
320
+
321
+ As the ANOVA results showed a main effect of Target on search time and hover counts, we looked into the results of each specific target. Although there is little difference among most targets, targets near the centre (such as Jordan and Montenegro) were the hardest to find (see Figures 15 and 16). Nicaragua (which was both a target and highlighted in the landmark conditions) was substantially harder to find in the No-Landmarks condition than the rest of the targets (requiring 42.5 hovers and 60900ms), but was only of average difficulty in the landmark conditions (19.85 hovers and 31900ms). In all conditions, France, located near the top right corner (between landmarks in the landmark conditions), was the easiest to find (13364ms and 6.9 hovers).
322
+
323
+ ### 6.2 S2 Effects of Change/Removal of Landmarks: Search Time, Hovers, and Errors from Block 3 to 4
324
+
325
+ #### 6.2.1 S2 Change/Removal - Search Times
326
+
327
+ To investigate the effects of changing or removing landmarks in the scatterplot, we carried out an analysis using only block 3 (the block before the removal/change) and block 4 (the block after the removal/change). The 3x2 RM-ANOVA (Condition x Block) did not find an interaction between the two factors (p = 0.054) for search time. There was also no interaction between Condition x Target (p = 0.54).
328
+
329
+ Search times continued to decrease in Landmarks-Removed from 16072ms in Block 3 to 10788ms in Block 4, and in Landmarks-Changing from 14454ms to 13385ms. These results were similar to the No-Landmarks condition: from 19716ms in Block 3 to 11401ms in Block 4. However, this improvement varied by target, and some targets actually decreased in performance. Nicaragua, which was both a landmark and a target, saw search times go from 18384ms to 22522ms in the Landmarks-Changing condition. We saw a similar (although smaller) effect with France and Montenegro.
330
+
331
+ ![01963e5b-408a-73d7-a8ef-8dee81eeec96_10_183_153_1423_1165_0.jpg](images/01963e5b-408a-73d7-a8ef-8dee81eeec96_10_183_153_1423_1165_0.jpg)
332
+
333
+ Figure 15: Mean search time (± s.e.) by Target and Block in Scatterplot Study. Target/landmark locations (and the change in Landmarks) are included in the bottom right corner.
334
+
335
+ #### 6.2.2 S2 Change/Removal - Hovers
336
+
337
+ We did a similar analysis investigating the change or removal of landmarks for hovers. Similar to search times, we saw an improvement in the hovers required to find a target for all conditions, going from 9.6 hovers to 7.8 for Landmarks-Changing, 10.79 to 6.93 in Landmarks-Removed, and 16.5 to 7.97 for No-Landmarks. The 3x2 RM-ANOVA (Condition x Block) found an interaction between Condition x Block ($F_{2,172} = 3.73$, $p = 0.002$, $\eta^2 = 0.006$) for hovers.
338
+
339
+ Similar to search times, hover counts continued to decrease from the 3rd to the 4th block, but certain targets were affected negatively. We saw the same effect for Nicaragua (going from 9 hovers in Block 3 to 13 when landmarks were changed, but this effect did not happen when the landmark was removed). Conversely, Moldova was negatively affected by the removal of landmarks (from 8 hovers in Block 3 to 10 in Block 4), but not by changing the landmarks (it continued to improve, to just 3.14 hovers in the final block).
340
+
341
+ ### 6.3 S2 Subjective Measures
342
+
343
+ #### 6.3.1 S2 Perceived Change in Difficulty
344
+
345
+ For each specific target, we also asked participants in the landmark conditions to rate on a 1-7 scale how much more difficult it was to find the target in block 4. As shown in Fig 17, while participants did report that the change/removal of landmarks made the task more difficult, the change affected most targets equally. Overall, France (located in the top right corner between two red dots) was the least affected by the change. One-way ANOVAs on each target's ratings using the Aligned Rank Transform [62], with Condition as the factor, found no differences between the conditions.
346
+
347
+ #### 6.3.2 S2 Perceived Effort
348
+
349
+ Participants' perceived effort in relation to changing or removing the landmarks was again recorded using the NASA-TLX questionnaire [25]. For the No-Landmarks condition, perceived effort relates to finding the target in the final block. Results are summarized in Fig 18. We used an Aligned Rank Transform on the aggregated responses to perform a one-way ANOVA on each of the TLX questions using Condition as a factor. The ANOVA found no significant differences between the conditions on any of the TLX measures (all $p > 0.05$).
350
+
351
+ ![01963e5b-408a-73d7-a8ef-8dee81eeec96_11_181_154_1425_1173_0.jpg](images/01963e5b-408a-73d7-a8ef-8dee81eeec96_11_181_154_1425_1173_0.jpg)
352
+
353
+ Figure 16: Hovers (± s.e.) by Target and Block in Scatterplot Study. Target/landmark locations (and the change in Landmarks) are included in the bottom right corner.
354
+
355
+ ![01963e5b-408a-73d7-a8ef-8dee81eeec96_11_155_1451_743_376_0.jpg](images/01963e5b-408a-73d7-a8ef-8dee81eeec96_11_155_1451_743_376_0.jpg)
356
+
357
+ Figure 17: S2 Perceived Change in Difficulty in Block 4
358
+
359
+ #### 6.3.3 S2 Participant Comments
360
+
361
+ At the end of this study, we also asked participants to explain their overall process of finding targets and whether they employed any specific strategy throughout the task. Regarding the scatterplot configuration and the use of a real dataset, P8 stated "Yes, some targets [were easier as they] were located near a corner, or distinct cluster"; P12 commented, "some [targets] were located next to same continent countries or countries near by"; and P44 mentioned, "I tried to remember some of the countries in a certain area of dots whose names I am familiar with." Other participants mentioned the use of landmarks: P32 stated, "[targets were easier] only when they were within the red circles" and P90 mentioned, "Some [targets] were close to the edges and red circles." Participants in the No-Landmarks condition more commonly stated using their personal experience to help with the task, such as P2, who mentioned "I noticed countries that are close together on a map were relatively close together on this chart", and P79: "I found middle east countries, mostly all together, and European and Asian countries were similarly grouped, and from there I just had to try to build a memory."
362
+
363
+ ![01963e5b-408a-73d7-a8ef-8dee81eeec96_11_929_1451_747_376_0.jpg](images/01963e5b-408a-73d7-a8ef-8dee81eeec96_11_929_1451_747_376_0.jpg)
364
+
365
+ Figure 18: S2 Mean NASA Task Load Index scores, by condition
366
+
367
+ ## 7 DISCUSSION
368
+
369
+ Our studies investigated whether accidental landmarks could help users find and re-find items in a visualization, and whether they impaired performance when taken away or changed. The studies provide several main findings:
370
+
371
+ - In Study 1, performance in the learning blocks (in terms of search time and hovers) was no better for the accidental-landmark conditions (in fact, the No-Landmarks condition was best), but in the final block, performance was impaired when landmarks were removed or changed;
372
+
373
+ - In Study 2, search time and hovers in the learning blocks were lower for the accidental-landmark conditions, but there was no significant detriment in the final block when the landmarks were removed or changed;
374
+
375
+ - In both studies, participants in the landmarks conditions reported that finding targets in the final block was more difficult than in the previous blocks;
376
+
377
+ - In both studies, participant comments suggested that people were using the highlight colours to assist them in finding the targets, as well as structural landmarks such as corners, edges, clusters, and white space;
378
+
379
+ - In both studies, participants reported no major differences between the three conditions in terms of overall effort.
380
+
381
+ In the following sections, we provide explanations for these results, discuss how our findings can generalize to real-world visualizations, and outline limitations of the study and opportunities to extend the research.
382
+
383
+ ### 7.1 Explanation of Results
384
+
385
+ #### 7.1.1 The Effects of Landmarks in Learning Blocks
386
+
387
+ Our two studies showed contrasting results about the usefulness of accidental landmarks in helping participants learn item locations during the learning blocks. The only change between the studies was in the type of visualization used, and the differences between the grid and scatterplot can help to explain the contrasting study results. First, in the simple grid used with Study 1, the visual search task was easier than with the more complex scatterplot of Study 2. Study 1 participants could carry out a row-by-row or column-by-column search pattern to look for the target, which may have made the coloured highlights less valuable. In contrast, the irregular and more complex organization of items in the Study 2 scatterplot did not allow users to carry out a methodical search strategy, and when users are carrying out a less-organized search, the anchors provided by the highlighted landmarks may have been more valuable. For example, a general problem in searching a complex dataset is that users repeat some areas and miss others; the reference frame provided by the highlights may have assisted users in organizing their search and reducing repetition.
388
+
389
+ Second, the attentional draw of the emphasized points may have affected the two visualizations differently. It is known that bottom-up attention will be guided to areas of visual emphasis (e.g., our studies showed the highlights as red circles among white circles) [16,64]. Visual attention will be in part guided to objects that differ from others as a first step in the multi-step process of attention, which is then guided by the task and previous selections. In the simpler grid visualization, the attentional draw of the emphasized points may have distracted participants from a regularized search strategy, reducing the efficiency of their visual search. Although this could also have occurred in the more-complex scatterplot of Study 2, any negative effects may have been outweighed by the organizational benefit provided by the reference frame of the coloured landmarks.
390
+
391
+ These possibilities should be explored further in additional studies. In addition, we note that our between-participants design leads to the potential for inherent group differences that may account for some of the overall difference between conditions during training. It was not possible to completely remove these group differences (e.g., we could not use performance on the first block as a covariate, because the experience of visual search was substantially different for the landmarks and no-landmarks conditions); future studies can help to investigate the initial differences between the conditions.
392
+
393
+ #### 7.1.2 The Effects of Removing / Changing Landmarks
394
+
395
+ Our studies also showed contrasting results in terms of whether changing or removing landmarks impaired performance: Study 1 saw a significant reduction in performance when landmarks were taken away or changed, whereas Study 2 did not (there were indications of a performance reduction for some targets, but not overall).
396
+
397
+ Again, differences between the grid and scatterplot visualizations can help to explain these contrasting results. In Study 1, the relative lack of structural or layout-based landmarks in the grid means that the coloured landmarks were more likely to be seen as a primary reference frame for participants in the landmarks conditions (particularly because people were not forewarned that the highlights would be changed / removed). For example, Study 1 saw strong performance impairments for both targets that were in the interior of the grid (near to a coloured highlight but not near to a corner or an edge).
398
+
399
+ The scatterplot used in Study 2 had many more structural and layout-based landmarks in addition to the coloured highlights (e.g., clusters of points and areas of white space in addition to the edges and corners of the datapoints). This means that participants in Study 2 had multiple frames of reference available to them, and they likely made use of both structural and colour-based landmarks when learning item locations. Previous research suggests that people will use whatever reference frame makes their task easiest, but in Study 2, neither reference frame was dominant. There were eight highlighted items in the Study 2 scatterplot, meaning that the coloured items did not simplify the task so much that it was trivially easy (e.g., the task was much more difficult than if there had been two targets that were beside two coloured landmarks). The overall difficulty means that participants were likely to make use of the structural landmarks in addition to the highlighting - and since structural landmarks were unchanged in the final block, people may have been able to rely on this other reference frame to maintain their performance. Limited evidence for this hypothesis can be seen in the performance of the Nicaragua target - because this target was also highlighted in the training blocks, it was easy to find using only colour, which may have led participants to rely more on colour rather than structural landmarks such as nearby clusters.
400
+
401
+ Overall, our results align with the guidance and effort hypotheses (i.e., that providing guidance and reducing effort in training will lead to over-reliance on the guide). When colour highlighting was the only reference frame available, or when it made the retrieval task easier, participants relied on it more and had larger reductions in performance when the highlighting was removed or changed. The presence of other reference frames (e.g., structural and layout-based landmarks) appeared to mitigate the problems caused by removing the colour highlights - but it is worth noting that in Study 2, participants in the landmarks conditions subjectively rated the task as substantially more difficult when the landmarks were removed or changed, even though they were able to make use of other knowledge to preserve performance.
402
+
403
+ ### 7.2 Generalizing the Findings to other Contexts
404
+
405
+ Our study examined the effects of accidental landmarks in two visualization settings, a simple grid and a more-complex scatterplot, and there are several underlying commonalities between our experiments and real-world scenarios that argue for the generalizability of our findings.
406
+
407
+ First, our learning task - repeatedly visiting target locations - is common in many real-world visualization tasks. A typical exploration of a dataset involves investigating interesting data points or patterns to identify relationships between them. Additionally, it is common for visualization designers to use emphasis to encourage exploration (e.g., by highlighting regions of interest to signify importance or to alert viewers to missing links). Similarly, in narrative visualization, when known aspects of a data set are presented to the viewers [8, 49], different data points are explained and presented to viewers, and designers may alter an element's size or colour to improve its legibility relative to other areas of a visualization, potentially making it more memorable.
408
+
409
+ Second, our manipulation of the landmarks - changing the emphasized set or removing emphasis altogether - is also something that is likely to occur in many real-world visualizations. Emphasizing or highlighting one particular subset of the displayed data is a common action as viewers explore different aspects of a visualization or review different findings. As the exploration or story-telling process continues, it is common for users to focus on a different subset of the data. For example, in Study 2, a normal exploration process could involve highlighting countries in different continents. Once an analysis or exploration session is finished, unless the visualization system has a history mechanism built in, there will be no emphasized points upon returning to a visualization (similar to our Landmarks-Removed condition).
410
+
411
+ Third, our participants were MTurk workers rather than users who had naturally arrived at a visualization task, and although there are likely to be differences between these populations in terms of intrinsic motivation and interest in the dataset, there are also many similarities. In particular, there is a wide range of visualization users who could be affected by accidental landmarks, and the demographics of our MTurk sample covered a variety of prior experience with visualizations. The characteristics of an MTurk study help increase ecological validity compared to more typical lab studies: we had a larger sample than typical laboratory studies (180 total participants) with a much more diverse background than what is generally seen in HCI experiments; as such, our findings can be more representative of a generalized user base.
412
+
413
+ Fourth, our use of emphasis in the studies reasonably represents the type of accidental landmarks that may be available in a visualization system - e.g., highlight-based filtering and brushing capabilities are now common in many tools, such as Tableau as shown in Figure 1 - and many users will take advantage of these capabilities.
414
+
415
+ ### 7.3 Limitations, Extensions, and Future Work
416
+
417
+ There are limitations to our evaluation - many of which were necessary to test the use of emphasis as landmarks in controlled environments - and these limitations provide opportunities to expand our work in future studies.
418
+
419
+ The grid-style visualization, the underlying dataset (target/distractor names), and the target/landmark locations were chosen for the study in order to control potential external factors such as cluster-based layout cues that provide visual indications about location. As our grid with circles most resembles a scatterplot, we then extended our initial results to evaluate the effects of emphasis on spatial memory using a scatterplot. However, we used a single dataset behind the scatterplot, and we note that participants may have formed relationships within the scatterplot and dataset (familiar names or clusters of data). This can be counteracted by evaluating multiple distinct datasets in follow-up work.
420
+
421
+ Second, our future work involves evaluating the use of landmarks in a greater variety of chart types, including bar charts and more complex, interactive visualizations (e.g., basic charts in a small-multiples configuration). Involving multiple charts may result in the benefits or drawbacks of landmarks being amplified, as there may be more structural landmarks occurring on the outlines of multiple charts, but it may be harder to find items within each chart.
422
+
423
+ Third, we explored the effects of accidental landmarks with only one visual variable (colour), but there are many other emphasis effects that could be tested, including size, outline, transparency, texture, or shape. Previous research has shown that different visual variables attract attention and affect learning at different levels [9, 22, 38], and designers must decide on a trade-off between noticeable highlights and the potential unintended distraction in learning.
424
+
425
+ Fourth, our study focused on immediate learning performance, short-term memory, and spatial awareness through our revisitation task; we did not test longer-term retention after hours or days (which would be common in visualization, as analysts can work with datasets over extended periods of weeks and months). Our approach was necessary to establish an initial baseline understanding of how emphasis affects the initial spatial learning process, but in future studies we will extend the work to look at longer retention. Furthermore, development of real expertise with a visualization system often requires much longer training durations than those provided by our studies. In our future work, long-term studies will allow us to also examine how longer training periods and varying gaps of hours or days can lead to better spatial development and retention.
426
+
427
+ In addition, there are several research directions that could explore ways of better supporting users even when accidental landmarks change or disappear. Our results and participant comments show that users do use and rely on highlights to revisit previous targets, particularly when the landmarks make the retrieval task easier. Even though designers cannot control the application of filters and highlights when users explore a visualization, there may be ways of avoiding the problems that can arise from changes to emphasis and highlighting. One possibility is to show traces of previous highlights (i.e., "ghost echoes" or "phosphor effects"); these marks would provide assistance to users who are relying on accidental landmarks, by providing at least a trace of the landmarks' previous locations. These traces could slowly fade away after a period of time, which could also encourage users to find other strategies for remembering the items.
428
+
429
+ Further study is also needed on the general problem of supporting revisitation, and whether other mechanisms that could be used to improve re-finding can also act as accidental landmarks. For example, "visit wear" techniques can visually mark the items that people visit in a visualization, making revisitation much easier. An example of this technique is the Footprints scrollbar, which records user locations with marks in a scrollbar if the user pauses for more than one second [1]. This system also analysed usage data to improve and automate the state saving algorithm such that the most relevant locations would be saved without cluttering the scrollbar. While visualizations can range from very simple representations to very complex multi-dimensional parameter spaces, a combination of methods such as visit wear and state saving mechanisms can ease revisiting objects while exploring visualizations. Further work is needed to understand whether and how annotations such as visit-wear marks function as landmarks, and whether their obvious value in supporting revisitation can lead to larger problems of over-reliance.
430
+
431
+ ## 8 CONCLUSION
432
+
433
+ Many visualizations display large datasets in which it can be difficult for users to find (and re-find) specific items. Interactive systems that provide highlighting tools such as filtering or brushing emphasize certain data points - these can become "accidental landmarks," visual anchors that help users remember locations that are near the emphasized points. Landmarks are known to be useful (by aiding revisitation), but previous research on the guidance hypothesis suggests that if users become dependent on them, removing or changing the highlighting could cause problems. We provide designers with new information about these issues: we carried out two crowd-sourced studies, first in a basic grid configuration and then in a traditional scatterplot, in which people were asked to learn a set of item locations with or without emphasized points. We then removed or changed the highlighting to see if performance suffered. Results show that accidental landmarks did not improve performance during training in a basic grid, but did so for a scatterplot, and changing or removing emphasized data points affected users' ability to re-find targets - particularly those that were not near structural landmarks such as the corners of the visualization. Our work provides new knowledge about how visual features, emphasis and landmarks in visualizations can affect revisitation, and new understanding for designers who want to support spatial awareness and learning in visualizations.
434
+
435
+ ## REFERENCES
436
+
437
+ [1] J. Alexander, A. Cockburn, S. Fitchett, C. Gutwin, and S. Greenberg. Revisiting read wear: analysis, design, and evaluation of a footprints scrollbar. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1665-1674, 2009.
438
+
439
+ [2] J. R. Anderson. Learning and memory: An integrated approach. John Wiley & Sons Inc, 2000.
440
+
441
+ [3] J. Andrade and P. Meudell. Is spatial information encoded automatically in memory? The Quarterly Journal of Experimental Psychology, 46(2):365-375, 1993.
442
+
443
+ [4] E. Awh, A. V. Belopolsky, and J. Theeuwes. Top-down versus bottom-up attentional control: A failed theoretical dichotomy. Trends in Cognitive Sciences, 16(8):437-443, 2012. doi: 10.1016/j.tics.2012.06.010
444
+
445
+ [5] A. Baddeley. Essentials of human memory (classic edition). Psychology Press, 2013.
446
+
447
+ [6] A. D. Baddeley. Human memory: Theory and practice. Psychology Press, 1997.
448
+
449
+ [7] M. Bostock, V. Ogievetsky, and J. Heer. D³ data-driven documents. IEEE Transactions on Visualization and Computer Graphics, 17(12):2301-2309, 2011.
450
+
451
+ [8] J. D. Bradbury and R. E. Guadagno. Documentary narrative visualization: Features and modes of documentary film in narrative visualization. Information Visualization, 19(4):339-352, 2020. doi: 10.1177/1473871620925071
452
+
453
+ [9] F. Chajadi, M. S. Uddin, and C. Gutwin. Effects of visual distinctiveness on learning and retrieval in icon toolbars. In Proceedings of the 46th Graphics Interface Conference, GI 2020, p. 11. Toronto, ON, Canada, 2020.
454
+
455
+ [10] A. Clauset, C. R. Shalizi, and M. E. Newman. Power-law distributions in empirical data. SIAM review, 51(4):661-703, 2009.
456
+
457
+ [11] A. Cockburn, C. Gutwin, and J. Alexander. Faster document navigation with space-filling thumbnails. In Proceedings of the SIGCHI conference on Human Factors in computing systems, pp. 1-10, 2006.
458
+
459
+ [12] A. Cockburn, C. Gutwin, J. Scarr, and S. Malacria. Supporting novice to expert transitions in user interfaces. ACM Computing Surveys (CSUR), 47(2):1-36, 2014.
460
+
461
+ [13] F. I. Craik and R. S. Lockhart. Levels of processing: A framework for memory research. Journal of verbal learning and verbal behavior, 11(6):671-684, 1972.
462
+
463
+ [14] M. Dechant, S. Poeller, C. Johanson, K. Wiley, and R. L. Mandryk. In-game and out-of-game social anxiety influences player motivations, activities, and experiences in mmorpgs. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, p. 1-14. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3313831.3376734
464
+
465
+ [15] J. Deese and R. A. Kaufman. Serial effects in recall of unorganized and sequentially organized verbal material. Journal of experimental psychology, 54(3):180, 1957.
466
+
467
+ [16] J. Duncan and G. W. Humphreys. Visual search and stimulus similarity. Psychological review, 96(3):433, 1989.
468
+
469
+ [17] B. D. Ehret. Learning where to look: Location learning in graphical user interfaces. In Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 211-218, 2002.
470
+
471
+ [18] P. M. Fitts and M. I. Posner. Human performance. 1967.
472
+
473
+ [19] B. Gao, B. Kim, J.-I. Kim, and H. Kim. Amphitheater layout with egocentric distance-based item sizing and landmarks for browsing in virtual reality. International Journal of Human-Computer Interaction, 35(10):831-845, 2019.
474
+
475
+ [20] V. Gaur, M. S. Uddin, and C. Gutwin. Multiplexing spatial memory: Increasing the capacity of FastTap menus with multiple tabs. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '18. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3229434.3229482
476
+
477
+ [21] C. Gutwin and A. Cockburn. Improving list revisitation with listmaps. In Proceedings of the working conference on Advanced visual interfaces, pp. 396-403, 2006.
478
+
479
+ [22] C. Gutwin, A. Cockburn, and A. Coveney. Peripheral popout: The influence of visual angle and stimulus intensity on popout effects. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 208-219. ACM, 2017.
480
+
481
+ [23] K. W. Hall, C. Perin, P. G. Kusalik, C. Gutwin, and S. Carpendale. Formalizing emphasis in information visualization. Comput. Graph. Forum, 35(3):717-737, June 2016. doi: 10.1111/cgf.12936
482
+
483
+ [24] D. A. Hardwick, C. W. McIntyre, and H. L. Pick Jr. The content and manipulation of cognitive maps in children and adults. Monographs of the society for research in child development, pp. 1-55, 1976.
484
+
485
+ [25] S. G. Hart and L. E. Staveland. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology, vol. 52, pp. 139-183. Elsevier, 1988.
486
+
487
+ [26] L. Hasher and R. T. Zacks. Automatic and effortful processes in memory. Journal of experimental psychology: General, 108(3):356, 1979.
488
+
489
+ [27] J. Heer and M. Bostock. Crowdsourcing graphical perception: Using Mechanical Turk to assess visualization design. In Proceedings of the 28th Annual CHI Conference on Human Factors in Computing Systems, pp. 203-212, 2010. doi: 10.1145/1753326.1753357
490
+
491
+ [28] I. Herman, G. Melançon, and M. S. Marshall. Graph visualization and navigation in information visualization: A survey. IEEE Transactions on visualization and computer graphics, 6(1):24-43, 2000.
492
+
493
+ [29] S. Ishihara. Test for colour-blindness. Kanehara Tokyo, Japan, 1987.
494
+
495
+ [30] Y. Jansen, J. Schjerlund, and K. Hornbæk. Effects of locomotion and visual overview on spatial memory when interacting with wall displays. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2019.
496
+
497
+ [31] C. Johanson. bride-of-frankensystem 1.1, Apr. 2020. doi: 10.5281/zenodo.3738761
498
+
499
+ [32] C. Johanson, C. Gutwin, J. T. Bowey, and R. L. Mandryk. Press pause when you play: Comparing spaced practice intervals for skill development in games. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play, CHI PLAY '19, p. 169-184. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3311350.3347195
500
+
501
+ [33] B. Lafreniere, C. Gutwin, A. Cockburn, and T. Grossman. Faster command selection on touchscreen watches. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 4663-4674, 2016.
504
+
505
+ [34] K. C. Lam, C. Gutwin, M. Klarkowski, and A. Cockburn. The effects of system interpretation errors on learning new input mechanisms. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI '21. Association for Computing Machinery, New York, NY, USA, 2021. doi: 10.1145/3411764.3445366
508
+
509
+ [35] C. A. Lawton and J. Kallai. Gender differences in wayfinding strategies and anxiety about wayfinding: A cross-cultural comparison. Sex Roles, 47(9):389-401, 2002.
510
+
511
+ [36] T. R. Levine and C. R. Hullett. Eta squared, partial eta squared, and misreporting of effect size in communication research. Human Communication Research, 28(4):612-625, 2002.
512
+
513
+ [37] Y. Liu and J. Heer. Somewhere over the rainbow: An empirical assessment of quantitative colormaps. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2018.
514
+
515
+ [38] A. Mairena, M. Dechant, C. Gutwin, and A. Cockburn. A baseline study of emphasis effects in information visualization. In Proceedings of Graphics Interface 2020, GI 2020, pp. 327 - 339. Canadian Human-Computer Communications Society / Societe canadienne du dialogue humain-machine, 2020. doi: 10.20380/GI2020.33
516
+
517
+ [39] E. S. Mollashahi, M. S. Uddin, and C. Gutwin. Improving revisitation in long documents with two-level artificial-landmark scrollbars. In Proceedings of the 2018 International Conference on Advanced Visual Interfaces, AVI '18. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3206505.3206554
518
+
519
+ [40] W. Mou and T. P. McNamara. Intrinsic frames of reference in spatial memory. Journal of experimental psychology: learning, memory, and cognition, 28(1):162, 2002.
520
+
521
+ [41] A. Postma and E. H. De Haan. What was where? Memory for object locations. The Quarterly Journal of Experimental Psychology Section A, 49(1):178-199, 1996.
522
+
523
+ [42] G. Robertson, M. Czerwinski, K. Larson, D. C. Robbins, D. Thiel, and M. Van Dantzich. Data mountain: using spatial memory for document management. In Proceedings of the 11th annual ACM symposium on User interface software and technology, pp. 153-162, 1998.
524
+
525
+ [43] H. Rosling. Data - gapminder.org. http://www.gapminder.org/data/, 2022.
526
+
527
+ [44] A. W. Salmoni, R. A. Schmidt, and C. B. Walter. Knowledge of results and motor learning: a review and critical reappraisal. Psychological bulletin, 95(3):355, 1984.
528
+
529
+ [45] J. Scarr, A. Cockburn, C. Gutwin, and A. Bunt. Improving command selection with commandmaps. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 257-266, 2012.
530
+
531
+ [46] J. Scarr, A. Cockburn, C. Gutwin, and S. Malacria. Testing the robustness and performance of spatially consistent interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 3139-3148, 2013.
532
+
533
+ [47] R. A. Schmidt, D. E. Young, S. Swinnen, and D. C. Shapiro. Summary knowledge of results for skill acquisition: Support for the guidance hypothesis. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15(2):352, 1989.
534
+
535
+ [48] K. Schramm, C. Gutwin, and A. Cockburn. Supporting transitions to expertise in hidden toolbars. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, p. 4687-4698. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2858036.2858412
536
+
537
+ [49] E. Segel and J. Heer. Narrative visualization: Telling stories with data. IEEE Transactions on Visualization and Computer Graphics, 16(6):1139-1148, 2010.
538
+
539
+ [50] S. Smart and D. A. Szafir. Measuring the Separability of Shape, Size, and Color in Scatterplots. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI '19, pp. 1-14, 2019. doi: 10.1145/3290605.3300899
540
+
541
+ [51] S. Smart, K. Wu, and D. A. Szafir. Color crafting: Automating the construction of designer quality color ramps. IEEE Transactions on Visualization and Computer Graphics, 26(1):1215-1225, 2020. doi: 10.1109/TVCG.2019.2934284
542
+
543
+ [52] D. A. Szafir, M. Stone, and M. Gleicher. Adapting color difference for design. In Color and Imaging Conference, vol. 2014, pp. 228-233. Society for Imaging Science and Technology, 2014.
544
+
545
+ [53] M. Tlauka and P. N. Wilson. The effect of landmarks on route-learning in a computer-simulated environment. Journal of Environmental Psychology, 14(4):305-313, 1994.
546
+
547
+ [54] M. S. Uddin and C. Gutwin. Rapid command selection on multi-touch tablets with single-handed handmark menus. In Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces, pp. 205-214, 2016.
548
+
549
+ [55] M. S. Uddin and C. Gutwin. The image of the interface: How people use landmarks to develop spatial memory of commands in graphical interfaces. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1-17, 2021.
550
+
551
+ [56] M. S. Uddin, C. Gutwin, and A. Cockburn. The effects of artificial landmarks on learning and performance in spatial-memory interfaces. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 3843-3855, 2017.
552
+
553
+ [57] M. S. Uddin, C. Gutwin, and A. Goguey. Using artificial landmarks to improve revisitation performance and spatial learning in linear control widgets. In Proceedings of the 5th Symposium on Spatial User Interaction, SUI '17, p. 48-57. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3131277.3132184
554
+
555
+ [58] M. S. Uddin, C. Gutwin, and B. Lafreniere. Handmark menus: Rapid command selection and large command sets on multi-touch displays. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 5836-5848, 2016.
556
+
557
+ [59] R. Veras and C. Collins. Saliency deficit and motion outlier detection in animated scatterplots. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, p. 1-12. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290605.3300771
558
+
559
+ [60] M. Waldner, A. Karimov, and E. Gröller. Exploring visual prominence of multi-channel highlighting in visualizations. In Proceedings of the 33rd Spring Conference on Computer Graphics, SCCG '17, pp. 8:1-8:10. ACM, New York, NY, USA, 2017. doi: 10.1145/3154353.3154369
560
+
561
+ [61] C. Williamson and B. Shneiderman. The dynamic homefinder: Evaluating dynamic queries in a real-estate information exploration system. In Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '92, p. 338-346. Association for Computing Machinery, New York, NY, USA, 1992. doi: 10.1145/133160.133216
562
+
563
+ [62] J. O. Wobbrock, L. Findlater, D. Gergle, and J. J. Higgins. The aligned rank transform for nonparametric factorial analyses using only anova procedures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '11, pp. 143-146. ACM, New York, NY, USA, 2011. doi: 10.1145/1978942.1978963
564
+
565
+ [63] J. M. Wolfe. Guided search 4.0: A guided search model that does not require memory for rejected distractors. Journal of Vision, 1(3):349-349, 2001.
566
+
567
+ [64] J. M. Wolfe and T. S. Horowitz. What attributes guide the deployment of visual attention and how do they do it? Nature reviews neuroscience, 5(6):495, 2004.
568
+
569
+ [65] S. Zhai and P.-O. Kristensson. Shorthand writing on stylus keyboard. In Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 97-104, 2003.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/H3GlkWt46f9/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,433 @@
1
+ § ACCIDENTAL LANDMARKS: HOW SHOWING (AND REMOVING) EMPHASIS IN A 2D VISUALIZATION AFFECTED RETRIEVAL AND REVISITATION
2
+
3
+ Category: Research
4
+
5
+ § ABSTRACT
6
+
7
+ Many visualizations display large datasets in which it can be difficult for users to find (and re-find) specific items. In systems that provide highlighting tools (e.g., filtering or brushing), emphasized points can become "accidental landmarks" - visual anchors that help users remember locations that are near the emphasized points. Accidental landmarks could be useful (by aiding revisitation), but if users become dependent on them, removing or changing the highlighting could cause problems. We provide designers with information about these issues through two crowdsourced studies in which people learned a set of item locations (in visualizations with or without emphasized points); we then removed or changed the highlighting to see if performance suffered. In the first study, which used a simple grid of points, results showed that changing or removing emphasized points significantly impeded users' ability to re-find targets, but the highlighting did not improve performance during training. In the second study, which used a more complex scatterplot, we found that highlighting significantly improved performance during training, but that removing or changing the emphasis points only reduced re-finding performance for a few target types. Our work demonstrates that visualization designers need to consider how transient visual effects such as emphasis can affect spatial learning and revisitation, and provides new knowledge about how visual features can affect performance.
8
+
9
+ Index Terms: Human-centered computing-Visualization-Visualization techniques-; Human-centered computing-Visualization-Visualization design and evaluation methods
10
+
11
+ § 1 INTRODUCTION
12
+
13
+ A ubiquitous task in large visualizations is finding and re-finding specific items - to inspect values during exploration or compare results to look for insights [28]. Finding and re-finding can be difficult, however, when objects in visualizations are visually undifferentiated (e.g., dots in a scatterplot), and names or labels are only available through inspection (e.g., hovering over a dot); in many visualizations, finding items for the first time can involve laborious visual search. Once the user finds an item, the problem changes to one of revisitation - i.e., finding items that have already been visited. Revisitation can be much faster than visual search if the user can remember where the item was [39, 57]; however, the undifferentiated nature of data items in many visualizations provides little support for users' spatial memory.
14
+
15
+ One way to support the development of spatial memory - and thus support revisitation - is to include landmarks in the visual presentation. Landmarks are obvious visual features that are noticeably different from their surroundings, and that can provide a frame of reference in which users can remember nearby locations based on their relative position to the landmark. Structural elements such as corners can be strong landmarks [55], and previous research has also shown that adding artificial landmarks such as coloured blocks can provide valuable anchors for spatial learning when there are a large number of items in the dataset [56].
16
+
17
+ Information visualizations often add visual features such as colour to a set of items in the presentation (through actions such as highlighting a subset of the dataset) and can contain clusters of data that serve as spatial landmarks - but these features are almost never intended as landmarks. Instead, visual highlighting is typically the result of a user operation such as filtering or brushing: for example, the user might set a threshold filter on a third variable to emphasize datapoints in a scatterplot that are above that threshold (see Figure 1), or use dynamic queries [61] to hide/show items. We note that in some visualizations all datapoints are coloured or augmented based on an attribute variable, but here we consider representations that only provide a standard glyph for each datapoint.
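+
+ To make this concrete, the following is a minimal D3.js sketch of how filter-driven emphasis of this kind might be produced; the data fields, scales, and threshold are hypothetical illustrations, not taken from any specific tool:
+
+ ```javascript
+ // Hypothetical sketch: emphasize datapoints whose third variable
+ // exceeds a user-chosen filter threshold. Assumes D3 v6+ is loaded
+ // and that xScale, yScale, and the points array are defined elsewhere.
+ const threshold = 0.75;
+
+ d3.select("#plot").selectAll("circle")
+   .data(points)                    // points: [{x, y, value, name}, ...]
+   .join("circle")
+   .attr("cx", d => xScale(d.x))
+   .attr("cy", d => yScale(d.y))
+   .attr("r", 5)
+   // Points above the threshold are coloured red; the rest stay grey.
+   // These red points are the potential "accidental landmarks".
+   .attr("fill", d => d.value > threshold ? "red" : "#999");
+ ```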
18
+
19
+ < g r a p h i c s >
20
+
21
+ Figure 1: Top: Screen capture of the Tableau visualization tool. Users highlight data points through the "Marks Card" that allows specification of highlights and colours during exploratory data analysis. Bottom: Screen capture of a document explorer tool, highlighting document positions based on filters (from https://bit.ly/3BQZue2).
22
+
23
+ When a visualization has a subset of datapoints that are visually emphasized, the highlighted points can become "accidental landmarks" - items that have the visual characteristics of landmarks, even though this is unintended by the designer. When users find and re-find items in a visualization that has some items highlighted, they may start to use the accidental landmarks as anchors for finding nearby items (e.g., "the item I need to remember in the scatterplot is just below the red item").
24
+
25
+ These accidental landmarks can be useful by providing anchors for revisitation, but they could also cause problems if users become dependent on them, because the highlights could disappear or change (e.g., when a user selects a different subset to emphasize). If a user comes to rely on the visual landmarks, they will have difficulty when they eventually need to revisit data points without this aid, because the aid is missing or different. This phenomenon of users becoming dependent on external aid or feedback is known as the guidance hypothesis [44, 47], which suggests that a reduction in effort provided by guidance during training will lead to poorer retention [17]. However, research contrasting with the guidance hypothesis suggests that spatial knowledge can also be gained through incidental learning [3, 26], which occurs by simply interacting with an environment in a spatial fashion.
26
+
27
+ These competing hypotheses mean that it is difficult to predict what will happen to spatial learning and revisitation when accidental landmarks occur in visualizations. To determine both the potential benefits and risks of visual emphasis that could be used as landmarks, we carried out two between-participants crowdsourced studies (N = 180) to test the effects of highlighting points in scatterplots, and then removing or changing the emphasized items.
28
+
29
+ In our first study, we asked participants to find and re-find several targets in a simple grid visualization that did not provide strong structural or layout landmarks (other than corners and edges). We tested three conditions: a baseline version with no emphasis, a version with emphasis that was removed after training, and a version with emphasis that was changed to a different subset after training. We measured people's performance during three training blocks where any emphasis effects were present, and in a fourth block where the emphasis was removed or changed.
30
+
31
+ Results of the first study showed that accidental landmarks did not improve search time or number of hovers during the three training blocks, but did have an effect on performance when removed or changed - in the fourth block, both search time and hovers increased substantially when compared to the no-landmarks condition. In addition, the results were stronger for some targets (e.g., for the target that was emphasized during training, there was a larger detriment to removing / changing the highlighting). Subjective results showed that participants felt that finding targets was more difficult when the highlights were removed or changed.
32
+
33
+ Our second study tested the same experimental conditions, but in a more-complex scatterplot based on a real-world Gapminder dataset [43]; this visualization had substantially more internal structure that provided additional landmarks (such as clusters of points, edges, and areas of white space). Results of the second study showed that search time and hovers during the learning blocks were both lower with the accidental-landmarks conditions, but there was no significant decrease in performance when the highlighting was changed or removed. We attribute the change in results seen with Study 2 to the additional structural landmarks that were available in the more-complex scatterplot.
34
+
35
+ Our two studies provide new understanding of how 'accidental' visual features affect visual search, spatial learning, and revisitation in information visualizations. Our findings suggest that in visualizations without extensive structural or layout-based landmarks, participants may become overly dependent on visual emphasis that arises from filtering or brushing. In more complex visualizations, the value of accidental landmarks increases during early use, but the additional landmarks provided by structure and layout appear to mitigate any over-reliance on the highlighting. Our work makes three main contributions. First, we identify a phenomenon - emphasis that provides accidental landmarks in visualization - that has not been considered previously. Second, we provide empirical evidence that emphasis-based landmarks can provide a benefit for visual search (depending on the visualization), but can also cause problems when they are taken away or changed. Third, we provide new knowledge that can guide designers' choices about what emphasis and potential aids to use to support spatial awareness, and possible design improvements for emphasis effects that address some of the issues seen in our study.
36
+
37
+ § 2 RELATED WORK
38
+
39
+ § 2.1 LEARNING AND RETRIEVAL
40
+
41
+ A wide variety of research has been carried out to investigate how humans acquire knowledge and skills. Prior work in psychology has extensively studied human memory [5, 6, 13, 15], how the skills necessary for learning and retrieval are developed [2, 41], the development of learning abilities in children [24], and how sex differences may affect navigation and spatial orientation [35].
42
+
43
+ Anderson [2] and Fitts et al. [18] suggest that skill development occurs in three main stages: cognitive, associative and autonomous. When applied to 2D visual displays, users in the cognitive phase learn items through slow visual search and visual inspection (e.g., finding icons in a toolbar or files in a file browser [17]). In the associative stage, users understand the general contents of the dataset and begin to remember items and locations, allowing faster revisitation for some items. In this stage, however, users still typically perform visual search within a local area after reaching the vicinity of an object of interest. Finally, users in the autonomous stage have memorized item locations, and can recall and revisit an object's location without needing any visual search.
44
+
45
+ People learn object locations in 2D visualizations as a side effect of interacting with them, and the rate at which locations are learned follows a power law of practice [10]. In previous HCI research, several interfaces have shown the utility of spatial memory for improving performance. For example, Robertson et al.'s initial Data Mountain study, and a subsequent study by Jansen et al. that evaluated Data Mountain on a wall display, showed how the spatial arrangement of thumbnails in a spatial environment allows faster retrieval times than standard bookmarking systems [30, 42]. Similar benefits have also been found in tasks such as list revisitation [21] and command selection in interfaces [54, 58].
46
+
47
+ § 2.2 SUPPORTING SPATIAL LEARNING
48
+
49
+ Knowledge of the location of an item (be it in a natural environment or digital space) is often relative to other objects or items. People learn, organize, and communicate spatial knowledge by reorganizing the spatial relations among items in an environment [40]. Mou et al. suggested that human memory systems use frames of reference to specify the remembered locations of objects [40]: for example, Scarr et al. stated that "explicit rectangular boundaries, such as the walls of a room or the edges of a table, can generate a frame of reference" and added that a grid-based item layout can also support spatial knowledge by creating an implicit axis of reference [12].
50
+
51
+ Previous work on supporting spatial learning has considered two main strategies: spatially stable layouts, and landmarks. Researchers have demonstrated the benefits of laying out interfaces in ways that are spatially stable [46, 56]: for example, Gutwin et al. and later work by Cockburn et al. showed that a stable layout of commands in an interface can improve recall efficiency compared to hierarchical ribbons or menus [11, 21]. Similarly, Scarr et al.'s CommandMap showed that a spatially stable icon design on a desktop interface improved the recall of icons in real tasks [45, 46]. The benefits of spatial stability have also been shown in other interfaces such as smartphones [65], tablets [20], smartwatches [33], and virtual environments [19].
52
+
53
+ Landmarks are a second strategy for improving navigation performance. Landmarks are easily identifiable objects with distinct spatial features (such as shape, colour, or semantic value [53]) that can provide a frame of reference for nearby objects. Similar to the benefits of landmarks in real life (e.g., using a prominent building when navigating a city), landmarks have shown potential in digital workspaces. Several types of landmark have been considered, such as the corners of a screen or the bezel on a device [20, 48], which can provide a strong reference for nearby objects. However, since these landmarks may not naturally occur in larger workspaces (e.g., there are no corners or edges in the middle of a display), researchers have also examined the use of hands [58] and the idea of adding artificial landmarks (e.g., a background picture, or simple coloured shapes) [56] to help users remember the locations of objects in the visual field.
54
+
55
+ § 2.3 EMPHASIS AND ATTENTION IN INFOVIS
56
+
57
+ The goal of emphasis is to manipulate the visual features of a chosen data element to make it visually prominent so that a viewer's bottom-up attention is directed to an element of interest [23]. Many theories have been developed over time to explain how emphasis can guide a viewer's attention. For example, similarity theory, developed by Duncan and Humphreys, shows that the efficacy of emphasis decreases with increased target/non-target similarity and with decreased similarity between the non-targets [16]. Similarly, the Guided Search theory by Wolfe describes a two-stage process for attention, first guided by visual salience (bottom-up attention), but adds that attention can be biased toward targets of interest (e.g., a user looking for a red circle) by weighting items of user interest: for example, assigning a higher weight to items with a red colour [63].
58
+
59
+ Another theory, the relational account of attention, follows from the premise that if users are given a specific task or a feature they are interested in (e.g., a user searching for a red circle), attention will be guided to the mark that differs in the given direction from the other marks (in this case, attention will be guided to the reddest circle among all circles displayed) [16, 60].
60
+
61
+ Similarly, a recently proposed model suggests three main processes for how attention is guided when viewing a visualization: current goals, selection history and physical salience (bottom-up attention) [4]. This model suggests that there is an inherent bias to prioritize items that have been previously selected, which may differ from current goals, and as such, selection history, goal-driven selection and visual salience are competing processes, affecting the effectiveness of emphasis to serve as landmarks.
62
+
63
+ Consistency is a fundamental guideline in HCI for supporting spatial awareness and memory/recall capabilities [17, 45, 46]. Landmarks are known to supplement the capabilities of an interface by providing anchors around which users can build better spatial awareness. In the absence of a consistent interface - such as an interactive visualization, which may change depending on actions such as filtering or changes in the underlying dataset - landmarks can provide a basis for spatial learning within this uncertainty. However, landmarks in visualization remain relatively unexplored, with open questions such as whether removing a landmark (such as when a user removes a highlighting feature in a visualization) or changing the landmarks (e.g., when a user selects a different set of objects to highlight) affects spatial memory of previously learned objects. In addition, other factors such as the visual salience of these landmarks, current tasks, and previous selections may affect how users perform revisitation tasks in a visualization. In the following studies, we set out to determine the effects of using emphasis as landmarks for spatial awareness in visualizations, and to test the limits of emphasis by re-creating common tasks such as removing and changing emphasized objects.
64
+
65
+ § 3 STUDY 1: EFFECTS OF ACCIDENTAL LANDMARKS IN A SIMPLE GRID VISUALIZATION
66
+
67
+ We conducted an online experiment to explore whether accidental landmarks in a simple grid visualization would affect spatial location learning and performance, both when the assistance was present and after it was removed or changed. The study asked participants to repeatedly find a set of seven targets in an $8 \times 8$ grid that had few structural or layout-based landmarks, other than corners and edges; we recorded search time, hovers required to find a target, and errors.
68
+
69
+ < g r a p h i c s >
70
+
71
+ Figure 2: Example of the study system interface that participants would see when completing a trial. In the No-Landmarks condition, there were no red highlighted circles.
72
+
73
+ § 3.1 S1 STUDY SYSTEM
74
+
75
+ A web-based application was developed using HTML, CSS, and JavaScript (D3.js [7]) to display an 8x8 grid of circles that contained targets and distractors (some of which were also accidental landmarks). The interface presented the name of the target, and the user had to click on the target item to confirm a selection. Item names were not permanently visible, but could be shown in a tooltip by hovering the mouse over any item (names were taken from an existing plant-breeding dataset). Hover feedback was immediate (similar to commercial visualization systems); however, we only counted hovers with a duration of at least 300 ms in the analysis, to remove hovers that were simply due to the cursor traversing over items. An example of the study interface is shown in Figure 2.
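+
+ As an illustration, the hover-logging rule could be implemented as in the sketch below (our interpretation, with hypothetical helper functions `showTooltip`, `hideTooltip`, and `logHover`; this is not the study system's actual code):
+
+ ```javascript
+ // Tooltip feedback is immediate, but a hover is only logged as an
+ // inspection if the cursor stays on an item for at least 300 ms.
+ // Assumes D3 v6+ (listeners receive (event, datum)).
+ const HOVER_LOG_THRESHOLD_MS = 300;
+ let hoverTimer = null;
+
+ d3.selectAll("circle.item")
+   .on("mouseover", (event, d) => {
+     showTooltip(d.name, event.pageX, event.pageY);  // immediate feedback
+     hoverTimer = setTimeout(() => logHover(d.name), HOVER_LOG_THRESHOLD_MS);
+   })
+   .on("mouseout", () => {
+     hideTooltip();
+     clearTimeout(hoverTimer);  // discards hovers from merely passing over items
+   });
+ ```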
76
+
77
+ To ensure that targets and emphasis were compared fairly across task types, we used a grid-style visualization. We used a simple grid for our first study in order to control the number of structural or layout-based landmarks in the visual presentation, and to control the distance between targets and landmarks. Although this style of visualization is less common than other types such as scatterplots, there are still many examples of grid-based visual layout: for example, a visualization of a plant-breeding field trial would typically use a grid to match the physical layout of plots in the field; similarly, the document map shown in Figure 1 organizes items into rows and columns. We also used the same combination of targets and accidental landmarks in all study conditions to ensure equal difficulty. This required that we use a between-participants design for the study.
78
+
79
+ The study had three conditions that differed in terms of how accidental landmarks were used:
80
+
81
+ * No Landmarks: this condition provided no accidental landmarks - participants saw the plain grid of items, with no red highlights.
82
+
83
+ * Landmarks-Removed: in this condition, participants saw the same grid of items, but with six items coloured red (simulating a previous filtering operation that had highlighted these items as accidental landmarks). The red highlights were removed in the final block.
84
+
85
+ * Landmarks-Changing: this condition provided the same grid and red highlights as above during the training blocks, but in the fourth block the highlights were moved to a different set of items (rather than being removed altogether).
86
+
87
+ § 3.1.1 S1 TARGETS
88
+
89
+ For the study, seven of the 64 items were used as targets, and six of the 64 were coloured red as accidental landmarks. One of the items was both a target and a landmark. Target positions were sampled from three areas of the grid [56]: three from the corner regions, two from the edges and two from the centre region. Targets and their locations are shown in Figure 3.
90
+
91
+ < g r a p h i c s >
92
+
93
+ Figure 3: Locations of the targets (shown here in blue, not shown in the study) in relation to the landmarks. Target NAM-63 (shown with a blue square) was both a target and a landmark. In the No-Landmarks condition, there were no red highlights.
94
+
95
+ § 3.2 S1 PROCEDURE
96
+
97
+ Each condition in the experiment followed seven phases: (1) informed consent, (2) demographics questionnaire, (3) vision test, (4) guided tour, (5) study tasks, (6) post-study questionnaires, and (7) debriefing. The specific questions and tasks for each condition are described in each condition's section below. Participants first completed informed consent and demographics forms, and were then asked to complete an Ishihara test and questionnaire to screen for colour vision deficiencies [29]. Participants then completed a guided tour through all the targets, after which they could proceed to the study.
98
+
99
+ § 3.2.1 GUIDED TOUR
100
+
101
+ Participants were first randomly assigned to one of the three study conditions. In the guided tour phase, the experimental system showed the grid (including red highlights if the condition included them). The system then took the participants on a "guided tour" of the seven targets, with each target shown one at a time, highlighted in blue. Participants had to click on the target to proceed to the next target. After all targets were presented, the interface automatically proceeded to the study.
102
+
103
+ § 3.2.2 STUDY PHASE
104
+
105
+ After the guided tour, participants completed the study trials. Every trial began by displaying the name of a target at the top of the screen (the name remained visible for the duration of the trial), and participants were asked to find and select the corresponding target item from the grid. Targets were presented in random order (sampling without replacement); locations of targets (and landmarks, if shown) were the same in all conditions. Participants could see item names immediately upon hovering over the item with the mouse. After each correct selection, the screen was blanked for 0.5 s to prevent contrast effects between trials. The study consisted of three training blocks in which landmarks (if part of the condition) were shown, and a fourth block in which any landmarks were either removed or changed. Because the Landmarks-Changing and Landmarks-Removed conditions used the same landmarks, these conditions were identical for the first three blocks. In the No-Landmarks condition, no landmarks were shown at any point. After completing all blocks, participants were asked to fill out post-study questionnaires, were shown debriefing information, and were compensated for their participation.
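+
+ The trial sequencing just described could be sketched as follows (illustrative code with hypothetical helpers `presentTrial` and `blankScreen`, not the study system's actual implementation):
+
+ ```javascript
+ // Each block presents the seven targets in random order without
+ // replacement, with a 0.5 s blank screen after every correct selection.
+ function shuffled(arr) {
+   const a = arr.slice();
+   for (let i = a.length - 1; i > 0; i--) {         // Fisher-Yates shuffle
+     const j = Math.floor(Math.random() * (i + 1));
+     [a[i], a[j]] = [a[j], a[i]];
+   }
+   return a;
+ }
+
+ async function runBlock(targets, showLandmarks) {
+   for (const target of shuffled(targets)) {        // without replacement
+     await presentTrial(target, showLandmarks);     // resolves on a correct click
+     await blankScreen(500);                        // 0.5 s, prevents contrast effects
+   }
+ }
+ ```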
106
+
107
+ § 3.3 S1 PARTICIPANT RECRUITMENT
108
+
109
+ We recruited 90 participants ($\mu_{age} = 33.15$, $\sigma_{age} = 10.84$; 55 men, 33 women, 2 non-binary) across the three conditions (30 per condition) using Amazon's Mechanical Turk (MTurk), and gathered data through a custom browser-based experiment tool [31]. MTurk is an online platform where requesters can post tasks that workers can opt in to complete. Data collected from MTurk has previously been used in a variety of human-computer interaction studies [14, 32, 34, 51] and to model perception in visualization [27, 52], including assessing the separability of variables [50], measuring colormaps [37], and effectively detecting motion [59]. Using MTurk, however, requires that special care be taken to ensure the integrity of the data, as bots and negligent workers must be filtered out. Our study required workers to have over a 90% HIT acceptance rate (i.e., a measure of the quality of a worker's previous tasks). We also checked the questionnaire responses to ensure that the same answer was not used for all of the questions, as well as whether the study was completed too quickly or too slowly.
110
+
111
+ All participants were paid \$3 for completing the study, which took approximately 15 minutes. Self-reported estimates of monthly visualization usage among participants averaged 33 hours (SD = 66.14), with pie charts, line charts, bar graphs, and maps/weather charts as the most commonly used or viewed charts.
112
+
113
+ § 3.4 S1 STUDY DESIGN
114
+
115
+ Our goal was to understand the effects of landmarks on spatial learning and revisitation in visualizations. Our main research questions (RQ) for this study were:
116
+
117
+ * RQ-1: Do accidental landmarks improve finding and re-finding when they are present (i.e., decreased search time, number of hovers, and error rate)?
118
+
119
+ * RQ-2: Does removing or changing landmarks after a learning period affect re-finding (i.e., increased search time, hover counts, and error rate)?
120
+
121
+ To investigate these questions, the study used a mixed factorial design with three factors:
122
+
123
+ * Condition (between-subjects): No-Landmarks, Landmarks-Removed, Landmarks-Changing
124
+
125
+ * Target Locations (within subjects): seven target locations (see Figure 3)
126
+
127
+ * Blocks (within subjects): 1-4 (blocks 1-3 are training; block 4 removes/changes any landmarks).
128
+
129
+ Our primary dependent variables were search time, hover counts (only included if longer than 300 ms), error counts (i.e., incorrect clicks), and subjective ratings of difficulty and effort from post-session questionnaires. Targets were the same for all participants.
130
+
131
+ § 4 S1 STUDY RESULTS
132
+
133
+ We report effect sizes for significant ANOVA results as generalized eta-squared $\eta^2$ (considering .01 small, .06 medium, and >.14 large [36]). Outliers were determined as any trial with a search time greater than 3 SDs above the block's mean; 73 of the 2520 trials were removed from the analysis. All pairwise t-tests were corrected using the Holm-Bonferroni method.
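+
+ As an illustration of this outlier rule, the following is a sketch using D3's array utilities (the field names `block` and `searchTime` are assumptions, and this is not the authors' analysis script):
+
+ ```javascript
+ // Drop any trial whose search time is more than 3 standard
+ // deviations above the mean of its block.
+ function removeOutliers(trials) {
+   const byBlock = d3.group(trials, t => t.block);   // Map: block -> trials
+   const kept = [];
+   for (const [, blockTrials] of byBlock) {
+     const mean = d3.mean(blockTrials, t => t.searchTime);
+     const sd = d3.deviation(blockTrials, t => t.searchTime);
+     for (const t of blockTrials) {
+       if (t.searchTime <= mean + 3 * sd) kept.push(t);
+     }
+   }
+   return kept;
+ }
+ ```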
134
+
135
+ < g r a p h i c s >
136
+
137
+ Figure 4: Mean trial search times (± s.e.) across learning blocks (1-3). Block 4 shows the results of removing or changing emphasized objects.
138
+
139
+ § 4.1 S1 EFFECTS OF LANDMARKS ON LEARNING: SEARCH TIME, HOVERS, AND ERRORS IN LEARNING BLOCKS
140
+
141
+ § 4.1.1 S1 LEARNING BLOCKS - SEARCH TIMES:
142
+
143
+ Search time in the test trials was measured from the time a target name appeared on the screen to the time the system registered a correct item selection. Search times across all blocks for the three conditions are shown in Fig 4.
144
+
145
+ For the learning blocks, a 3x3x7 RM-ANOVA (Condition x Block x Target) showed a main effect of Condition ($F_{2,178} = 15.21$, $p < 0.001$, $\eta^2 = 0.02$), Block ($F_{2,174} = 18.00$, $p < 0.001$, $\eta^2 = 0.02$), and Target ($F_{6,522} = 6.16$, $p < 0.001$, $\eta^2 = 0.01$) on search time, and an interaction between Condition x Block ($F_{4,174} = 2.44$, $p = 0.004$, $\eta^2 = 0.01$). Post-hoc pairwise t-tests showed significant differences between No-Landmarks and both landmark conditions (both $p < 0.05$).
146
+
147
+ Across all blocks and targets, search time was lowest in the No-Landmarks condition (mean 12268 ms); the averaged mean of the landmark conditions was 16685 ms (note that both landmark conditions were identical in the learning phase, so any difference between them is due to group differences). To further investigate the Condition x Block interaction and consider the rate at which the different groups improved, Fig 5 shows a version of the data that normalizes the other blocks based on block 1 performance. Fig 5 suggests that there were group differences in the two landmark conditions, but also indicates that participants in both landmark conditions learned less quickly than those in the No-Landmarks condition.
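+
+ Under our reading of the text, the block-1 normalization shown in Fig 5 could be computed as in this sketch (field names are hypothetical, and blocks are assumed to be numbered 1-4):
+
+ ```javascript
+ // Express each condition's per-block mean search time as a fraction
+ // of that condition's block 1 mean.
+ function normalizeByBlock1(trials) {
+   const means = d3.rollup(
+     trials,
+     v => d3.mean(v, t => t.searchTime),
+     t => t.condition,
+     t => t.block
+   );
+   const normalized = [];
+   for (const [condition, blocks] of means) {
+     const base = blocks.get(1);              // block 1 mean as baseline
+     for (const [block, mean] of blocks) {
+       normalized.push({ condition, block, relative: mean / base });
+     }
+   }
+   return normalized;
+ }
+ ```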
148
+
149
+ § 4.1.2 S1 LEARNING BLOCKS - HOVERS:
150
+
151
+ We measured the number of hovers as the number of times the participant held the cursor over a target for 300 ms or more to show the name. Hovers are a more sensitive measure of progress through the stages of learning and performance: as a participant moves through the different blocks, there should be a reduction in the number of items that they need to inspect. Mean hovers per trial are shown in Figure 6.
152
+
153
+ For the learning blocks, a similar 3x3x7 RM-ANOVA (Condition x Block x Target) showed a main effect of Block ($F_{2,178} = 11.047$, $p < 0.001$, $\eta^2 = 0.03$) and Target ($F_{6,522} = 2334$, $p < 0.001$, $\eta^2 = 0.02$) on hover count, and also showed two interactions: Condition x Block ($F_{4,356} = 1.84$, $p < 0.01$, $\eta^2 = 0.01$) and Block x Target ($F_{12,1044} = 1.84$, $p < 0.001$, $\eta^2 = 0.01$). However, the ANOVA found no main effect of Condition (p = 0.37). Across all blocks and targets, hovers were similar in all conditions, with the lowest mean in the No-Landmarks condition (10.12 hovers per correct selection) compared to the averaged mean of the landmark conditions at 10.81 hovers. We again investigated the Condition x Block interaction to consider learning rates. Fig 7 presents the same data, taking block 1 as a baseline and normalizing the following blocks based on block 1 performance. Fig 7 again suggests group differences in the landmark conditions, and shows similar learning rates across all three conditions.
154
+
155
+ < g r a p h i c s >
156
+
157
+ Figure 5: Search time results after block 1 normalization
158
+
159
+ § 4.1.3 S1 LEARNING BLOCKS - ERRORS:
160
+
161
+ We measured errors as the number of incorrect clicks before choosing the correct target. As participants could hover over targets until they found the correct one, errors were low overall, with an average of 0.63 errors per trial across all conditions and blocks. For the learning blocks, a 3x3x7 RM-ANOVA (Condition x Block x Target) showed no main effect of Condition (p = 0.42), Block (p = 0.08), or Target (p = 0.59) on errors.
162
+
163
+ § 4.1.4 S1 LEARNING BLOCKS - TARGET-BY-TARGET ANALYSIS:
164
+
165
+ As the ANOVA results showed a main effect of Target on search time and hover counts, we looked into the results for each specific target. Overall, as seen in Figures 8 and 9, we found that the targets that required the fewest hover actions and were found fastest were NAM-12 (9964 ms) and NAM-9 (11905 ms), both of which were located at or near the corners. We also found that the hardest targets were those located in the centre, such as NAM-18 (19295 ms) and NAM-2 (18345 ms). Although previous research suggests that targets such as NAM-18 and NAM-2 should have been much more difficult in the No-Landmarks condition (where there were no visual features to help users remember these locations), search times actually favoured the No-Landmarks condition. Even for target NAM-63 - which was highlighted in the landmark conditions - participants in the No-Landmarks condition found the target faster (9919 ms) than those in the landmark conditions (15103 ms).
166
+
167
+ < g r a p h i c s >
168
+
169
+ Figure 6: Mean trial hover counts (± s.e.) across learning blocks (1-3). Block 4 shows the results of removing or changing emphasized objects.
170
+
171
+ § 4.2 S1 EFFECTS OF CHANGE/REMOVAL OF LANDMARKS: SEARCH TIME, HOVERS, AND ERRORS BLOCK 3 TO 4
172
+
173
+ § 4.2.1 S1 CHANGE/REMOVAL - SEARCH TIMES:
174
+
175
+ To look for effects of removing/changing the accidental landmarks, we carried out an analysis using only block 3 (the block before the removal/change) and block 4 (the block after the removal/change). The 3x2 RM-ANOVA (Condition x Block) found an interaction between the two factors $\left( {{F}_{2,{178}} = {3.43},p = {0.03},{\eta }^{2} = {0.02}}\right)$ in terms of search time.
176
+
177
+ Search times increased in Landmarks-Removed from 13472 ms in Block 3 to 15034 ms in Block 4, and in Landmarks-Changing from 11830 ms to 13774 ms. By contrast (and as expected from previous literature on learning), search performance continued to improve in the No-Landmarks condition: from 10128 ms in Block 3 to 7768 ms in Block 4. To check whether each condition changed significantly between blocks 3 and 4, we carried out additional follow-up t-tests; however, no specific differences were found for this per-condition analysis (all $p > 0.05$).
178
+
179
+ Following the analysis on the learning blocks presented above, and the significant interaction between Condition x Block for Blocks 3 and 4, we carried out an analysis of the final block using a similar 3x7 RM-ANOVA (Condition x Target). We found a main effect of Condition ($F_{2,178} = 12.61$, $p < 0.001$, $\eta^2 = 0.04$) and Target ($F_{6,522} = 7.84$, $p < 0.001$, $\eta^2 = 0.04$) on search times, and an interaction between Condition x Target ($F_{12,522} = 1.84$, $p < 0.001$, $\eta^2 = 0.02$). Post-hoc pairwise t-tests again showed significant differences between No-Landmarks and both landmark conditions (both $p < 0.05$).
180
+
181
+ § 4.2.2 S1 CHANGE/REMOVAL - HOVERS:
182
+
183
+ To look for effects of removing/changing the landmarks on hover count, we carried out a similar analysis using hover data from block 3 and block 4. The 3x2 RM-ANOVA (Condition x Block) found an interaction between Condition x Block ($F_{2,178} = 3.49$, $p = 0.03$, $\eta^2 = 0.01$). Hovers increased in Landmarks-Removed from 7.75 to 8.97, and in Landmarks-Changing from 6.64 to 7.9. As with search time, performance continued to improve in the No-Landmarks condition: from 7.1 hovers in Block 3 to 5.7 in Block 4. We carried out additional follow-up t-tests to check whether each condition changed significantly between blocks 3 and 4; however, no specific differences were found for this per-condition analysis (all $p > 0.05$).
184
+
185
+ < g r a p h i c s >
186
+
187
+ Figure 7: Hover count results after block 1 normalization
188
+
189
+ Following the significant interaction between Condition x Block, we carried out an analysis of the final block using a similar 3x7 RM-ANOVA (Condition x Target). We found main effects of Condition ($F_{2,178} = 3.28$, $p = 0.04$, $\eta^2 = 0.04$) and Target ($F_{6,522} = 7.22$, $p < 0.001$, $\eta^2 = 0.05$) on hovers, and an interaction between Condition x Target ($F_{12,522} = 2.04$, $p < 0.001$, $\eta^2 = 0.02$). Post-hoc pairwise t-tests showed significant differences between No-Landmarks and both landmark conditions (both $p < 0.05$).
190
+
191
+ § 4.2.3 S1 TARGET-BY-TARGET ANALYSIS:
192
+
193
+ As the ANOVA results for the final block showed a main effect of Target on search time and hover counts, and also showed interactions between Condition and Target, we again looked into the results of each specific target.
194
+
195
+ The Condition x Target interaction indicates that the effect of Condition on search time and hover count varied by target. Inspecting the target-by-target charts shows that there were no targets for which the landmarks were particularly helpful during training, and that the majority of targets were affected by the removal of the landmark. To explore this further, we repeated an ANOVA for each target in Block 4 to see which targets were affected by Condition. For search times, the following targets showed significant effects of Condition: NAM-9 (p = 0.04), NAM-18 (p = 0.03), NAM-32 (p = 0.01), and NAM-63 (p = 0.01). For hover count, only NAM-9 showed a significant effect of Condition (p = 0.02). In all these cases, search times and hover counts were significantly better in the No-Landmarks condition.
196
+
197
+ Similar to the learning blocks, we found that targets located near the corners required the fewest hover actions and were found fastest: NAM-12 (8926 ms) and NAM-9 (7361 ms). The most difficult targets were those located in the centre: NAM-18 (18456 ms) and NAM-2 (16945 ms). For NAM-63, which was located directly on a landmark during training, we found that participants in the No-Landmarks condition on average found the target twice as fast (6328 ms) as those in the landmark conditions (14197 ms).
198
+
199
+ < g r a p h i c s >
200
+
201
+ Figure 8: Mean search time (± s.e.) by Target and Block
202
+
203
+ § 4.3 S1 SUBJECTIVE MEASURES
204
+
205
+ § 4.3.1 S1 PERCEIVED CHANGE IN DIFFICULTY
206
+
207
+ For each specific target, we also asked participants in the landmark conditions to rate on a 1-7 scale how much more difficult it was to find the target in block 4 compared to block 3. (We did not ask this question of participants in the No-Landmarks condition, but we can assume that they would not have seen any major difference in difficulty between blocks 3 and 4.) Mean results are shown in Fig 10. Overall, the target that participants felt was least affected by changing or removing the landmarks was NAM-9, located in the bottom right corner of the grid.
208
+
209
+ Using an Aligned Rank Transform on the difficulty ratings [62], one-way ANOVAs were performed for each of the targets using Condition as the factor. The ANOVA found a significant effect of Condition for the NAM-40 target (middle of the last column in the grid): for this target, participants in the Landmarks-Removed condition rated the target as more difficult (4.65) than participants in the Landmarks-Changing condition (3.8).
210
+
211
+ § 4.3.2 S1 PERCEIVED EFFORT
212
+
213
+ Participants' perceived effort was recorded using the NASA-TLX questionnaire [25]. For the Landmarks-Changing and Landmarks-Removed conditions, we specifically asked the effort questions in relation to participants' perceived effort after the landmarks were changed or removed. We used an Aligned Rank Transform on the aggregated responses to perform a one-way ANOVA on each of the TLX questions using Condition as a factor. The mean responses to the TLX questions are shown in Fig 11. Significant effects were found in the responses for perceived success and frustration (both $p < 0.05$). Holm-corrected post-hoc pairwise t-tests were performed on the questions that had significant effects. For perceived success, the pairwise comparison found a significant difference between Landmarks-Removed and No-Landmarks, with participants reporting greater perceived success with no landmarks than when landmarks were initially presented and then removed altogether.
214
+
215
+ § 4.3.3 S1 PARTICIPANT COMMENTS
216
+
217
+ At the end of the study, we asked participants to explain their general process of finding targets and whether some were easier or harder than others. Their responses generally echoed several of the findings from the previous sections. While search times and hover counts did not show a clear improvement in conditions with landmarks, some participants' remarks did state the benefit of having landmarks. For example, P1 stated "When they [targets] were close to a different colored circle, in the immediate vicinity, it made it easier." Similarly, P3 stated "For some of the targets, I was able to find them easily because they were near a red colored circle." A few other participants also remarked that specific targets were easier; for example, P8 said, "NAM-9 in the corner, [and] it was [easier] around the colored ones."
218
+
219
+ < g r a p h i c s >
220
+
221
+ Figure 9: Mean hover count (± s.e.) by Target and Block (hovers counted after 300ms)
222
+
223
+ < g r a p h i c s > (mean ratings per target, 1 = no increase in difficulty, 4 = medium increase, 7 = large increase; targets NAM63, NAM9, NAM18, NAM12, NAM2, NAM32 for the Landmarks-Removed and Landmarks-Changing conditions)
236
+
237
+ Figure 10: S1 Perceived Change in Difficulty in Block 4
238
+
239
+ For participants in the No-Landmarks condition, comments suggested that people had to resort to other techniques: for example,
240
+
241
+ < g r a p h i c s >
242
+
243
+ Figure 11: S1 Mean NASA Task Load Index scores, by condition
244
+
245
+ P10 stated "I used Cartesian indication (from high school; 2 axes: x and y)." P12 said, "32 was in the first column second row which was easy to find, 9 was at the end and 63 was at the third column last raw", and P13 stated "corner, first and last row."
246
+
247
+ § 5 STUDY 2: EFFECTS OF ACCIDENTAL LANDMARKS IN MORE-COMPLEX SCATTERPLOTS
248
+
249
+ Our second study considers visualizations that are more complex than the grid used in the first study. Visualizations with irregular layout (e.g., typical scatterplots) may contain more structural and layout-based visual features that can act as landmarks - such as clusters of points and white space, in addition to edges and corners. We need to understand how users perceive landmarks in more complex visualizations, so we designed our study task to show a scatterplot visualization based on a real-world dataset from the Gapminder site [43].
250
+
251
+ § 5.1 S2 STUDY SYSTEM AND TARGETS
252
+
253
+ A web-based system was developed using JavaScript and D3 [7] that showed a scatterplot based on Gapminder data [43]. The X axis showed the per-capita income of a country, and the Y axis showed the life expectancy in that country for a single selected year (similar to other online recreations of the dataset). The interface presented the names of target countries, and the user had to click on a target item to confirm a selection. As in Study 1, item names were shown in a tooltip by hovering the mouse over an item. Hover feedback was immediate, but as in Study 1 we only counted hovers with a duration of at least 300 ms in the analysis.
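+
+ A minimal sketch of this encoding follows; the scale types, pixel ranges, and field names are our assumptions, not taken from the study system's source:
+
+ ```javascript
+ // Per-capita income on x, life expectancy on y for one selected year.
+ // Assumes D3 v6+ and a countries array like
+ // [{name, income, lifeExpectancy}, ...].
+ const x = d3.scaleLinear()
+   .domain(d3.extent(countries, d => d.income))
+   .range([40, 760]);
+ const y = d3.scaleLinear()
+   .domain(d3.extent(countries, d => d.lifeExpectancy))
+   .range([460, 20]);               // inverted: larger values drawn higher
+
+ d3.select("svg").selectAll("circle")
+   .data(countries)                 // one datapoint per country
+   .join("circle")
+   .attr("cx", d => x(d.income))
+   .attr("cy", d => y(d.lifeExpectancy))
+   .attr("r", 5);
+ ```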
254
+
255
+ To ensure that emphasis was tested fairly across conditions, we used the same scatterplot, dataset, and combination of targets/landmarks for each condition (except for the No-Landmarks condition, which did not require landmarks).
256
+
257
+ For the study we used seven targets, out of 142 total items on the screen; eight items were initially highlighted. The seven targets were chosen to have different inherent difficulties based on their proximity to landmarks and other potential spatial cues in the visualization (e.g., edges, clusters, and white space). Targets and locations are shown in Figure 12.
258
+
259
+ < g r a p h i c s >
260
+
261
+ Figure 12: Locations of the targets (shown here in blue, not shown in the study) in relation to the landmarks. Target Nicaragua (shown with a red dot) was both a target and a landmark. In the No-Landmarks condition, there were no red highlights.
262
+
263
+ § 5.2 S2 PROCEDURE
264
+
265
+ We followed a procedure similar to Study 1, with seven phases: (1) informed consent, (2) demographics questionnaire, (3) vision test, (4) guided tour, (5) study tasks, (6) post-study questionnaires, and (7) debriefing. Participants again first completed a guided tour through all the targets, after which the study proceeded to the test phase.
266
+
267
+ § 5.3 S2 PARTICIPANT RECRUITMENT
268
+
269
+ We initially recruited 90 participants across the three conditions (30 per condition) using Amazon's Mechanical Turk (MTurk), and gathered data through a custom browser-based experiment tool [31]; however, 3 were removed for having an overall completion time more than 3 SDs from the mean, and an additional participant was removed for completing experimental tasks more than once (refreshing the browser causes the tasks to restart). The remaining participants were distributed as follows: 30 in Landmarks-Removed, 28 in Landmarks-Changing, and 28 in No-Landmarks ($\mu_{age} = 35.47$, $\sigma_{age} = 12.11$; 55 men, 30 women, 1 preferred not to answer). Our study required workers to have over a 90% HIT acceptance rate, and we also checked the questionnaire responses to ensure that the same answer was not used for all of the questions, as well as whether the study was completed too quickly or too slowly (which could indicate participants simply clicking through the study, or focusing on other tasks).
270
+
271
+ All participants were paid \$3 for completing the study, which took approximately 15 minutes.
272
+
273
+ § 5.4 S2 STUDY DESIGN
274
+
275
+ The goal for this study was to understand the effects of landmarks on spatial learning in more-complex visualizations. Our main research questions (RQ) for this study were:
276
+
277
+ * RQ-1: Do landmarks improve finding and re-finding targets in scatterplots when they are present (i.e., decreased search time, number of hovers, and error rate)?
278
+
279
+ * RQ-2: Does removing or changing landmarks after a learning period affect re-finding (i.e., increased search time, hover counts, and error rate)?
280
+
281
+ To investigate these questions, the study used a mixed factorial design with three factors:
282
+
283
+ * Condition (between-subjects): No-Landmarks, Landmarks-Removed, Landmarks-Changing
284
+
285
+ * Target Locations (within subjects): seven target locations (see Figure 12)
286
+
287
+ * Blocks (within subjects): 1-4 (blocks 1-3 are training; block 4 removes/changes any landmarks).
288
+
289
+ Similar to Study 1, our dependent variables were search time, hover counts (only included if longer than 300 ms), error counts (i.e., incorrect clicks), and subjective ratings of effort from post-session questionnaires. Targets were the same for all participants.
290
+
291
+ § 6 S2 STUDY RESULTS
292
+
293
+ We again report effect sizes for significant ANOVA results as generalized eta-squared $\eta^2$ (considering .01 small, .06 medium, and >.14 large [36]). Outliers were determined as any trial with a search time greater than 3 SDs above the block's mean; 85 of the 2408 trials were removed from the analysis. All pairwise t-tests were corrected using the Holm-Bonferroni method.
294
+
295
+ < g r a p h i c s >
296
+
297
+ Figure 13: Scatterplot mean trial search times (± s.e.) across learning blocks (1-3). Block 4 shows the results of removing or changing emphasized objects.
298
+
299
+ § 6.1 S2 EFFECTS OF LANDMARKS ON LEARNING: SEARCH TIME, HOVERS, AND ERRORS IN LEARNING BLOCKS
300
+
301
+ § 6.1.1 S2 LEARNING BLOCKS - SEARCH TIMES
302
+
303
+ Search time in the test trials was measured from the time a target name appeared on the screen to the time a user correctly found a target. Search times across all blocks for the three conditions are shown in Fig 13.
304
+
305
+ For the learning blocks, a 3x3x7 RM-ANOVA (Condition x Block x Target) showed a main effect of Condition ($F_{2,172} = 4.99$, $p = 0.007$, $\eta^2 = 0.005$), Block ($F_{2,172} = 70.06$, $p < 0.001$, $\eta^2 = 0.07$), and Target ($F_{6,510} = 7.72$, $p < 0.001$, $\eta^2 = 0.02$). Aggregated across all training blocks and targets, participants took 23367 ms to find a target in Landmarks-Removed and 21907 ms in Landmarks-Changing, compared to an average of 25339 ms in the No-Landmarks condition. A post-hoc pairwise t-test showed a significant difference in search times between Landmarks-Changing and No-Landmarks (p = 0.008) for the learning blocks.
306
+
307
+ § 6.1.2 S2 LEARNING BLOCKS - HOVERS
308
+
309
+ We again measured the number of hovers as the number of times the participant held the cursor over an element for 300 ms or more to show the name. Mean hovers per trial are shown in Figure 14. For the learning blocks, a 3x3x7 RM-ANOVA (Condition x Block x Target) showed a main effect of Condition ($F_{2,172} = 8.92$, $p < 0.001$, $\eta^2 = 0.006$), Block ($F_{3,255} = 52.57$, $p < 0.01$, $\eta^2 = 0.06$), and Target ($F_{6,510} = 12.26$, $p < 0.001$, $\eta^2 = 0.02$). On average (across all learning blocks), participants took 18.25 hovers to find a target in Landmarks-Removed and 22.24 hovers in Landmarks-Changing, while No-Landmarks required 28.48 hovers for a correct selection. Post-hoc pairwise t-tests showed significant differences in hovers between both landmark conditions and No-Landmarks (both $p < 0.01$) for the learning blocks.
310
+
311
+ § 6.1.3 S2 LEARNING BLOCKS - ERRORS
312
+
313
+ Errors were measured as the number of incorrect clicks before choosing the correct target. As participants could hover over elements in the scatterplot until they found the correct item, errors were low overall in this study, with an average of 0.69 errors per trial across all conditions and blocks. For the learning blocks, a 3x3x7 RM-ANOVA (Condition x Block x Target) showed a main effect of Condition ($F_{2,170} = 4.13$, $p = 0.01$, $\eta^2 = 0.003$) and Block ($F_{3,255} = 4.51$, $p = 0.003$, $\eta^2 = 0.006$) on errors, but no interactions between the factors. A post-hoc pairwise t-test showed a significant difference between Landmarks-Changing and No-Landmarks (p = 0.01), with participants making fewer errors in the Landmarks-Changing condition overall (0.50 errors per trial) compared to 2.37 errors per trial in the No-Landmarks condition.
314
+
315
+ < g r a p h i c s >
316
+
317
+ Figure 14: Scatterplot mean hovers (± s.e.) across learning blocks (1-3). Block 4 shows the results of removing or changing emphasized objects.
318
+
319
+ § 6.1.4 S2 LEARNING BLOCKS - TARGET-BY-TARGET ANALYSIS
320
+
321
+ As the ANOVA results showed a main effect of Target on search time and hover counts, we looked into the results for each specific target. Although there was little difference among most targets, targets near the centre (such as Jordan and Montenegro) were the hardest to find (see Figures 15 and 16). Nicaragua (which was both a target and highlighted in the landmark conditions) was substantially harder to find in the No-Landmarks condition than the rest of the targets (requiring 42.5 hovers and 60900 ms), but was only of average difficulty in the landmark conditions (19.85 hovers and 31900 ms). In all conditions, France, located near the top right corner (between landmarks in the landmark conditions), was the easiest to find (13364 ms and 6.9 hovers).
322
+
323
+ § 6.2 S2 EFFECTS OF CHANGE/REMOVAL OF LANDMARKS: SEARCH TIME, HOVERS, AND ERRORS BLOCK 3 TO 4
324
+
325
+ § 6.2.1 S2 CHANGE/REMOVAL - SEARCH TIMES
326
+
327
+ To investigate the effects of changing or removing landmarks in the scatterplot, we carried out an analysis using only block 3 (the block before the removal/change) and block 4 (the block after the removal/change). The 3x2 RM-ANOVA (Condition x Block) did not find an interaction between the two factors (p = 0.054) for search time. There was also no interaction between Condition x Target (p = 0.54).
328
+
329
+ Search times continued to decrease in Landmarks-Removed from 16072 ms in Block 3 to 10788 ms in Block 4, and in Landmarks-Changing from 14454 ms to 13385 ms. These results were similar to the No-Landmarks condition: from 19716 ms in Block 3 to 11401 ms in Block 4. However, this improvement varied by target, and some targets actually decreased in performance. Nicaragua, which was both a landmark and a target, saw search times go from 18384 ms to 22522 ms in the Landmarks-Changing condition. We saw a similar (although smaller) effect with France and Montenegro.
330
+
331
+ <graphics>
332
+
333
+ Figure 15: Mean search time ( $\pm$ s.e.) by Target and Block in Scatterplot Study. Target/landmark locations (and the change in Landmarks) are included in the bottom right corner.
334
+
335
+ § 6.2.2 S2 CHANGE/REMOVAL - HOVERS
336
+
337
+ We carried out a similar analysis of hovers for the change or removal of landmarks. Similar to search times, we saw an improvement in the hovers required to find a target in all conditions, going from 9.6 hovers to 7.8 for Landmarks-Changing, 10.79 to 6.93 for Landmarks-Removed, and 16.5 to 7.97 for No-Landmarks. The 3x2 RM-ANOVA (Condition x Block) found a Condition x Block interaction $\left( {{F}_{2,{172}} = {3.73},p = {0.002},{\eta }^{2} = {0.006}}\right)$ for hovers.
338
+
339
+ Similar to search times, hover counts continued to decrease from the 3rd to the 4th block, but certain targets were affected negatively. We saw the same effect for Nicaragua (going from 9 hovers in Block 3 to 13 when landmarks were changed, although this effect did not happen when the landmark was removed). Conversely, Moldova was negatively affected by the removal of landmarks (from 8 hovers in Block 3 to 10 in Block 4), but not by changing the landmarks (it continued to improve, to just 3.14 hovers in the final block).
340
+
341
+ § 6.3 S2 SUBJECTIVE MEASURES
342
+
343
+ § 6.3.1 S2 PERCEIVED CHANGE IN DIFFICULTY
344
+
345
+ For each specific target, we also asked participants to rate on a 1-7 scale how much more difficult it was to find the target in Block 4 (for the landmark conditions). As shown in Figure 17, while participants did report that the change/removal of landmarks made the task more difficult, the change affected most targets equally. Overall, France (located in the top right corner between two red dots) was the least affected by the change. A one-way ANOVA on each target's ratings using the Aligned Rank Transform [62] with Condition as the factor found no differences between the conditions.
346
+
347
+ § 6.3.2 S2 PERCEIVED EFFORT
348
+
349
+ Participants' perceived effort in relation to changing or removing the landmarks was again recorded using the NASA-TLX questionnaire [25]. For the No-Landmarks condition, perceived effort relates to finding the target in the final block. Results are summarized in Figure 18. We used an Aligned Rank Transform on the aggregated responses to perform a one-way ANOVA on each of the TLX questions with Condition as a factor. The ANOVA found no significant differences between the conditions on any of the TLX measures (all $p > {0.05}$).
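As a rough illustration of this analysis, here is a hedged sketch of a one-way ANOVA on rank-transformed responses. With a single factor, aligning by subtracting the grand mean does not change the ranks, so the plain rank transform serves as a simplified stand-in for the full ART procedure; the example ratings are hypothetical.

```python
import numpy as np
from scipy import stats

def rank_transform_anova(*groups):
    """One-way ANOVA on rank-transformed responses (one group per condition).

    With one factor, the Aligned Rank Transform reduces to ranking the pooled
    responses, so this approximates the ART analysis described above.
    """
    pooled = np.concatenate(groups)
    ranks = stats.rankdata(pooled)                    # mid-ranks for ties
    splits = np.cumsum([len(g) for g in groups])[:-1] # split points per group
    return stats.f_oneway(*np.split(ranks, splits))   # (F statistic, p-value)

# Hypothetical TLX "effort" ratings for three conditions:
f, p = rank_transform_anova([4, 5, 3, 4], [5, 6, 4, 5], [3, 4, 4, 2])
```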
350
+
351
+ <graphics>
352
+
353
+ Figure 16: Mean hovers (± s.e.) by Target and Block in Scatterplot Study. Target/landmark locations (and the change in Landmarks) are included in the bottom right corner.
354
+
355
+ <graphics>
356
+
357
+ Figure 17: S2 Perceived Change in Difficulty in Block 4
358
+
359
+ § 6.3.3 S2 PARTICIPANT COMMENTS
360
+
361
+ At the end of this study, we also asked participants to explain their overall process of finding targets and whether they employed any specific strategy throughout the task. Regarding the scatterplot configuration and the use of a real dataset, P8 stated "Yes, some targets [were easier as they] were located near a corner, or distinct cluster"; P12 commented, "some [targets] were located next to same continent countries or countries near by"; and P44 mentioned, "I tried to remember some of the countries in a certain area of dots whose names I am familiar with." Other participants mentioned the use of landmarks: P32 stated, "[targets were easier] only when they were within the red circles" and P90 mentioned, "Some [targets] were close to the edges and red circles." Participants in the No-Landmarks condition more commonly stated using their personal experience to help with the task, such as P2, who mentioned "I noticed countries that are close together on a map were relatively close together on this chart", and P79: "I found middle east countries, mostly all together, and European and Asian countries were similarly grouped, and from there I just had to try to build a memory."
362
+
363
+ <graphics>
364
+
365
+ Figure 18: S2 Mean NASA Task Load Index scores, by condition
366
+
367
+ § 7 DISCUSSION
368
+
369
+ Our studies investigated whether accidental landmarks could help users find and re-find items in a visualization, and whether they impaired performance when taken away or changed. The studies provide several main findings:
370
+
371
+ * In Study 1, performance in the learning blocks (in terms of search time and hovers) was no better for the accidental-landmark conditions (in fact, the No-Landmarks condition was best), but in the final block, performance was impaired when landmarks were removed or changed;
372
+
373
+ * In Study 2, search time and hovers in the learning blocks were lower for the accidental-landmark conditions, but there was no significant detriment in the final block when the landmarks were removed or changed;
374
+
375
+ * In both studies, participants in the landmarks conditions reported that finding targets in the final block was more difficult than in the previous blocks;
376
+
377
+ * In both studies, participant comments suggested that people were using the highlight colours to assist them in finding the targets, as well as structural landmarks such as corners, edges, clusters, and white space;
378
+
379
+ * In both studies, participants reported no major differences between the three conditions in terms of overall effort.
380
+
381
+ In the following sections, we provide explanations for these results, discuss how our findings can generalize to real-world visualizations, and outline limitations of the study and opportunities to extend the research.
382
+
383
+ § 7.1 EXPLANATION OF RESULTS
384
+
385
+ § 7.1.1 THE EFFECTS OF LANDMARKS IN LEARNING BLOCKS
386
+
387
+ Our two studies showed contrasting results about the usefulness of accidental landmarks in helping participants learn item locations during the learning blocks. The only change between the studies was the type of visualization used, and the differences between the grid and scatterplot can help to explain the contrasting study results. First, with the simple grid used in Study 1, the visual search task was easier than with the more complex scatterplot of Study 2. Study 1 participants could carry out a row-by-row or column-by-column search pattern to look for the target, which may have made the coloured highlights less valuable. In contrast, the irregular and more complex organization of items in the Study 2 scatterplot did not allow users to carry out a methodical search strategy, and when users are carrying out a less-organized search, the anchors provided by the highlighted landmarks may have been more valuable. For example, a general problem in searching a complex dataset is that users repeat some areas and miss others; the reference frame provided by the highlights may have assisted users in organizing their search and reducing repetition.
388
+
389
+ Second, the attentional draw of the emphasized points may have affected the two visualizations differently. It is known that bottom-up attention will be guided to areas of visual emphasis (e.g., our studies showed the highlights as red circles among white circles) [16,64]. Visual attention will be in part guided to objects that differ from others as a first step in the multi-step process of attention, which is then guided by the task and previous selections. In the simpler grid visualization, the attentional draw of the emphasized points may have distracted participants from a regularized search strategy, reducing the efficiency of their visual search. Although this could also have occurred in the more-complex scatterplot of Study 2, any negative effects may have been outweighed by the organizational benefit provided by the reference frame of the coloured landmarks.
390
+
391
+ These possibilities should be explored further in additional studies. We also note that our between-participants design leads to the potential for inherent group differences that may account for some of the overall difference between conditions during training. It was not possible to completely remove these group differences (e.g., we could not use performance on the first block as a covariate, because the experience of visual search was substantially different for the landmarks and no-landmarks conditions); further studies can help to investigate the initial differences between the conditions.
392
+
393
+ § 7.1.2 THE EFFECTS OF REMOVING / CHANGING LANDMARKS
394
+
395
+ Our studies also showed contrasting results in terms of whether changing or removing landmarks impaired performance: Study 1 saw a significant reduction in performance when landmarks were taken away or changed, whereas Study 2 did not (there were indications of a performance reduction for some targets, but not overall).
396
+
397
+ Again, differences between the grid and scatterplot visualizations can help to explain these contrasting results. In Study 1, the relative lack of structural or layout-based landmarks in the grid meant that the coloured landmarks were more likely to be seen as a primary reference frame by participants in the landmarks conditions (particularly because people were not forewarned that the highlights would be changed or removed). For example, Study 1 saw strong performance impairments for both of the targets in the interior of the grid (near to a coloured highlight but not near to a corner or an edge).
398
+
399
+ The scatterplot used in Study 2 had many more structural and layout-based landmarks in addition to the coloured highlights (e.g., clusters of points and areas of white space in addition to the edges and corners of the datapoints). This means that participants in Study 2 had multiple frames of reference available to them, and they likely made use of both structural and colour-based landmarks when learning item locations. Previous research suggests that people will use whatever reference frame makes their task easiest, but in Study 2, neither reference frame was dominant. There were eight highlighted items in the Study 2 scatterplot, meaning that the coloured items did not simplify the task so much that it was trivially easy (e.g., the task was much more difficult than if there had been two targets that were beside two coloured landmarks). The overall difficulty means that participants were likely to make use of the structural landmarks in addition to the highlighting - and since structural landmarks were unchanged in the final block, people may have been able to rely on this other reference frame to maintain their performance. Limited evidence for this hypothesis can be seen in the performance of the Nicaragua target - because this target was also highlighted in the training blocks, it was easy to find using only colour, which may have led participants to rely more on colour rather than structural landmarks such as nearby clusters.
400
+
401
+ Overall, our results align with the guidance and effort hypotheses (i.e., that providing guidance and reducing effort in training will lead to over-reliance on the guide). When colour highlighting was the only reference frame available, or when it made the retrieval task easier, participants relied on it more and had larger reductions in performance when the highlighting was removed or changed. The presence of other reference frames (e.g., structural and layout-based landmarks) appeared to mitigate the problems caused by removing the colour highlights - but it is worth noting that in Study 2, participants in the landmarks conditions subjectively rated the task as substantially more difficult when the landmarks were removed or changed, even though they were able to make use of other knowledge to preserve performance.
402
+
403
+ § 7.2 GENERALIZING THE FINDINGS TO OTHER CONTEXTS
404
+
405
+ Our study examined the effects of accidental landmarks in two visualization settings, a simple grid and a more-complex scatterplot, and there are several underlying commonalities between our experiments and real-world scenarios that argue for the generalizability of our findings.
406
+
407
+ First, our learning task - repeatedly visiting target locations - is common in many real-world visualization tasks. A typical exploration of a dataset involves investigating interesting data points or patterns to identify relationships between them. Additionally, it is common for visualization designers to use emphasis to encourage exploration (e.g., by highlighting regions of interest to signify importance or to alert viewers to missing links). Similarly, in narrative visualization, when known aspects of a dataset are presented to viewers [8, 49], different data points are explained and presented, and designers may alter an element's size or colour to improve its legibility relative to other areas of a visualization, potentially making it more memorable.
408
+
409
+ Second, our manipulation of the landmarks - changing the emphasized set or removing emphasis altogether - is also something that is likely to occur in many real-world visualizations. Emphasizing or highlighting one particular subset of the displayed data is a common action as viewers explore different aspects of a visualization or review different findings. As the exploration or story-telling process continues, it is common for users to focus on a different subset of the data. For example, in Study 2, a normal exploration process could involve highlighting countries in different continents. Once an analysis or exploration session is finished, unless the visualization system has a history mechanism built in, there will be no emphasized points upon returning to a visualization (similar to our Landmarks-Removed condition).
410
+
411
+ Third, our participants were MTurk workers rather than users who have naturally arrived at a visualization task, and although there are likely to be differences between these populations in terms of intrinsic motivation and interest in the dataset, there are also many similarities. In particular, there is a wide range of visualization users who could be affected by accidental landmarks, and the demographics of our MTurk sample covered a variety of prior experience with visualizations. The characteristics of an MTurk study also help increase ecological validity compared to more typical lab studies: we had a larger sample than typical laboratory studies (180 total participants) with much more diverse backgrounds than is generally seen in HCI experiments; as such, our findings may be more representative of a generalized user base.
412
+
413
+ Fourth, our use of emphasis in the studies reasonably represents the type of accidental landmarks that may be available in a visualization system - e.g., highlight-based filtering and brushing capabilities are now common in many tools, such as Tableau as shown in Figure 1 - and many users will take advantage of these capabilities.
414
+
415
+ § 7.3 LIMITATIONS, EXTENSIONS, AND FUTURE WORK
416
+
417
+ There are limitations to our evaluation - many of which were necessary to test the use of emphasis as landmarks in controlled environments - and these limitations provide opportunities to expand our work in future studies.
418
+
419
+ The grid-style visualization, the underlying dataset (target/distractor names), and the target/landmark locations were chosen for the study in order to control potential external factors such as cluster-based layout cues that provide visual indications about location. As our grid with circles most resembles a scatterplot, we then extended our initial results to evaluate the effects of emphasis on spatial memory using a scatterplot. However, we used a single dataset behind the scatterplot, and we note that participants may have formed relationships within the scatterplot and dataset (familiar names or clusters of data). This can be counteracted by evaluating multiple distinct datasets in follow-up work.
420
+
421
+ Second, our future work involves evaluating the use of landmarks in a greater variety of chart types, including bar charts or more complex, interactive visualizations (e.g., basic charts in a small-multiples configuration). Involving multiple charts may result in the benefits or drawbacks of landmarks being amplified, as there may be more structural landmarks occurring on the outlines of multiple charts, but it may be harder to find items within each chart.
422
+
423
+ Third, we explored the effects of accidental landmarks with only one visual variable (colour), but there are many other emphasis effects that could be tested, including size, outline, transparency, texture, or shape. Previous research has shown that different visual variables attract attention and affect learning at different levels [9, 22, 38], and designers must decide on a trade-off between noticeable highlights and the potential unintended distraction in learning.
424
+
425
+ Fourth, our study focused on immediate learning performance, short-term memory, and spatial awareness through our revisitation task; we did not test longer-term retention after hours or days (which would be common in visualization, as analysts can work with datasets over extended periods of weeks and months). Our approach was necessary to establish an initial baseline understanding of how emphasis affects the initial spatial learning process, but in future studies we will extend the work to look at longer retention. Furthermore, development of real expertise with a visualization system often requires much longer training durations than those provided by our studies. In our future work, long-term studies will allow us to examine how longer training periods and varying gaps of hours or days can lead to better spatial development and retention.
426
+
427
+ In addition, there are several research directions that could explore ways of better supporting users even when accidental landmarks change or disappear. Our results and participant comments show that users do use and rely on highlights to revisit previous targets, particularly when the landmarks make the retrieval task easier. Even though designers cannot control the application of filters and highlights when users explore a visualization, there may be ways of avoiding the problems that can arise from changes to emphasis and highlighting. One possibility is to show traces of previous highlights (i.e., "ghost echoes" or "phosphor effects"); these marks would assist users who are relying on accidental landmarks by providing at least a trace of the landmarks' previous locations. These traces could slowly fade away after a period of time, which could also encourage users to find other strategies for remembering the items.
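A sketch of how such fading traces might be driven, assuming a simple exponential decay; the half-life and visibility cut-off values here are illustrative, not derived from our studies.

```python
def fade_traces(traces: dict, dt_s: float, half_life_s: float = 30.0,
                min_alpha: float = 0.01) -> dict:
    """Exponentially fade 'ghost echo' opacities and drop invisible traces.

    traces maps an item id to its current opacity in [0, 1]; dt_s is the
    time since the last update. Each removed or changed highlight would
    seed a trace at full opacity, then fade out over a few half-lives.
    """
    decay = 0.5 ** (dt_s / half_life_s)
    return {item: a * decay for item, a in traces.items() if a * decay > min_alpha}
```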
428
+
429
+ Further study is also needed on the general problem of supporting revisitation, and whether other mechanisms that could be used to improve re-finding can also act as accidental landmarks. For example, "visit wear" techniques can visually mark the items that people visit in a visualization, making revisitation much easier. An example of this technique is the Footprints scrollbar, which records user locations with marks in a scrollbar if the user pauses for more than one second [1]. This system also analysed usage data to improve and automate the state saving algorithm such that the most relevant locations would be saved without cluttering the scrollbar. While visualizations can range from very simple representations to very complex multi-dimensional parameter spaces, a combination of methods such as visit wear and state saving mechanisms can ease revisiting objects while exploring visualizations. Further work is needed to understand whether and how annotations such as visit-wear marks function as landmarks, and whether their obvious value in supporting revisitation can lead to larger problems of over-reliance.
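A minimal sketch of the pause-based marking idea behind the Footprints scrollbar, under our own simplifying assumptions about the scroll log format and the pruning rule; the parameter values are illustrative.

```python
def footprint_marks(scroll_log, pause_s=1.0, max_marks=8):
    """Return scroll positions where the user paused for at least pause_s.

    scroll_log is a time-ordered list of (timestamp_s, position) samples,
    one per scroll change plus a final sample at the session end. Keeps at
    most max_marks of the longest pauses to avoid cluttering the scrollbar.
    """
    pauses = []
    for (t0, pos), (t1, _next_pos) in zip(scroll_log, scroll_log[1:]):
        if t1 - t0 >= pause_s:
            pauses.append((t1 - t0, pos))
    pauses.sort(reverse=True)                  # longest pauses first
    return [pos for _, pos in pauses[:max_marks]]
```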
430
+
431
+ § 8 CONCLUSION
432
+
433
+ Many visualizations display large datasets in which it can be difficult for users to find (and re-find) specific items. Interactive systems that provide highlighting tools such as filtering or brushing emphasize certain data points - these can become "accidental landmarks," visual anchors that help users remember locations that are near the emphasized points. Landmarks are known to be useful (by aiding revisitation), but previous research on the guidance hypothesis suggests that if users become dependent on them, removing or changing the highlighting could cause problems. We provide designers with new information about these issues: we carried out two crowd-sourced studies, first in a basic grid configuration and then in a traditional scatterplot, in which people were asked to learn a set of item locations with or without emphasized points. We then removed or changed the highlighting to see if performance suffered. Results show that accidental landmarks did not improve performance during training in a basic grid, but did so for a scatterplot, and changing or removing emphasized data points affected users' ability to re-find targets - particularly those that were not near structural landmarks such as the corners of the visualization. Our work provides new knowledge about how visual features, emphasis and landmarks in visualizations can affect revisitation, and new understanding for designers who want to support spatial awareness and learning in visualizations.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/HI9zjeYVaG9/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,259 @@
1
+ ## Future Frame Synthesis for Fast Monte Carlo Rendering
2
+
3
+ Category: Research
4
+
5
+ ![01963e6c-a8b1-7c0f-a3bc-7294a2657e15_0_148_295_1501_458_0.jpg](images/01963e6c-a8b1-7c0f-a3bc-7294a2657e15_0_148_295_1501_458_0.jpg)
6
+
7
+ Figure 1: Given two input frames ${I}_{1}$ and ${I}_{2}$, together with the optical flow between them ${\mathbf{f}}_{2 \rightarrow 1}$ and the flow confidence map ${M}_{2}$, our method first estimates the backward flow ${\mathbf{f}}_{3 \rightarrow 2}$ and uses it to generate an initial future frame ${\widetilde{I}}_{3}$. Our method then predicts a brightness change map to compensate for pixel-wise brightness changes over time. Finally, our method predicts a backward flow confidence map and uses it to calculate a rendering map that optionally selects unreliably predicted pixels for re-rendering with an off-the-shelf rendering engine.
8
+
9
+ ## Abstract
10
+
11
+ Monte Carlo rendering algorithms can generate high-quality images; however, they need to sample many rays per pixel and are thus computationally expensive. In this paper, we present a method to speed up Monte Carlo rendering by significantly reducing the number of pixels for which we need to sample rays. Specifically, we develop a neural future frame synthesis method that quickly predicts future frames from frames that have already been rendered. In each future frame, there are pixels that cannot be predicted correctly from previous frames in challenging scenarios, such as quick camera motion, object motion, and large occlusion. Therefore, our method estimates a mask together with each future frame that indicates the subset of pixels that need ray samples to correct the prediction results. To train and evaluate our neural future frame synthesis method, we develop a large ray-tracing animation dataset. Our experiments show that our method can significantly reduce the number of pixels that we need to render while maintaining high rendering quality.
12
+
13
+ Index Terms: Computing methodologies-Computer graphics-Ray tracing
14
+
15
+ ## 1 INTRODUCTION
16
+
17
+ Monte Carlo ray tracing algorithms are widely used to generate photorealistic images for many applications, such as computer games, films, and simulations. However, these algorithms are time-consuming as they need to sample many rays to shade each pixel $\left\lbrack {9,{51}}\right\rbrack$ .
18
+
19
+ A great amount of effort has been devoted to fast Monte Carlo rendering. A popular category of approaches is to only cast a small number of rays for each pixel and then reconstruct a high-quality rendering from these few samples by denoising $\left\lbrack {8,{13},{18},{19}}\right\rbrack$ . Another category of approaches is to first reproject rays sampled when rendering previous frames to the current frame and use them to reconstruct the current frame $\left\lbrack {3,4}\right\rbrack$ . These temporal reprojection methods have difficulty in rendering view-dependent effects and filling pixels that are occluded in the previous frames.
20
+
21
+ This paper presents a future frame synthesis method for fast Monte Carlo rendering. Our method belongs to the category of reprojection algorithms and improves existing algorithms by exploiting deep neural networks to synthesize a future frame from frames that have already been rendered. Existing reprojection methods use forward warping to splat samples / pixel colors from previous frames to the future frame, which often suffers from artifacts such as holes. To achieve higher rendering quality, our method uses backward warping to synthesize the future frame from previous frames. Backward warping, however, requires optical flow from the future frame to the previous frame(s), which cannot be calculated without the future frame or some of its intermediate G-buffer data. To address this problem, we train a deep neural network to learn to predict the backward flow of future frames. As the color constancy assumption for reprojection algorithms may not always hold across neighboring frames, we employ a second neural network to predict the brightness changes from previous frames, which are then added to the synthesized future frame. Furthermore, our future frame synthesis networks may generate errors when facing challenging scenarios, such as large occlusions and significant view-dependent effects. Therefore, as an optional step, our method uses a mask neural network to generate a confidence map that indicates unreliable pixel estimates, and re-renders these pixels using ray tracing.
22
+
23
+ As there is no publicly available large-scale ray-tracing animation dataset, we built one by collecting or purchasing model and scene files and rendering them using the Unreal Engine or Blender Cycles. Our dataset contains many animation sequences with a variety of animation characters, background scenes, and camera motions. This dataset allows us to train and test our neural future frame synthesis method. Our experiments show that our method is able to drastically reduce the number of rays that need to be sampled to produce frames while maintaining high rendering quality.
24
+
25
+ ## 2 RELATED WORK
26
+
27
+ Fast Monte Carlo rendering has a rich literature. A popular approach is to reduce the total number of sampling rays that need to be cast to generate an image [51]. A large number of algorithms have been developed that only sample a small number of rays per pixel and then perform denoising to reconstruct high-quality renderings [6, 12, 15, 20-22, 27, 32, 34, 43]. Recent learning-based denoising methods, especially those using deep neural networks, can generate very high-quality renderings with only a small number of samples [8, 13, 18, 19, 28].
28
+
29
+ Another approach is to reuse samples from previous frames by reprojecting the samples to future frames [3, 4, 44, 45]. These reprojected samples are often used together with new samples to reconstruct future frames [10, 37, 39]. Besides ray tracing applications, reusing temporal rendering information has also been widely explored for a variety of other rendering problems. For instance, Scherzer et al. reuse past information to reduce the computation cost of shadow mapping [35]. Nehab et al. developed a reverse reprojection-based caching scheme that enables pixel shaders to reuse calculations performed for visible surface points over time [30]. Asynchronous time warp reprojects past frames to the future frame to reduce the latency in VR applications [41]. Didyk et al. warp existing frames to increase frame rates for high-refresh-rate displays [11]. Yang et al. further increased frame rates via bidirectional scene reprojection [47]. Recently, Mueller et al. reported that it is possible to apply temporal shading reuse over extended periods of time for a significant portion of samples, and demonstrated this for real-time VR applications [29]. Like these methods, our work explores temporal rendering history to speed up rendering, focusing on Monte Carlo rendering algorithms. Our method learns to predict backward flows that allow for future frame synthesis without the need for hole filling. Our method also predicts a confidence map that can be used to identify unreliable pixels in the predicted future frame and optionally re-render them using an off-the-shelf ray tracing engine.
30
+
31
+ Our work is also related to deep video frame prediction methods from the computer vision community [7, 23, 24, 26, 33, 38, 42, 46, 50]. These methods employ a variety of deep neural network algorithms to learn to predict future frames from previous video frames. In particular, given its good performance in predicting future frames, our work adopts the neural network architecture of SDC-Net from Reda et al. [33] to estimate the backward flows. Unlike Reda et al., who use a neural network to estimate the optical flow between the previous frames, our method uses the optical flows from the rendering engine and those predicted by our future frame synthesis network. Since optical flows, even from the rendering engine, are not perfect, we further compute or predict a confidence map and feed it to the backward flow estimation network to improve the quality of the backward flows. Moreover, we further improve future frame prediction quality by estimating and compensating for the brightness change over time, and by predicting a confidence map to guide the rendering of unreliably predicted pixels.
32
+
33
+ Finally, in concurrent work, Guo et al. developed ExtraNet, which also extrapolates future frames to achieve low-latency rendering [14]. In addition to fully rendered previous frames, their method renders G-buffer data of the extrapolated frames as input, which allows their method to employ a lightweight network to render high-quality extrapolated frames. In contrast, our method does not need G-buffer data of the extrapolated frames and thus has lower memory consumption. However, without the G-buffer data of the extrapolated frames, our method sometimes cannot predict future frames at as high a quality as ExtraNet. Nevertheless, future frame prediction is necessarily error-prone even with the target G-buffer data. Therefore, our method also predicts an error mask that identifies difficult-to-predict pixels and allows a rendering engine to optionally render these pixels to ensure the quality of the final future frames.
34
+
35
+ ## 3 Ray-tracing Animation Datasets
36
+
37
+ ### 3.1 Rendering Engines
38
+
39
+ We use Unreal Engine 4 (UE4) to render our animation dataset. Since the path tracer in UE4 is not stable for production [40], we use its hybrid ray tracer called "Real-Time Ray Tracing" (RTRT). We train and test our future frame synthesis network using the animation sequences rendered by RTRT. To examine how well our network generalizes to examples generated by a pure path tracer, we also use Blender Cycles to render additional animation sequences and use them to test our network.
40
+
41
+ ### 3.2 Digital Assets
42
+
43
+ We purchased Unreal scene files from the UE Marketplace and used each of them as the background for an animation sequence. We obtained animations with characters from Mixamo and integrated them into the background scenes to generate various animation sequences. Specifically, we bought 20 background environments from the UE Marketplace. We separated them into three groups: 10 for training, 1 for validation, and the remaining 9 for testing. For each background environment, we randomly added animation characters. We then picked good viewpoints and created camera paths to follow the main animation character for each animation scene, following a recent method used to create the Creative Flow+ Dataset [36]. In this way, we could generate multiple animation sequences with different camera paths from the same animation scene. We took care to prevent the camera from going through animation characters. In total, we produced 118 videos for the training set, where each video has 461 frames. For the validation and testing sets, we followed the same approach but only generated one video for each animation scene. Our testing and validation sets contain 10 and 1 animation sequences, respectively. When rendering these animation sequences, we individually adjusted the number of samples per pixel to avoid noticeable noise in the results. Samples of our animation sequences are shown in Figure 2.
44
+
45
+ We also created a second testing set. Specifically, we used Blender Cycles to render 6 animation sequences using resources from the Blender Open Movies dataset [1] and the Nvidia ORCA dataset [25]. When rendering animation sequences from the Blender Open Movies and Nvidia ORCA datasets, we used 1000 and 2000 samples per pixel, respectively.
46
+
47
+ ### 3.3 Ground Truth Optical Flow
48
+
49
+ We followed Fan et al. [16, 17] and suggestions from the Unreal community [2] to compute the ground-truth optical flows between two consecutive animation frames. Specifically, we used the Unreal built-in optical flow tool to compute the optical flow of the still background scene induced by camera movements. We used texture coordinates to compute optical flows of moving objects. However, we were not able to compute the ground-truth optical flows for several scenarios, such as shadow regions and transparent or semi-transparent objects. Since shadows and semi-transparency are common in ray tracing renderings, we kept them in our dataset with no ground-truth optical flows for them.
50
+
51
+ ## 4 FUTURE FRAME SYNTHESIS
52
+
53
+ Given two consecutive frames ${I}_{1 : 2}$, our method aims to predict a sequence of future frames ${\widehat{I}}_{3 : t}$ frame by frame. For instance, we first predict ${\widehat{I}}_{3}$ from ${I}_{1}$ and ${I}_{2}$ and then predict ${\widehat{I}}_{4}$ from ${I}_{2}$ and ${\widehat{I}}_{3}$. Below we describe how our method predicts ${\widehat{I}}_{3}$. The other future frames are generated in the same way, with minor changes that are noted where relevant.
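The autoregressive rollout can be sketched as follows; `flow_net`, `conf_net`, and `backward_warp` are hypothetical stand-ins for the networks and warping operator described below (the brightness enhancement of Section 4.1 is omitted here for brevity), not the released implementation.

```python
def rollout(I1, I2, flow0, conf0, flow_net, conf_net, backward_warp, n_future=3):
    """Predict future frames one at a time, feeding each prediction back in.

    flow_net maps (I_prev, I_cur, flow, conf) to the backward flow for the
    next frame; conf_net shares the same input and returns the next
    confidence map. flow0/conf0 come from the rendering engine for step one.
    """
    I_prev, I_cur = I1, I2
    flow_in, conf_in = flow0, conf0
    preds = []
    for _ in range(n_future):
        flow_out = flow_net(I_prev, I_cur, flow_in, conf_in)   # backward flow
        conf_out = conf_net(I_prev, I_cur, flow_in, conf_in)   # confidence map
        pred = backward_warp(I_cur, flow_out)                  # initial frame
        preds.append((pred, conf_out))
        # Feed the prediction back in for the next step.
        I_prev, I_cur = I_cur, pred
        flow_in, conf_in = flow_out, conf_out
    return preds
```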
54
+
55
+ We use a deep neural network to predict ${\widehat{I}}_{3}$. As shown in Figure 3, our network takes as input two existing frames ${I}_{1}$ and ${I}_{2}$, the optical flow map ${f}_{2 \rightarrow 1}$ from ${I}_{2}$ to ${I}_{1}$, and the optical flow confidence map ${M}_{2}$. Following previous video frame interpolation and extrapolation papers [31, 33], our network outputs the backward flow from ${\widehat{I}}_{3}$ to ${I}_{2}$, denoted as ${f}_{3 \rightarrow 2}$, and then uses it to synthesize the future frame ${\widetilde{I}}_{3}$ from ${I}_{2}$ by backward warping. Such an approach tends to generate sharper frames than estimating the future frame directly. In this paper, we adopt the network architecture from SDC-Net [33] for backward optical flow estimation.
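Backward warping itself can be implemented with PyTorch's `grid_sample`; the following minimal sketch is ours, assuming the flow is given in pixels with x and y displacement channels.

```python
import torch
import torch.nn.functional as F

def backward_warp(src, flow):
    """Sample src (B,C,H,W) at locations given by a backward flow (B,2,H,W).

    flow[:, 0] holds x (horizontal) and flow[:, 1] y (vertical) displacements
    in pixels: output(p) = src(p + flow(p)), bilinearly interpolated.
    """
    b, _, h, w = src.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(src.device)   # (2,H,W)
    coords = base.unsqueeze(0) + flow                            # (B,2,H,W)
    # grid_sample expects coordinates normalized to [-1, 1].
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                         # (B,H,W,2)
    return F.grid_sample(src, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)
```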
56
+
57
+ There are necessarily errors in the predicted future frame ${\widehat{I}}_{3}$. For example, when the camera angle or camera location changes, content that is invisible in the previous frames becomes visible in the future frame; warping the previous frames cannot generate this disoccluded content. Significant view-dependent effects also pose challenges for future frame prediction. Therefore, we add another neural network that shares the same input as the backward flow estimation network to estimate a confidence map ${M}_{3}$, as illustrated in Figure 3. This confidence estimation network also shares the same network architecture as the backward flow estimation network, with an additional sigmoid layer at the end. Each element in this map indicates how reliably the corresponding optical flow in ${f}_{3 \rightarrow 2}$ can be used to estimate the pixel color for the future frame. This confidence map provides an optional step to improve the future frame quality by re-rendering unreliable pixels using the rendering engine in the system. In our experiments, we re-render those pixels with confidence values below a threshold $\lambda$.
58
+
59
+ ![01963e6c-a8b1-7c0f-a3bc-7294a2657e15_2_152_149_1498_609_0.jpg](images/01963e6c-a8b1-7c0f-a3bc-7294a2657e15_2_152_149_1498_609_0.jpg)
60
+
61
+ Figure 2: Samples of our ray tracing animation dataset.
62
+
63
+ Note that when estimating ${\widehat{I}}_{3}$, ${f}_{2 \rightarrow 1}$ is directly computed by the rendering engine. As discussed in Section 3, the optical flows from the rendering engine are not perfect in many scenarios. More importantly, even optical flows that correctly account for scene point motion do not lead to a perfect future frame; occlusion and significant view-dependent effects are two common reasons. Therefore, our method computes a confidence map for optical flows. In particular, the optical flow confidence map ${M}_{2}$ is computed by first backward warping ${I}_{1}$ to align with ${I}_{2}$ using ${f}_{2 \rightarrow 1}$ and then thresholding the error map against a constant $\omega$: if the error is smaller than $\omega$, we set the corresponding value in ${M}_{2}$ to 1, and otherwise to 0. The default value for $\omega$ is 0.04 in our paper, with pixel values normalized to the range $\left\lbrack {0,1}\right\rbrack$. When estimating the other future frames ${\widehat{I}}_{t}$ with $t > 3$, we use ${f}_{t \rightarrow t - 1}$ and ${M}_{t - 1}$, which are both outputs from the previous step of estimating ${\widehat{I}}_{t - 1}$.
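A sketch of this warp-error test, reusing the `backward_warp` helper sketched earlier; averaging the error over color channels is our assumption.

```python
def flow_confidence(I_prev, I_cur, flow_cur_to_prev, omega=0.04):
    """Binary confidence mask: 1 where warping I_prev explains I_cur well.

    Warps I_prev (B,C,H,W) toward I_cur with the backward flow, then
    thresholds the per-pixel error against omega (pixel values in [0, 1]).
    """
    warped = backward_warp(I_prev, flow_cur_to_prev)
    err = (warped - I_cur).abs().mean(dim=1, keepdim=True)  # channel-mean error
    return (err < omega).float()                            # (B,1,H,W) mask
```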
64
+
65
+ ### 4.1 Brightness Enhancement
66
+
67
+ Our method described above synthesizes a future frame from its immediate previous frame and thus implicitly assumes brightness constancy. Such an assumption, however, does not always hold. To address this problem, we employ a brightness enhancement network that estimates the brightness changes from the previous frames. Specifically, our method first warps ${I}_{1}$ to align with ${I}_{2}$ via backward warping and then calculates the brightness change map ${B}_{1,2}$ between them. We warp ${B}_{1,2}$ with the estimated optical flow to obtain an initial ${B}_{2,3}$. Our method then feeds the initial ${B}_{2,3}$, together with the initial future frame ${\widetilde{I}}_{3}$ (created by warping ${I}_{2}$ using the estimated optical flow ${f}_{3 \rightarrow 2}$), into the brightness enhancement network to estimate the brightness change map ${\widehat{B}}_{2,3}$. Our method finally adds ${\widehat{B}}_{2,3}$ to the initial future frame ${\widetilde{I}}_{3}$ to generate the enhanced future frame ${\bar{I}}_{3}$, as shown in Figure 3.
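Putting this step in code, under the assumption that the brightness change map is a simple per-pixel difference (the text does not pin down its exact definition); `brightness_net` is a hypothetical stand-in for the enhancement network, and `backward_warp` is the helper sketched earlier.

```python
import torch

def enhance_brightness(I1, I2, flow_2_to_1, flow_3_to_2, brightness_net):
    """Estimate and apply the brightness change for the predicted frame.

    Assumes B_{1,2} is the difference between I2 and I1 warped into frame 2;
    the warped B_{1,2} initializes B_{2,3}, which the network then refines.
    """
    B_12 = I2 - backward_warp(I1, flow_2_to_1)        # observed brightness change
    B_23_init = backward_warp(B_12, flow_3_to_2)      # carried forward to frame 3
    I3_tilde = backward_warp(I2, flow_3_to_2)         # initial future frame
    B_23_hat = brightness_net(torch.cat([B_23_init, I3_tilde], dim=1))
    return I3_tilde + B_23_hat                        # enhanced future frame
```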
68
+
69
+ Loss functions. We train our frame synthesis network in an end-to-end fashion by computing the losses from predicting three consecutive frames ${\widehat{I}}_{3 : 5}$ from two input frames ${I}_{1,2}$ as follows.
70
+
71
+ $$
72
+ \mathcal{L} = \sum_{t=3}^{5} \delta_t \mathcal{L}_t \tag{1}
73
+ $$
74
+
75
+ where ${\mathcal{L}}_{t}$ is the loss from predicting ${\widehat{I}}_{t}$, with ${\delta }_{3} = 1$, ${\delta }_{4} = {0.5}$, and ${\delta }_{5} = {0.25}$. ${\mathcal{L}}_{t}$ has the following three components.
76
+
77
+ $$
78
+ \mathcal{L}_t = \alpha \mathcal{L}_{t,\ell_1} + \beta \mathcal{L}_{t,m} + \gamma \mathcal{L}_{t,re} \tag{2}
79
+ $$
80
+
81
+ where ${\mathcal{L}}_{t,{\ell}_{1}}$ is the ${\ell }_{1}$ loss between the ground truth ${I}_{t}$ and the synthesized future frame ${\widehat{I}}_{t}$; ${\mathcal{L}}_{t, m}$ is the binary cross-entropy loss between the predicted confidence mask ${M}_{t}$ and the ground-truth confidence mask of the enhanced ${\bar{I}}_{t}$, which is obtained by thresholding the error $\left( {\omega = {0.04}}\right)$ between the enhanced ${\bar{I}}_{t}$ and the ground truth ${I}_{t}$; and ${\mathcal{L}}_{t,{re}}$ is the percentage of pixels that need to be re-rendered. We empirically set $\alpha = {0.3}$, $\beta = {0.3}$, and $\gamma = {0.3}$.
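A sketch of these losses in PyTorch; representing $\mathcal{L}_{t,re}$ as the mean of the predicted re-render indicator (so it stays differentiable) is our reading, not a detail stated above.

```python
import torch
import torch.nn.functional as F

def step_loss(pred, gt, mask_pred, omega=0.04, alpha=0.3, beta=0.3, gamma=0.3):
    """L_t = alpha * l1 + beta * mask BCE + gamma * re-render fraction."""
    l1 = (pred - gt).abs().mean()
    with torch.no_grad():  # ground-truth mask: 1 where the prediction is good
        err = (pred - gt).abs().mean(dim=1, keepdim=True)
        mask_gt = (err < omega).float()
    bce = F.binary_cross_entropy(mask_pred, mask_gt)
    rerender_frac = (1.0 - mask_pred).mean()  # proxy for the rendered percentage
    return alpha * l1 + beta * bce + gamma * rerender_frac

def total_loss(step_losses):  # losses for t = 3, 4, 5 (Equation 1)
    deltas = [1.0, 0.5, 0.25]
    return sum(d * l for d, l in zip(deltas, step_losses))
```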
82
+
83
+ Implementation details. We randomly crop patches of size ${256} \times {256}$ from our training images. We use PyTorch to implement our future frame synthesis network, with a mini-batch size of 4. We use the Adam optimizer with a multi-step learning rate: the learning rate is ${10}^{-4}$ for the first 250 epochs and ${10}^{-5}$ afterwards. We train our networks for 700 epochs on one Nvidia Titan Xp. We use 2D convolution layers with a kernel size of 7 to extract 32-channel features from the stacked inputs. The encoders are composed of five 2D convolution layers with a stride of 2 and four 2D convolution layers with a stride of 1. The channel count increases from 32 to 512 through the encoder. For the decoder, we use six 2D deconvolution layers with a stride of 2. After the decoders in the three networks, we use a 2D convolution layer with a stride of 1 and a kernel size of 3 to predict the optical flows, masks, and brightness enhancement.
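The learning-rate schedule maps directly onto PyTorch's `MultiStepLR`; a sketch, with the model and training step as placeholders.

```python
import torch

model = torch.nn.Conv2d(3, 3, 3)  # placeholder for the synthesis networks
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Drop the learning rate from 1e-4 to 1e-5 after epoch 250, then train to 700.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[250], gamma=0.1)
for epoch in range(700):
    # ... one epoch over randomly cropped 256x256 patches goes here ...
    scheduler.step()
```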
84
+
85
+ ## 5 EXPERIMENTS
86
+
87
+ We evaluate our method by comparing to baseline reprojection methods and state-of-the-art video frame prediction methods. We also conduct ablation studies to evaluate individual components of our method. In our experiments, we train our future frame synthesis network using the training set of our Unreal animation dataset. We test our method on the corresponding testing set (UE4). As our Unreal dataset was rendered using its hybrid rendering engine, we further test our trained network on our Cycles dataset (Cycles) that was rendered using a ray tracing engine as discussed in Section 3.
88
+
89
+ ![01963e6c-a8b1-7c0f-a3bc-7294a2657e15_3_151_147_1497_738_0.jpg](images/01963e6c-a8b1-7c0f-a3bc-7294a2657e15_3_151_147_1497_738_0.jpg)
90
+
91
+ Figure 3: Our future frame synthesis framework.
92
+
93
+ ### 5.1 Comparisons
94
+
95
+ Reprojection methods. We first compare our method to a baseline reprojection method that warps the current frame ${I}_{2}$ to the future frame ${I}_{t}$ using forward warping. For such a baseline approach, we first obtain the optical flow from ${I}_{2}$ to ${I}_{t}$, denoted as ${\mathbf{f}}_{2 \rightarrow t}$. Assuming linear pixel motion, ${\mathbf{f}}_{2 \rightarrow t}$ can be computed as follows.
96
+
97
+ $$
98
+ \mathbf{f}_{2 \rightarrow t} = (t - 2) \, \mathbf{f}_{2 \rightarrow 1} \tag{3}
99
+ $$
100
+
101
+ where ${\mathbf{f}}_{2 \rightarrow 1}$ is the ground-truth optical flow computed by the rendering engine. We then forward warp ${I}_{2}$ to a future frame ${I}_{t}$. Multiple pixels can be forwarded to the same target pixel; we blend these pixels in two ways. One is to choose the pixel that is closest to the camera, and the other is to blend the pixels using weights computed as the inverse of their depth values [5]. We denote these as reproj-nn and reproj-blend, respectively, in this section. In addition, forward warping leads to holes in the future frame; we fill these holes using ground-truth pixels from the rendering engine.
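A simplified sketch (ours) of the nearest-depth variant of this baseline; real code would vectorize the inner loops.

```python
import torch

def forward_splat_nn(src, depth, flow_fwd):
    """Nearest-depth forward warping (the reproj-nn baseline, simplified).

    src: (C,H,W) colors, depth: (H,W), flow_fwd: (2,H,W) forward flow in
    pixels (x displacement first). Each source pixel is splatted to its
    rounded target location; when several land on the same target, the one
    closest to the camera wins. Unfilled pixels stay zero (holes).
    """
    c, h, w = src.shape
    out = torch.zeros_like(src)
    zbuf = torch.full((h, w), float("inf"), device=src.device)
    tx = (torch.arange(w).repeat(h, 1) + flow_fwd[0].round().long()).clamp(0, w - 1)
    ty = (torch.arange(h).unsqueeze(1).repeat(1, w) + flow_fwd[1].round().long()).clamp(0, h - 1)
    for y in range(h):            # plain loops for clarity
        for x in range(w):
            u, v = tx[y, x], ty[y, x]
            if depth[y, x] < zbuf[v, u]:
                zbuf[v, u] = depth[y, x]
                out[:, v, u] = src[:, y, x]
    return out
```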
102
+
103
+ As described in Section 4, we use a threshold $\lambda$ to select a subset of pixels to re-render using the rendering engine. Specifically, if the value in the predicted confidence map is smaller than $\lambda$, we re-render that pixel. Therefore, as we increase the $\lambda$ value, more pixels are re-rendered, as shown in Figure 4. As also shown in the third column of this figure, the mask prediction accuracy of our method improves as we increase the $\lambda$ value from 0.1 to 0.4. This is because with a small $\lambda$ value like 0.1, our method selects only a small number of unreliably predicted pixels to re-render while leaving many more unfixed; as we increase the $\lambda$ value, more of those bad pixels are selected for fixing. This is also related to the fact that when training our network, we use $\lambda = {0.4}$ to compute the mask loss in Equation 2. With $\lambda = {0.4}$, our method needs to re-render around 12.5% of pixels for Unreal testing examples (UE4) and 10% for Cycles testing examples, while with $\lambda = {0.2}$, our method needs to re-render less than 4.0% for Unreal testing examples and 3.0% for Cycles testing examples. Many of the visual examples in this paper are rendered with $\lambda = {0.2}$.
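The selection rule itself is a one-line threshold; this small helper (ours) also reports the fraction of pixels sent back to the renderer.

```python
def rendering_map(confidence, lam=0.2):
    """Flag pixels whose predicted confidence falls below lambda.

    confidence: (B,1,H,W) tensor in [0, 1]. Returns the boolean re-render
    map and the fraction of pixels it selects.
    """
    rerender = confidence < lam
    return rerender, rerender.float().mean().item()
```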
104
+
105
+ Table 1: Comparison with video frame prediction methods. To ensure a fair comparison, we only train our network to predict one future frame using the ${l}_{1}$ loss and generate the prediction results without re-rendering unreliable pixels in this test.
106
+
107
+ <table><tr><td rowspan="2">Method</td><td colspan="2">UE4</td><td colspan="2">Cycles</td></tr><tr><td>PSNR</td><td>LPIPS</td><td>PSNR</td><td>LPIPS</td></tr><tr><td>SDC2D [33]</td><td>27.907</td><td>0.0561</td><td>31.065</td><td>0.0491</td></tr><tr><td>SDC2D-GTflow [33]</td><td>28.030</td><td>0.0573</td><td>31.033</td><td>0.0512</td></tr><tr><td>MCNet [42]</td><td>23.401</td><td>0.2363</td><td>24.943</td><td>0.2386</td></tr><tr><td>VoxelFlow [50]</td><td>25.195</td><td>0.0898</td><td>28.733</td><td>0.0896</td></tr><tr><td>ImprovedVRNN [7]</td><td>28.039</td><td>0.1241</td><td>31.009</td><td>0.1055</td></tr><tr><td>Ours</td><td>28.618</td><td>0.0557</td><td>31.534</td><td>0.0520</td></tr></table>
108
+
109
+ As we would expect, the quality of our future frame synthesis method increases as the percentage of re-rendered pixels rises. With a similar amount of re-rendered pixels $\left( {\lambda = {0.2}}\right)$, our method significantly outperforms the above baseline reprojection approaches in terms of both PSNR ($> {1.5}\mathrm{\;{dB}}$) and LPIPS ($< {0.06}$) [49]. These results are consistent on both the Unreal dataset and the Cycles dataset. As shown in Figure 5, reproj-nn tends to generate results with aliasing artifacts, while the results from reproj-blend suffer from ghosting artifacts. In contrast, our method predicts higher-quality future frames.
110
+
111
+ Video frame prediction. To conduct a fair comparison to state-of-the-art video frame prediction approaches, we use our future frame synthesis results without replacing pixels according to the predicted confidence map. For MCNet [42], VoxelFlow [50], and ImprovedVRNN [7], we use their official code. For SDC2D [33], we used the code released by [48], which is slightly different from the original SDC-Net in that SDC2D only estimates the backward flows for future frame prediction without estimating the spatially-varying kernels, and uses 2D convolutions instead of 3D convolutions. To examine the effect of starting future frame synthesis with the optical flows generated by the rendering engine, we also extended the original SDC2D method by using the rendered optical flows instead of the optical flows estimated from the input frames; we denote this version SDC2D-GTflow. Note that the rendered optical flows are only used to predict the first future frame in both our method and SDC2D-GTflow. We train all these methods using our training set and validation set. As reported in Table 1, our method achieves better quantitative results than these video frame prediction methods by a large margin $\left( { > {0.5}\mathrm{\;{dB}}}\right)$. Compared to those video frame prediction methods, our method also generates qualitatively better results, as shown in Figure 6.
112
+
113
+ ![01963e6c-a8b1-7c0f-a3bc-7294a2657e15_4_141_102_1509_877_0.jpg](images/01963e6c-a8b1-7c0f-a3bc-7294a2657e15_4_141_102_1509_877_0.jpg)
114
+
115
+ Figure 4: Test results on both UE4 testset (a) and Cycles testset (b). $\lambda$ is the threshold used to select unreliably predicted pixels according to the predicted confidence mask. A larger $\lambda$ selects more pixels to re-render. These results show that our method produces higher-quality renderings in terms of both PSNR and LPIPS while re-rendering a similar amount of pixels to the two baseline reprojection methods.
116
+
117
+ ![01963e6c-a8b1-7c0f-a3bc-7294a2657e15_4_142_1073_1513_1015_0.jpg](images/01963e6c-a8b1-7c0f-a3bc-7294a2657e15_4_142_1073_1513_1015_0.jpg)
118
+
119
+ Figure 5: Examples of predicting three continuous future frames. We select $\lambda = {0.2}$ for our method, which requires similar percentages of re-rendered pixels to other methods.
120
+
121
+ ![01963e6c-a8b1-7c0f-a3bc-7294a2657e15_5_138_127_1521_2011_0.jpg](images/01963e6c-a8b1-7c0f-a3bc-7294a2657e15_5_138_127_1521_2011_0.jpg)
122
+
123
+ Figure 6: Visual comparisons with video frame prediction methods. The top two examples are from UE4. The bottom two examples are from Cycles. To ensure a fair comparison, we did not re-render pixels in this test when producing our prediction results.
124
+
125
+ ![01963e6c-a8b1-7c0f-a3bc-7294a2657e15_6_137_135_1511_796_0.jpg](images/01963e6c-a8b1-7c0f-a3bc-7294a2657e15_6_137_135_1511_796_0.jpg)
126
+
127
+ Figure 7: Ablation studies on our UE4 and Cycles testing sets. "-en" denotes our results without brightness enhancement and "-en-mask" denotes our results without brightness enhancement and without confidence mask.
128
+
129
+ ### 5.2 Ablation Study
130
+
131
+ We examine the effect of two components on our future frame synthesis quality. The first is our brightness enhancement network that compensates for the brightness change. The second is the optical flow confidence mask, which is used as part of the input to the backward flow estimation network. When estimating the first future frame, this map is calculated by assessing the quality of the optical flow generated by the rendering engine; when predicting further future frames, it is predicted using the confidence mask estimation network as described in Section 4. In our ablation studies, we compare three versions of our method: our full method (ours), our method without brightness enhancement (-en), and our method without brightness enhancement and without inputting the confidence map to the backward flow estimation network (-en-mask). As shown in Figure 7, both components help our method predict future frames.
132
+
133
+ ### 5.3 Discussions
134
+
135
+ We observed that our future frame synthesis method still cannot handle several challenging scenarios. As shown in Figure 8 (a), our method, as well as the other methods, fails to preserve fine structure (the silver thread). Our method also cannot deal with significant view-dependent effects; Figure 8 (b) shows such an example, where the reflection in the mirror is not predicted accurately in the area indicated by the orange arrow.
136
+
137
+ It takes our PyTorch implementation about 0.02 seconds to predict a ${1024} \times {1024}$ frame using one Nvidia 3090 GPU. The reported duration includes all the stages of our method except running the rendering engine to replace the unreliable pixels with rendered pixels. The peak GPU memory is about ${5400}\mathrm{{MB}}$ .
138
+
139
+ In the future, we would like to extend our work by utilizing G-buffer data like many other recent rendering papers $\left\lbrack {8,{13},{14},{18},{19}}\right\rbrack$ . We hope to overcome existing artifacts by adopting a more powerful neural network. We would also like to optimize our network architectures to further speed it up.
140
+
141
+ #### 5.3.1 Use cases
142
+
143
+ We envision two different usage scenarios: i) deploying the network on the same system to predict subsequent frames, reducing the rendering compute needed, and ii) usages such as cloud gaming, where we may need to predict subsequent frames due to network inconsistencies or frame drops. When deploying the network concurrently with the rendering, the confidence mask can be used to selectively re-render pixels. Depending on the dataset and content, the rendering engine will need to re-render an order of magnitude fewer pixels compared to re-rendering the whole frame. For high-quality rendering such as ray- or path-traced content, neural frame prediction could be applied to extrapolate the majority of pixels in the frame, and the limited ray-tracing budget available could be focused on the pixels determined by the confidence mask. Even for real-time content such as rasterized games, this method could be applied after the geometry processing step to reduce the number of pixels that need to be shaded in the pixel shader stage of the pipeline. Given the limited dependency on input buffers (two past RGB frames), compared to the concurrent work by Guo et al. [14], our system needs a smaller memory footprint, with a trade-off in output quality.
144
+
145
+ With gaming and interactive content increasingly moving to cloud-based delivery, we envision neural frame extrapolation to be helpful in delivering a compelling user experience across varying compute and network conditions. For example, in cloud gaming scenarios, the game is rendered on a server and streamed to the client over public networks, while the user input is delivered to the server to render the next frame. Given limited bandwidth and network congestion, dropped or stalled frames can lead to game stutters and an unplayable experience. As most client systems have the capability to run deep neural networks, it is possible to use the neural network engine in the client system to infer the future frame using our approach, while its rendering engine uses the confidence map to re-render a limited portion of the frame. One drawback of such a method is that an instance of the game or rendering content would have to run simultaneously on both the client and the cloud (although the client only renders a small number of difficult-to-predict pixels), with any updates being reflected across both. An alternative approach would be to use the extrapolated frames directly without re-rendering the lower-confidence pixels, with the next rendered frame following shortly thereafter. User studies to gauge the effect of re-rendering versus utilizing the extrapolated pixels (i.e., not using the confidence map) are future work.
146
+
147
+ ![01963e6c-a8b1-7c0f-a3bc-7294a2657e15_7_149_142_1504_511_0.jpg](images/01963e6c-a8b1-7c0f-a3bc-7294a2657e15_7_149_142_1504_511_0.jpg)
148
+
149
+ Figure 8: Failure examples for future frame prediction.
150
+
151
+ ## 6 CONCLUSION
152
+
153
+ In this paper, we described a method to speed up Monte Carlo rendering by framing it as a frame prediction problem. To obtain high-quality results, we designed a neural network that not only predicts backward flows to warp previous frames into future frames, but also predicts confidence masks so that only pixels that are difficult for frame prediction need to be re-rendered. We also proposed a brightness enhancement step to strengthen our predictions.
154
+
155
+ ## REFERENCES
156
+
157
+ [1] Blender Open Movies Projects. https://www.blender.org/about/projects/. Accessed: 2021-08-21.
158
+
159
+ [2] Unreal Engine forums. https://forums.unrealengine.com/t/scenetexture-velocity-data-integrity-issues/222812. Accessed: 2021-08-21.
160
+
161
+ [3] S. J. Adelson and L. F. Hodges. Generating exact ray-traced animation frames by reprojection. IEEE Comput. Graph. Appl., 15(3):43-52, May 1995.
162
+
163
+ [4] S. Badt. Two algorithms for taking advantage of temporal coherence in ray tracing. The Visual Computer, 4(3):123-132, May 1988.
164
+
165
+ [5] W. Bao, W.-S. Lai, C. Ma, X. Zhang, Z. Gao, and M.-H. Yang. Depth-aware video frame interpolation. In IEEE Conference on Computer Vision and Pattern Recognition, 2019.
166
+
167
+ [6] M. R. Bolin and G. W. Meyer. A perceptually based adaptive sampling algorithm. In Proceedings of the 25th annual conference on Computer graphics and interactive techniques, pp. 299-309. ACM, 1998.
168
+
169
+ [7] L. Castrejon, N. Ballas, and A. Courville. Improved conditional VRNNs for video prediction. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
170
+
171
+ [8] C. R. A. Chaitanya, A. S. Kaplanyan, C. Schied, M. Salvi, A. Lefohn, D. Nowrouzezahrai, and T. Aila. Interactive reconstruction of monte carlo image sequences using a recurrent denoising autoencoder. ACM Transactions on Graphics (TOG), 36(4):98, 2017.
172
+
173
+ [9] R. L. Cook, T. Porter, and L. Carpenter. Distributed ray tracing. In ACM SIGGRAPH computer graphics, vol. 18, pp. 137-145. ACM, 1984.
174
+
175
+ [10] A. Dayal, C. Woolley, B. Watson, and D. Luebke. Adaptive frameless rendering. In ACM SIGGRAPH 2005 Courses, SIGGRAPH '05, p. 24-es, 2005.
176
+
177
+ [11] P. Didyk, T. Ritschel, E. Eisemann, K. Myszkowski, and H.-P. Seidel. Adaptive image-space stereo view synthesis. In Vision, Modeling and Visualization Workshop, pp. 299-306. Siegen, Germany, 2010.
178
+
179
+ [12] K. Egan, Y.-T. Tseng, N. Holzschuch, F. Durand, and R. Ramamoorthi. Frequency analysis and sheared reconstruction for rendering motion blur. In ACM Transactions on Graphics (TOG), vol. 28, p. 93. ACM, 2009.
180
+
181
+ [13] M. Gharbi, T.-M. Li, M. Aittala, J. Lehtinen, and F. Durand. Sample-based monte carlo denoising using a kernel-splatting network. ACM Transactions on Graphics (TOG), 38(4):1-12, 2019.
182
+
183
+ [14] J. Guo, X. Fu, L. Lin, H. Ma, Y. Guo, S. E. Liu, and L.-Q. Yan. Extranet: Real-time extrapolated rendering for low-latency temporal supersampling. ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2021), 2021.
184
+
185
+ [15] H. W. Jensen. Realistic image synthesis using photon mapping. AK Peters/CRC Press, 2001.
186
+
187
+ [16] F. Jiang. Unreal optical flow demo, Aug. 2018. doi: 10.5281/zenodo.1345482
188
+
189
+ [17] F. Jiang and Q. Hao. Pavilion: Bridging photo-realism and robotics. In 2019 IEEE International Conference on Robotics and Automation (ICRA), May 2019.
190
+
191
+ [18] N. K. Kalantari, S. Bako, and P. Sen. A machine learning approach for filtering monte carlo noise. ACM Trans. Graph., 34(4):122-1, 2015.
192
+
193
+ [19] A. Kuznetsov, N. K. Kalantari, and R. Ramamoorthi. Deep adaptive sampling for low sample count rendering. In Computer Graphics Forum, vol. 37, pp. 35-44. Wiley Online Library, 2018.
194
+
195
+ [20] J. Lehtinen, T. Aila, J. Chen, S. Laine, and F. Durand. Temporal light field reconstruction for rendering distribution effects. ACM Trans. Graph., 30(4), July 2011.
196
+
197
+ [21] J. Lehtinen, T. Aila, S. Laine, and F. Durand. Reconstructing the indirect light field for global illumination. ACM Trans. Graph., 31(4), July 2012.
198
+
199
+ [22] T.-M. Li, Y.-T. Wu, and Y.-Y. Chuang. Sure-based optimization for adaptive sampling and reconstruction. ACM Transactions on Graphics (TOG), 31(6):194, 2012.
200
+
201
+ [23] X. Liang, L. Lee, W. Dai, and E. P. Xing. Dual motion gan for future-flow embedded video prediction. 2017 IEEE International Conference on Computer Vision (ICCV), Oct 2017.
202
+
203
+ [24] W. Lotter, G. Kreiman, and D. Cox. Deep predictive coding networks for video prediction and unsupervised learning, 2016.
204
+
205
+ [25] Amazon Lumberyard. Amazon Lumberyard Bistro, Open Research Content Archive (ORCA), July 2017. http://developer.nvidia.com/orca/amazon-lumberyard-bistro.
206
+
207
+ [26] M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error, 2015.
208
+
209
+ [27] M. Meyer and J. Anderson. Statistical acceleration for animated global illumination. ACM Transactions on Graphics (TOG), 25(3):1075-1080, 2006.
210
+
211
+ [28] B. Moon, S. McDonagh, K. Mitchell, and M. Gross. Adaptive polynomial rendering. ACM Transactions on Graphics (TOG), 35(4):40, 2016.
214
+
215
+ [29] J. H. Mueller, T. Neff, P. Voglreiter, M. Steinberger, and D. Schmalstieg. Temporally adaptive shading reuse for real-time rendering and virtual reality. ACM Trans. Graph., 40(2), Apr. 2021.
216
+
217
+ [30] D. Nehab, P. V. Sander, J. Lawrence, N. Tatarchuk, and J. R. Isidoro. Accelerating real-time shading with reverse reprojection caching. In Proceedings of the 22nd ACM SIGGRAPH/EUROGRAPHICS Symposium on Graphics Hardware, p. 25-35. Eurographics Association, Goslar, DEU, 2007.
218
+
219
+ [31] S. Niklaus, L. Mai, and F. Liu. Video frame interpolation via adaptive convolution. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
220
+
221
+ [32] R. S. Overbeck, C. Donner, and R. Ramamoorthi. Adaptive wavelet rendering. ACM Trans. Graph., 28(5):140, 2009.
222
+
223
+ [33] F. A. Reda, G. Liu, K. J. Shih, R. Kirby, J. Barker, D. Tarjan, A. Tao, and B. Catanzaro. Sdc-net: Video prediction using spatially-displaced convolution. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 718-733, 2018.
224
+
225
+ [34] F. Rousselle, C. Knaus, and M. Zwicker. Adaptive sampling and reconstruction using greedy error minimization. In ACM Transactions on Graphics (TOG), vol. 30, p. 159. ACM, 2011.
226
+
227
+ [35] D. Scherzer, S. Jeschke, and M. Wimmer. Pixel-correct shadow maps with temporal reprojection and shadow test confidence. In Proceedings of the 18th Eurographics Conference on Rendering Techniques, EGSR'07, p. 45-50. Eurographics Association, Goslar, DEU, 2007.
228
+
229
+ [36] M. Shugrina, Z. Liang, A. Kar, J. Li, A. Singh, K. Singh, and S. Fidler. Creative flow+ dataset. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
230
+
231
+ [37] M. Simmons and C. H. Séquin. Tapestry: A dynamic mesh-based display representation for interactive rendering. In Proceedings of the Eurographics Workshop on Rendering Techniques 2000, p. 329-340, 2000.
232
+
233
+ [38] N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using lstms, 2015.
234
+
235
+ [39] P. Tole, F. Pellacini, B. Walter, and D. P. Greenberg. Interactive global illumination in dynamic scenes. SIGGRAPH '02, p. 537-546, 2002.
236
+
237
+ [40] Unreal Engine. Ray tracing documentation. https://docs.unrealengine.com/en-US/RenderingAndGraphics/RayTracing/index.html, 2021. Accessed: 2021-05-09.
238
+
239
+ [41] J. M. P. van Waveren. The asynchronous time warp for virtual reality on consumer hardware. In Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology, VRST '16, p. 37-46. New York, NY, USA, 2016.
240
+
241
+ [42] R. Villegas, J. Yang, S. Hong, X. Lin, and H. Lee. Decomposing motion and content for natural video sequence prediction, 2017.
242
+
243
+ [43] B. Walter, A. Arbree, K. Bala, and D. P. Greenberg. Multidimensional lightcuts. ACM Transactions on Graphics (TOG), 25(3):1081-1088, 2006.
244
+
245
+ [44] B. Walter, G. Drettakis, and S. Parker. Interactive Rendering using the Render Cache. In Eurographics Workshop on Rendering. The Eurographics Association, 1999.
246
+
247
+ [45] G. Ward and M. Simmons. The holodeck ray cache: An interactive rendering system for global illumination in nondiffuse environments. ACM Trans. Graph., 18(4):361-368, Oct. 1999.
248
+
249
+ [46] H. Wei, X. Yin, and P. Lin. Novel video prediction for large-scale scene using optical flow, 2018.
250
+
251
+ [47] L. Yang, Y.-C. Tse, P. V. Sander, J. Lawrence, D. Nehab, H. Hoppe, and C. L. Wilkins. Image-based bidirectional scene reprojection. ACM Trans. Graph., 30(6):1-10, Dec. 2011.
252
+
253
+ [48] Y. Zhu*, K. Sapra*, F. A. Reda, K. J. Shih, S. Newsam, A. Tao, and B. Catanzaro. Improving semantic segmentation via video propagation and label relaxation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
254
+
255
+ [49] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang. The unreasonable effectiveness of deep features as a perceptual metric, 2018.
256
+
257
+ [50] Z. Liu, R. Yeh, X. Tang, Y. Liu, and A. Agarwala. Video frame synthesis using deep voxel flow. In Proceedings of International Conference on Computer Vision (ICCV), October 2017.
258
+
259
+ [51] M. Zwicker, W. Jarosz, J. Lehtinen, B. Moon, R. Ramamoorthi, F. Rousselle, P. Sen, C. Soler, and S.-E. Yoon. Recent advances in adaptive sampling and reconstruction for Monte Carlo rendering. Computer Graphics Forum (Proceedings of Eurographics - State of the Art Reports), 34(2):667-681, May 2015.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/HI9zjeYVaG9/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,178 @@
1
+ § FUTURE FRAME SYNTHESIS FOR FAST MONTE CARLO RENDERING
2
+
3
+ Category: Research
4
+
5
+ < g r a p h i c s >
6
+
7
+ Figure 1: Given two input frames ${I}_{1}$ and ${I}_{2}$ together with the optical flow between them ${\mathbf{f}}_{2 \rightarrow 1}$ and the flow confidence map ${M}_{2}$ , our method first estimates the backward flow ${\mathbf{f}}_{3 \rightarrow 2}$ and uses it to generate an initial future frame ${\widetilde{I}}_{3}$ . Our method then predicts a brightness change map to compensate for the pixel-wise brightness change over time. Finally, our method predicts a backward flow confidence map and uses it to calculate a rendering map that optionally selects unreliably predicted pixels to be re-rendered using an off-the-shelf rendering engine.
8
+
9
+ § ABSTRACT
10
+
11
+ Monte Carlo rendering algorithms can generate high-quality images; however, they need to sample many rays per pixel and thus are computationally expensive. In this paper, we present a method to speed up Monte Carlo rendering by significantly reducing the number of pixels that we need to sample rays for. Specifically, we develop a neural future frame synthesis method that quickly predicts future frames from frames that have already been rendered. In each future frame, there are pixels that cannot be predicted correctly from previous frames in challenging scenarios, such as quick camera motion, object motion, and large occlusions. Therefore, our method estimates a mask together with each future frame that indicates the subset of pixels that need ray samples to correct the prediction results. To train and evaluate our neural future frame synthesis method, we develop a large ray-tracing animation dataset. Our experiments show that our method can significantly reduce the number of pixels that we need to render while maintaining high rendering quality.
12
+
13
+ Index Terms: Computing methodologies-Computer graphics-Ray tracing
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ Monte Carlo ray tracing algorithms are widely used to generate photorealistic images for many applications, such as computer games, films, and simulations. However, these algorithms are time-consuming as they need to sample many rays to shade each pixel $\left\lbrack {9,{51}}\right\rbrack$ .
18
+
19
+ A great amount of effort has been devoted to fast Monte Carlo rendering. A popular category of approaches is to only cast a small number of rays for each pixel and then reconstruct a high-quality rendering from these few samples by denoising $\left\lbrack {8,{13},{18},{19}}\right\rbrack$ . Another category of approaches is to first reproject rays sampled when rendering previous frames to the current frame and use them to reconstruct the current frame $\left\lbrack {3,4}\right\rbrack$ . These temporal reprojection methods have difficulty in rendering view-dependent effects and filling pixels that are occluded in the previous frames.
20
+
21
+ This paper presents a future frame synthesis method for fast Monte Carlo rendering. Our method belongs to the category of reprojection algorithms and improves existing algorithms by exploiting deep neural networks to synthesize a future frame from frames that have already been rendered. Existing reprojection methods use forward warping to splat samples / pixel colors from previous frames to the future frame, which often suffers from artifacts such as holes. To achieve higher rendering quality, our method uses backward warping to synthesize the future frame from previous frames. Backward warping, however, requires optical flow from the future frame to the previous frame(s), which cannot be calculated without the future frame or some of its intermediate G-buffer data. To address this problem, we train a deep neural network to learn to predict the backward flow of future frames. As the color constancy assumption for reprojection algorithms may not always hold across neighboring frames, we employ a second neural network to predict the brightness changes from previous frames, which are then added to the synthesized future frame. Furthermore, our future frame synthesis networks may generate errors when facing challenging scenarios, such as large occlusions and significant view-dependent effects. Therefore, as an optional step, our method uses a mask neural network to generate a confidence map that indicates unreliable pixel estimates and re-renders these pixels using ray tracing.
22
+
23
+ As there is no publicly available large-scale ray-tracing animation dataset, we built such a dataset by collecting or purchasing model and scene files and rendering them using the Unreal Engine or Blender Cycles. Our dataset contains many animation sequences with a variety of animation characters, background scenes and camera motions. This dataset allows us to train and test our neural future frame synthesis method. Our experiments show that our method is able to drastically reduce the number of rays that need to be sampled to produce frames while maintaining high rendering quality.
24
+
25
+ § 2 RELATED WORK
26
+
27
+ Fast Monte Carlo rendering has a rich literature history. A popular approach is to reduce the total number of sampling rays that need to be cast to generate an image [51]. A large number of algorithms have been developed that only sample a small number of rays per pixel and then perform denoising to reconstruct high-quality renderings [6, 12, 15, 20-22, 27, 32, 34, 43]. The recent learning-based denoising methods, especially those that use deep neural networks, can generate very high-quality renderings with only a small number of samples [8, 13, 18, 19, 28].
28
+
29
+ Another approach is to reuse samples from previous frames by reprojecting the samples to future frames [3, 4, 44, 45]. These reprojected samples are often used together with new samples to reconstruct future frames [10, 37, 39]. Besides ray tracing applications, reusing temporal rendering information has also been widely explored for a variety of other rendering problems. For instance, Scherzer et al. reuse past information to reduce the computation cost of shadow mapping [35]. Nehab et al. developed a reverse reprojection-based caching scheme that enables pixel shaders to reuse calculations performed for visible surface points over time [30]. Asynchronous time warp reprojects past frames to the future frame to reduce latency in VR applications [41]. Didyk et al. warp existing frames to increase frame rates for high-refresh-rate displays [11]. Yang et al. further increased frame rates via bidirectional scene reprojection [47]. Recently, Mueller et al. reported that temporal shading reuse can be applied over extended periods of time for a significant portion of samples and demonstrated this for real-time VR applications [29]. Like these methods, our work also exploits temporal rendering history to speed up rendering and focuses on Monte Carlo rendering algorithms. Our method learns to predict backward flows that allow for future frame synthesis without the need for hole filling. Our method also predicts a confidence map that can be used to identify unreliable pixels in the predicted future frame and optionally re-render them using an off-the-shelf ray tracing engine.
30
+
31
+ Our work is also related to deep video frame prediction methods from the computer vision community [7, 23, 24, 26, 33, 38, 42, 46, 50]. These methods employ a variety of deep neural network algorithms to learn to predict future frames from previous video frames. In particular, given its good performance in predicting future frames, our work adopts the neural network architecture of SDC-Net from Reda et al. [33] to estimate the backward flows. Unlike Reda et al., who use a neural network to estimate the optical flow between the previous frames, our method uses the optical flows from the rendering engine together with those predicted by our future frame synthesis network. Since optical flows, even from the rendering engine, are not perfect, we further compute or predict a confidence map and feed it to the backward flow estimation network to improve the quality of the backward flows. Moreover, we further improve future frame prediction quality by estimating and compensating for the brightness change over time and by predicting a confidence map to guide the rendering of unreliably predicted pixels.
32
+
33
+ Finally, in concurrent work, Guo et al. developed ExtraNet, which also extrapolates future frames to achieve low-latency rendering [14]. In addition to fully rendered previous frames, their method renders G-buffer data of the extrapolated frames as input, which allows a lightweight network to produce high-quality extrapolated frames. In contrast, our method does not need G-buffer data of the extrapolated frames and thus consumes less memory. However, without that G-buffer data, our method sometimes cannot predict future frames with as high quality as ExtraNet. Nevertheless, future frame prediction is necessarily error-prone even with the target G-buffer data. Therefore, our method also predicts an error mask that identifies difficult-to-predict pixels and allows a rendering engine to optionally render these pixels to ensure the quality of the final future frames.
34
+
35
+ § 3 RAY-TRACING ANIMATION DATASETS
36
+
37
+ § 3.1 RENDERING ENGINES
38
+
39
+ We use Unreal Engine 4 (UE4) to render our animation dataset. Since the path tracer in UE4 is not stable for production [40], we use its hybrid ray tracer called "Real-Time Ray Tracing" (RTRT). We train and test our future frame synthesis network using the animation sequences rendered by RTRT. To examine how well our network generalizes to examples generated by a pure path tracer, we also use Blender Cycles to render additional animation sequences and use them to test our network.
40
+
41
+ § 3.2 DIGITAL ASSETS
42
+
43
+ We purchased Unreal scene files from the UE Marketplace and used each of them as the background for an animation sequence. We obtained animations with characters from Mixamo and integrated them into the background scenes to generate various animation sequences. Specifically, we bought 20 background environments from the UE Marketplace and separated them into three groups: 10 for training, 1 for validation, and the remaining 9 for testing. For each background environment, we randomly added animation characters. We then picked good viewpoints and created camera paths to follow the main animation character for each animation scene, following a recent method used to create the Creative Flow+ Dataset [36]. In this way, we could generate multiple animation sequences with different camera paths from the same animation scene. We took care to prevent the camera from passing through animation characters. In total, we produced 118 videos for the training set, where each video has 461 frames. For the validation and testing sets, we followed the same approach but only generated one video for each animation scene; our testing and validation sets contain 10 and 1 animation sequences, respectively. When rendering these animation sequences, we individually adjusted the number of samples per pixel to avoid noticeable noise in the results. Samples of our animation sequences are shown in Figure 2.
44
+
45
+ We also created a second testing set. Specifically, we used Blender Cycles to render 6 animation sequences using resources from the Blender Open Movies dataset [1] and the Nvidia ORCA dataset [25]. When rendering the animation sequences from the Blender Open Movies and Nvidia ORCA datasets, we used 1000 and 2000 samples per pixel, respectively.
46
+
47
+ § 3.3 GROUND TRUTH OPTICAL FLOW
48
+
49
+ We followed Jiang et al. [16, 17] and the suggestions from the Unreal Community [2] to compute the ground-truth optical flows between two consecutive animation frames. Specifically, we used the Unreal built-in optical flow tool to compute the optical flow of the still background scene induced by camera movements, and we used texture coordinates to compute the optical flows of moving objects. However, we were not able to compute ground-truth optical flows for several scenarios, such as shadow regions and transparent or semi-transparent objects. Since shadows and semi-transparency are common in ray-traced renderings, we kept these cases in our dataset without ground-truth optical flows for them.
50
+
51
+ § 4 FUTURE FRAME SYNTHESIS
52
+
53
+ Given two consecutive frames ${I}_{1 : 2}$ , our method aims to predict a sequence of future frames ${\widehat{I}}_{3 : t}$ frame by frame. For instance, we first predict ${\widehat{I}}_{3}$ from ${I}_{1}$ and ${I}_{2}$ and then predict ${\widehat{I}}_{4}$ from ${I}_{2}$ and ${\widehat{I}}_{3}$ . Below we describe how our method predicts ${\widehat{I}}_{3}$ . The other future frames are generated in the same way with minor changes that will be noted in this paper.
54
+
55
+ We use a deep neural network to predict ${\widehat{I}}_{3}$ . As shown in Figure 3, our network takes as input two existing frames ${I}_{1}$ , and ${I}_{2}$ , the optical flow map ${f}_{2 \rightarrow 1}$ from ${I}_{2}$ to ${I}_{1}$ , and the optical flow confidence map ${M}_{2}$ . Following the previous video frame interpolation and extrapolation papers $\left\lbrack {{31},{33}}\right\rbrack$ , our network outputs the backward flow from ${\widehat{I}}_{3}$ to ${I}_{2}$ , denoted as ${f}_{3 \rightarrow 2}$ , and then uses it to synthesize the future frame ${\widetilde{I}}_{3}$ from ${I}_{2}$ by backward warping. Such an approach tends to generate sharper frames than estimating the future frame directly. In this paper, we adopt the network architecture from SDC-Net [33] for backward optical flow estimation.
56
+
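+ To make the warping step concrete, the snippet below is a minimal PyTorch sketch of backward warping (our own illustration, not the authors' released code; the pixel-unit flow convention and the grid_sample normalization details are our assumptions):
+
+ \begin{verbatim}
+ import torch
+ import torch.nn.functional as F
+
+ def backward_warp(I2, flow):
+     # I2: (B, 3, H, W) source frame; flow: (B, 2, H, W) backward flow
+     # f_{3->2} in pixels. Each output pixel samples I2 at its
+     # flow-displaced location.
+     B, _, H, W = I2.shape
+     ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W),
+                             indexing="ij")
+     grid = torch.stack((xs, ys), dim=0).float().to(I2.device)  # (2, H, W)
+     src = grid.unsqueeze(0) + flow           # sample locations in I2
+     # Normalize to [-1, 1] as required by grid_sample (x first, then y).
+     sx = 2.0 * src[:, 0] / (W - 1) - 1.0
+     sy = 2.0 * src[:, 1] / (H - 1) - 1.0
+     return F.grid_sample(I2, torch.stack((sx, sy), dim=-1),
+                          align_corners=True)
+ \end{verbatim}
+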
57
+ There are necessarily errors in the predicted future frame ${\widehat{I}}_{3}$ . For example, when the camera angle or camera location changes, content that is invisible in the previous frames becomes visible in the future frame; warping the previous frames cannot generate that disoccluded content. Significant view-dependent effects also pose challenges for future frame prediction. Therefore, we added another neural network that shares the same input as the backward flow estimation network to estimate a confidence map ${M}_{3}$ , as illustrated in Figure 3. This confidence estimation network also shares the same architecture as the backward flow estimation network, with an additional sigmoid layer at the end. Each element in this map indicates how reliably the corresponding optical flow in ${f}_{3 \rightarrow 2}$ can be used to estimate the pixel color of the future frame. The confidence map enables an optional step to improve the future frame quality by re-rendering unreliable pixels using the rendering engine in the system. In our experiments, we re-render the pixels whose confidence values are below a threshold $\lambda$ .
58
+
59
+ < g r a p h i c s >
60
+
61
+ Figure 2: Samples of our ray tracing animation dataset.
62
+
63
+ Note that when estimating ${\widehat{I}}_{3}$ , ${f}_{2 \rightarrow 1}$ is directly computed by the rendering engine. As discussed in Section 3, the optical flows from the rendering engine are not perfect in many scenarios. More importantly, even optical flows that correctly account for scene point motion do not lead to a perfect future frame; occlusion and significant view-dependent effects are two common reasons. Therefore, our method computes a confidence map for the optical flows. Specifically, the optical flow confidence map ${M}_{2}$ is computed by first backward-warping ${I}_{1}$ to align with ${I}_{2}$ using ${f}_{2 \rightarrow 1}$ and then thresholding the error map against a constant $\omega$ . If the error is smaller than $\omega$ , we set the corresponding value in ${M}_{2}$ to 1, otherwise to 0. The default value of $\omega$ is 0.04 in our paper, with pixel values normalized to the range $\left\lbrack {0,1}\right\rbrack$ . When estimating the other future frames ${\widehat{I}}_{t}$ with $t > 3$ , we use ${f}_{t \rightarrow t - 1}$ and ${M}_{t - 1}$ , which are both outputs from the previous step of estimating ${\widehat{I}}_{t - 1}$ .
64
+
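+ This computation takes only a few lines of code. The sketch below is our own minimal PyTorch illustration, reusing the backward-warping helper sketched above; the exact per-pixel error metric (mean absolute color error over channels) is our assumption, as the paper does not specify it:
+
+ \begin{verbatim}
+ def flow_confidence(I1, I2, flow_2to1, omega=0.04):
+     # Warp I1 toward I2 with the flow f_{2->1}, then mark pixels whose
+     # color error is below omega as reliable (1), otherwise 0.
+     I1_warped = backward_warp(I1, flow_2to1)
+     err = (I1_warped - I2).abs().mean(dim=1, keepdim=True)  # (B, 1, H, W)
+     return (err < omega).float()
+ \end{verbatim}
+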
65
+ § 4.1 BRIGHTNESS ENHANCEMENT
66
+
67
+ Our method described above synthesizes a future frame from its immediate previous frame and thus implicitly assumes brightness constancy. Such an assumption, however, does not always hold. To address this problem, we employ a brightness enhancement network that estimates the brightness changes from the previous frames. Specifically, our method first warps ${I}_{1}$ to align with ${I}_{2}$ via backward warping and then calculates the brightness change map ${B}_{1,2}$ between them. We warp ${B}_{1,2}$ with the estimated optical flow to obtain an initial ${B}_{2,3}$ . Our method then feeds the initial ${B}_{2,3}$ , together with the initial future frame ${\widetilde{I}}_{3}$ created by warping ${I}_{2}$ using the estimated optical flow ${f}_{3 \rightarrow 2}$ , into the brightness enhancement network to estimate the brightness change map ${\widehat{B}}_{2,3}$ . Our method finally adds ${\widehat{B}}_{2,3}$ to the initial future frame ${\widetilde{I}}_{3}$ to generate the enhanced future frame ${\bar{I}}_{3}$ , as shown in Figure 3.
68
+
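+ Collecting these steps, a minimal sketch of the brightness-compensation path (ours; `brightness_net` stands in for the enhancement network, the helper backward_warp is the one sketched above, and defining ${B}_{1,2}$ as the per-pixel difference between ${I}_{2}$ and the warped ${I}_{1}$ is our assumption):
+
+ \begin{verbatim}
+ import torch
+
+ def enhance(I1, I2, flow_2to1, flow_3to2, brightness_net):
+     # Brightness change between the two known frames.
+     B12 = I2 - backward_warp(I1, flow_2to1)
+     # Warp it forward in time for an initial estimate of B_{2,3},
+     # and warp I2 for the initial future frame.
+     B23_init = backward_warp(B12, flow_3to2)
+     I3_init = backward_warp(I2, flow_3to2)
+     # The network refines the brightness change map, which is then
+     # added to the warped frame.
+     B23 = brightness_net(torch.cat((B23_init, I3_init), dim=1))
+     return I3_init + B23
+ \end{verbatim}
+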
69
+ Loss functions. We train our frame synthesis network in an end-to-end fashion by computing the losses from predicting three consecutive frames ${\widehat{I}}_{3 : 5}$ from two input frames ${I}_{1,2}$ as follows.
70
+
71
+ $$
72
+ \mathcal{L} = \mathop{\sum }\limits_{{t = 3}}^{{t = 5}}{\delta }_{t}{\mathcal{L}}_{t} \tag{1}
73
+ $$
74
+
75
+ where ${\mathcal{L}}_{t}$ is the loss from predicting ${\widehat{I}}_{t}$ and ${\delta }_{3} = 1$, ${\delta }_{4} = 0.5$, ${\delta }_{5} = 0.25$. ${\mathcal{L}}_{t}$ has the following three components.
76
+
77
+ $$
78
+ {\mathcal{L}}_{t} = \alpha {\mathcal{L}}_{t,{l}_{1}} + \beta {\mathcal{L}}_{t,m} + \gamma {\mathcal{L}}_{t,{re}} \tag{2}
79
+ $$
80
+
81
+ where ${\mathcal{L}}_{t,{l}_{1}}$ is the ${\ell }_{1}$ loss between the ground truth ${I}_{t}$ and the synthesized future frame ${\widehat{I}}_{t}$; ${\mathcal{L}}_{t,m}$ is the binary cross entropy loss between the predicted confidence mask ${M}_{t}$ and the ground-truth confidence mask of the enhanced ${\bar{I}}_{t}$, which is obtained by thresholding the error ($\omega = 0.04$) between the enhanced ${\bar{I}}_{t}$ and the ground truth ${I}_{t}$; and ${\mathcal{L}}_{t,{re}}$ is the percentage of pixels that need to be re-rendered. We empirically set $\alpha = 0.3$, $\beta = 0.3$, and $\gamma = 0.3$.
82
+
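+ In code, the training objective of Equations 1 and 2 reduces to a weighted sum over the three predicted frames. The sketch below is our own illustration; in particular, the re-render term is written as a soft proxy (the mean of $1 - M_{t}$), since the paper does not spell out how the pixel percentage is made differentiable:
+
+ \begin{verbatim}
+ import torch.nn.functional as F
+
+ def total_loss(preds, masks, gts, gt_masks,
+                alpha=0.3, beta=0.3, gamma=0.3):
+     # preds/masks: predictions for t = 3, 4, 5; gts/gt_masks: targets.
+     deltas = [1.0, 0.5, 0.25]
+     loss = 0.0
+     for d, I_hat, M, I, M_gt in zip(deltas, preds, masks, gts, gt_masks):
+         l1 = F.l1_loss(I_hat, I)              # color term L_{t,l1}
+         lm = F.binary_cross_entropy(M, M_gt)  # mask term L_{t,m}
+         lre = (1.0 - M).mean()                # soft re-render term L_{t,re}
+         loss = loss + d * (alpha * l1 + beta * lm + gamma * lre)
+     return loss
+ \end{verbatim}
+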
83
+ Implementation details. We randomly crop patches of size ${256} \times {256}$ from our training images. We use PyTorch to implement our future frame synthesis network with a mini-batch size of 4. We use the Adam optimizer with a multi-step learning rate: $10^{-4}$ for the first 250 epochs and $10^{-5}$ afterwards. We train our networks for 700 epochs using one Nvidia Titan Xp. We use 2D convolution layers with a kernel size of 7 to extract 32-channel features from the stacked inputs. The encoders are composed of five 2D convolution layers with a stride of 2 and four 2D convolution layers with a stride of 1, and the channel count increases from 32 to 512 through the encoder. For the decoder, we use six 2D deconvolution layers with a stride of 2. In each of the three networks, a final 2D convolution layer with a stride of 1 and a kernel size of 3 predicts the optical flows, masks, or brightness enhancement after the decoder.
84
+
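+ As a rough structural sketch of such an encoder-decoder (ours; the layer counts are reduced from the description above for brevity, and the activation choice is our assumption):
+
+ \begin{verbatim}
+ import torch.nn as nn
+
+ def make_backbone(in_ch, out_ch):
+     # Kernel-7 feature head, strided-convolution encoder whose width
+     # grows toward 512 channels, and a deconvolution decoder, ending
+     # with a stride-1, kernel-3 prediction layer.
+     layers = [nn.Conv2d(in_ch, 32, 7, padding=3), nn.ReLU(inplace=True)]
+     ch = 32
+     for nxt in (64, 128, 256, 512):                  # encoder (downsampling)
+         layers += [nn.Conv2d(ch, nxt, 3, stride=2, padding=1),
+                    nn.ReLU(inplace=True)]
+         ch = nxt
+     for nxt in (256, 128, 64, 32):                   # decoder (upsampling)
+         layers += [nn.ConvTranspose2d(ch, nxt, 4, stride=2, padding=1),
+                    nn.ReLU(inplace=True)]
+         ch = nxt
+     layers += [nn.Conv2d(ch, out_ch, 3, padding=1)]  # prediction head
+     return nn.Sequential(*layers)
+ \end{verbatim}
+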
85
+ § 5 EXPERIMENTS
86
+
87
+ We evaluate our method by comparing to baseline reprojection methods and state-of-the-art video frame prediction methods. We also conduct ablation studies to evaluate individual components of our method. In our experiments, we train our future frame synthesis network using the training set of our Unreal animation dataset. We test our method on the corresponding testing set (UE4). As our Unreal dataset was rendered using its hybrid rendering engine, we further test our trained network on our Cycles dataset (Cycles) that was rendered using a ray tracing engine as discussed in Section 3.
88
+
89
+ < g r a p h i c s >
90
+
91
+ Figure 3: Our future frame synthesis framework.
92
+
93
+ § 5.1 COMPARISONS
94
+
95
+ Reprojection methods. We first compare our method to a baseline reprojection method that warps the current frame ${I}_{2}$ to the future frame ${I}_{t}$ using forward warping. For such a baseline approach, we first obtain the optical flow from ${I}_{2}$ to ${I}_{t}$ , denoted as ${\mathbf{f}}_{2 \rightarrow t}$ . Assuming linear pixel motion, ${\mathbf{f}}_{2 \rightarrow t}$ can be computed as follows.
96
+
97
+ $$
98
+ {\mathbf{f}}_{2 \rightarrow t} = \left( {t - 2}\right) * {\mathbf{f}}_{2 \rightarrow 1} \tag{3}
99
+ $$
100
+
101
+ where ${\mathbf{f}}_{2 \rightarrow 1}$ is the ground-truth optical flow computed by the rendering engine. We then forward-warp ${I}_{2}$ to a future frame ${I}_{t}$ . Multiple source pixels can be forwarded to the same target pixel, and we blend them in two ways: one chooses the pixel that is closest to the camera, and the other blends the pixels using weights computed as the inverse of their depth values [5]. We denote them reproj-nn and reproj-blend, respectively, in this section. In addition, forward warping leaves holes in the future frame; we fill these holes using ground-truth pixels from the rendering engine.
102
+
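+ A minimal sketch of the reproj-blend baseline (ours; the sign convention for the engine's flow and the nearest-pixel splatting are our simplifying assumptions):
+
+ \begin{verbatim}
+ import torch
+
+ def reproject_blend(I2, flow_21, depth, t):
+     # I2: (3, H, W); flow_21: (2, H, W) engine flow f_{2->1};
+     # depth: (H, W) depth of I2. Scale the flow per Equation 3, splat
+     # every pixel of I2 to its target location, and resolve collisions
+     # with inverse-depth weights [5].
+     _, H, W = I2.shape
+     flow_2t = (t - 2) * flow_21
+     ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W),
+                             indexing="ij")
+     tx = (xs + flow_2t[0]).round().long().clamp(0, W - 1)
+     ty = (ys + flow_2t[1]).round().long().clamp(0, H - 1)
+     idx = (ty * W + tx).flatten()
+     w = (1.0 / depth.clamp(min=1e-6)).flatten()
+     num = torch.zeros(3, H * W).index_add_(1, idx, I2.reshape(3, -1) * w)
+     den = torch.zeros(H * W).index_add_(0, idx, w)
+     out = (num / den.clamp(min=1e-8)).reshape(3, H, W)
+     holes = den.reshape(H, W) == 0   # filled with engine-rendered pixels
+     return out, holes
+ \end{verbatim}
+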
103
+ As described in Section 4, we use a threshold $\lambda$ to select a subset of pixels to re-render using the rendering engine. Specifically, if the value in the predicted confidence map is smaller than $\lambda$ , we re-render that pixel. Therefore, as we increase the $\lambda$ value, more pixels are re-rendered, as shown in Figure 4. As also shown in the third column of this figure, the mask prediction accuracy of our method improves as we increase $\lambda$ from 0.1 to 0.4. This is because with a small $\lambda$ value like 0.1, our method selects only a small number of unreliably predicted pixels to re-render while leaving many others unfixed; as we increase $\lambda$ , more of those bad pixels are selected and fixed. This is also related to the fact that, when training our network, we use $\lambda = 0.4$ to compute the mask loss in Equation 2. With $\lambda = 0.4$ , our method needs to re-render around 12.5% of pixels for the Unreal testing examples (UE4) and 10% for the Cycles testing examples, while with $\lambda = 0.2$ it needs to re-render less than 4.0% for UE4 and 3.0% for Cycles. Many of the visual examples in this paper are rendered with $\lambda = 0.2$ .
104
+
105
+ Table 1: Comparison with video frame prediction methods. To ensure a fair comparison, we only train our network to predict one future frame using the ${l}_{1}$ loss and generate the prediction results without re-rendering unreliable pixels in this test.
106
+
107
+ max width=
108
+
109
+ 2*Method 2|c|UE4 2|c|Cycles
110
+
111
+ 2-5
112
+ PSNR LPIPS PSNR LPIPS
113
+
114
+ 1-5
115
+ SDC2D [33] 27.907 0.0561 31.065 0.0491
116
+
117
+ 1-5
118
+ SDC2D-GTflow [33] 28.030 0.0573 31.033 0.0512
119
+
120
+ 1-5
121
+ MCNet [42] 23.401 0.2363 24.943 0.2386
122
+
123
+ 1-5
124
+ VoxelFlow [50] 25.195 0.0898 28.733 0.0896
125
+
126
+ 1-5
127
+ ImprovedVRNN [7] 28.039 0.1241 31.009 0.1055
128
+
129
+ 1-5
130
+ Ours 28.618 0.0557 31.534 0.0520
131
+
132
+ 1-5
133
+
134
+ As we would expect, the quality of our future frame synthesis increases as the percentage of re-rendered pixels rises. With a similar amount of re-rendered pixels ($\lambda = 0.2$), our method significantly outperforms the above baseline reprojection approaches in terms of both PSNR ($>1.5$ dB) and LPIPS ($<0.06$) [49]. These results are consistent on both the Unreal dataset and the Cycles dataset. As shown in Figure 5, reproj-nn tends to generate results with aliasing artifacts, while the results from reproj-blend suffer from ghosting artifacts. In contrast, our method predicts higher-quality future frames.
135
+
136
+ Video frame prediction. To conduct a fair comparison to state-of-the-art video frame prediction approaches, we use our future frame synthesis results without replacing pixels according to the predicted confidence map. For MCNet [42], VoxelFlow [50] and ImprovedVRNN [7], we use their official code. For SDC2D [33], we used the code released by [48], which is slightly different from the original SDC-Net in that SDC2D only estimates the backward flows for future frame prediction without estimating the spatially-varying kernels, and uses 2D convolutions instead of 3D convolutions. To examine the effect of starting future frame synthesis from the optical flows generated by the rendering engine, we also extended the original SDC2D method by using the rendered optical flows instead of the optical flows estimated from the input frames; we denote this version SDC2D-GTflow. Note that the rendered optical flows are only used to predict the first future frame in both our method and SDC2D-GTflow. We train all these methods using our training set and validation set. As reported in Table 1, our method achieves better quantitative results than these video frame prediction methods by a large margin ($>0.5$ dB). Compared to these video frame prediction methods, our method also generates qualitatively better results, as shown in Figure 6.
137
+
138
+ < g r a p h i c s >
139
+
140
+ Figure 4: Test results on both the UE4 test set (a) and the Cycles test set (b). $\lambda$ is the threshold used to select unreliably predicted pixels according to the predicted confidence mask; a larger $\lambda$ selects more pixels to re-render. These results show that our method produces higher-quality renderings in terms of both PSNR and LPIPS while re-rendering a similar number of pixels as the two baseline reprojection methods.
141
+
142
+ < g r a p h i c s >
143
+
144
+ Figure 5: Examples of predicting three continuous future frames. We select $\lambda = 0.2$ for our method, which requires a similar percentage of re-rendered pixels as the other methods.
145
+
146
+ < g r a p h i c s >
147
+
148
+ Figure 6: Visual comparisons with video frame prediction methods. The top two examples are from UE4. The bottom two examples are from Cycles. To ensure a fair comparison, we did not re-render pixels in this test when producing our prediction results.
149
+
150
+ < g r a p h i c s >
151
+
152
+ Figure 7: Ablation studies on our UE4 and Cycles testing sets. "-en" denotes our results without brightness enhancement and "-en-mask" denotes our results without brightness enhancement and without confidence mask.
153
+
154
+ § 5.2 ABLATION STUDY
155
+
156
+ We examine the effect of two components on our future frame synthesis quality. The first is our brightness enhancement network that compensates for the brightness change. The second is the optical flow confidence mask, which is used as part of the input to the backward flow estimation network. When estimating the first future frame, this map is calculated by assessing the quality of the optical flow generated by the rendering engine; when predicting subsequent future frames, it is predicted using the confidence mask estimation network, as described in Section 4. In our ablation studies, we compare three versions of our method: our full method (ours), our method without brightness enhancement (-en), and our method without brightness enhancement and without inputting the confidence map to the backward flow estimation network (-en-mask). As shown in Figure 7, both components help our method predict better future frames.
157
+
158
+ § 5.3 DISCUSSIONS
159
+
160
+ We observed that our future frame synthesis method still cannot handle several challenging scenarios. As shown in Figure 8 (a), our method, like the other methods, fails to preserve fine structure (the silver thread). Our method also cannot deal with significant view-dependent effects; Figure 8 (b) shows such an example, where the reflection in the mirror is not predicted accurately in the area indicated by the orange arrow.
161
+
162
+ It takes our PyTorch implementation about 0.02 seconds to predict a ${1024} \times {1024}$ frame using one Nvidia 3090 GPU. The reported duration includes all the stages of our method except running the rendering engine to replace the unreliable pixels with rendered pixels. The peak GPU memory is about ${5400}\mathrm{{MB}}$ .
163
+
164
+ In the future, we would like to extend our work by utilizing G-buffer data like many other recent rendering papers $\left\lbrack {8,{13},{14},{18},{19}}\right\rbrack$ . We hope to overcome existing artifacts by adopting a more powerful neural network. We would also like to optimize our network architectures to further speed it up.
165
+
166
+ § 5.3.1 USE CASES
167
+
168
+ We envision two different usage scenarios: i) deploying the network on the same system to predict subsequent frames and thereby reduce the rendering compute needed, and ii) usages such as cloud gaming, where we may need to predict subsequent frames due to network inconsistencies or frame drops. When the network is deployed concurrently with the rendering, the confidence mask can be used to selectively re-render pixels. Depending on the dataset and content, the rendering engine then needs to re-render an order of magnitude fewer pixels than re-rendering the whole frame. For high-quality rendering such as ray- or path-traced content, neural frame prediction could extrapolate the majority of pixels in the frame, and the limited ray-tracing budget could be focused on the pixels flagged by the confidence mask. Even for real-time content such as rasterized games, this method could be applied after the geometry processing step to reduce the number of pixels that need to be shaded in the pixel shader stage of the pipeline. Given the limited dependency on input buffers (two past RGB frames), our system has a smaller memory footprint than the concurrent work by Guo et al. [14], at the cost of some output quality.
169
+
170
+ With gaming and interactive content increasingly moving to cloud-based delivery, we envision neural frame extrapolation helping to deliver a compelling user experience across varying compute and network conditions. For example, in cloud gaming scenarios, the game is rendered on a server and streamed to the client over public networks, while the user input is delivered to the server to render the next frame. Given limited bandwidth and network congestion, dropped or stalled frames could lead to game stutters and an unplayable experience. As many client systems have dedicated capabilities for running deep neural networks, the client's neural network engine could infer the future frame using our approach, while its rendering engine uses the confidence map to re-render a limited portion of the frame. One drawback of such a method is that an instance of the game or rendering content has to run simultaneously on both the client and the cloud (although the client only renders a small number of difficult-to-predict pixels), with any updates being reflected across both. An alternative approach would be to use the extrapolated frames directly without re-rendering the lower-confidence pixels, with the next rendered frame following shortly thereafter. User studies to gauge the effect of re-rendering versus using the extrapolated pixels directly (i.e., not using the confidence map) are future work.
171
+
172
+ < g r a p h i c s >
173
+
174
+ Figure 8: Failure examples for future frame prediction.
175
+
176
+ § 6 CONCLUSION
177
+
178
+ In this paper, we described a method to speed up Monte Carlo rendering algorithms by casting rendering as a frame prediction problem. To obtain high-quality results, we designed a neural network that not only predicts flows to warp the future frame, but also predicts masks to efficiently re-render the pixels that are hard to predict. We also proposed a brightness enhancement component to strengthen our predictions.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/HLcgsgKEpMq/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,371 @@
1
+ # It's Over There: Designing an Intelligent Virtual Agent That Can Point Accurately into the Real World
2
+
3
+ Anonymous
4
+
5
+ ![01963e81-107e-7cf1-b0e7-f49b3442d312_0_220_395_1359_647_0.jpg](images/01963e81-107e-7cf1-b0e7-f49b3442d312_0_220_395_1359_647_0.jpg)
6
+
7
+ Figure 1: We investigated how accurately users can perceive where an Intelligent Virtual Agent (IVA) rendered in a 3D display is pointing in the real world.
8
+
9
+ ## Abstract
10
+
11
+ It is a challenge to design an intelligent virtual agent (IVA) that can point to the real world and have users accurately recognize where it is pointing. We designed an IVA around a set of factors, including a situated display, appearance, and pointing gesture strategy, to establish whether it is possible to have an IVA point accurately into the real world. With a real person pointing as a baseline, we performed an empirical study using our designed IVA and demonstrated that participants perceived the IVA's pointing to a physical location with accuracy comparable to the real-person baseline. Specifically, we found that the IVA outperformed the real person vertically (28.8% less error) and yielded comparable accuracy horizontally. Our integrated design choices provide a foundation of design factors to consider when designing IVAs for pointing and pave the way for future studies and systems that provide accurate pointing perception.
12
+
13
+ Index Terms: Human-centered computing-Human computer interaction (HCI)-Empirical studies in HCI; Human-centered computing-Interaction design-Empirical studies in interaction design;
14
+
15
+ ## 1 INTRODUCTION
16
+
17
+ Many researchers have studied natural human communication cues such as voice and hand gestures $\left\lbrack {8,{23},{44}}\right\rbrack$ . One important aspect of human communication is deictic pointing $\left\lbrack {{18},{53}}\right\rbrack$ , a hand gesture that complements or replaces verbal communication to indicate a point of interest in a shared environment [38]. As pioneering work investigating deictic pointing into the 2D virtual world, "Put That There" [9] demonstrated how an intelligent virtual agent (IVA) can recognize and interpret a person's pointing gestures at objects in the virtual world to facilitate natural human-computer interaction. However, this raises the reverse question: "Can an IVA point back to the real world?" More recently, with the advances in voice-based IVAs, such as Amazon Alexa®, emerging 3D display technologies provide opportunities for IVAs to perform deictic pointing to objects in the real world. We believe that enabling IVAs with pointing gestures can enrich the communication channel and promote efficient human-like interactions [34].
18
+
19
+ To enable IVAs to point effectively, we seek to answer how accurately users can interpret the direction of an IVA's pointing, establishing a fundamental building block for designing deictic interactions between users and IVAs. However, it remains unclear whether it is feasible to design an IVA whose pointing into the real world users can accurately recognize. Optimally, users should be able to interpret an IVA's pointing to the real world as well as, or even better than, a real person's pointing.
20
+
21
+ To explore this potential, we introduce design factors that may improve users' ability to accurately perceive the IVA's pointing and thereby demonstrate feasibility. These include the situated display, the IVA appearance, and the pointing gesture strategy. For the situated display, we used a spherical Fish Tank Virtual Reality (FTVR) display in our IVA design. Unlike immersive displays, the spherical FTVR display is calibrated to be viewer-aware in the real-world coordinate system, enabling the IVA to point from the virtual world to objects in the real world. It also offers effective 3D depth cues for pointing perception (i.e., stereoscopic cues and motion parallax) [35]. Moreover, spherical displays have been found to provide better gaze $\left\lbrack {{28},{50}}\right\rbrack$ , size, and depth $\left\lbrack {62}\right\rbrack$ perception compared to flat displays. For the IVA appearance, we used an animated cartoon character that was not photo-realistic but offered natural, easy-to-control pointing affordances, avoiding the Uncanny Valley effect [47]. For the pointing gesture, we designed our IVA to point following the arm vector instead of the eye-fingertip alignment commonly found in human pointing $\left\lbrack {7,{30} - {32},{37}}\right\rbrack$ , as the arm vector has been shown to provide a more accurate cue.
22
+
23
+ With a real person pointing as the baseline for comparison, we conducted an empirical experiment to investigate how accurately users can perceive our IVA's pointing. As an IVA is usually smaller than a real person due to the size constraints of typical displays, we controlled for retinal size in the experimental design. Our results demonstrated that it is feasible to have an IVA accurately point to locations in the real world. Further, the IVA's pointing location was perceived as accurately as a real person's in our configuration. Specifically, the IVA outperformed the real person in the vertical dimension and yielded the same level of accuracy horizontally. We also discuss how the set of design factors may have contributed to the result and suggest design implications. The design factors we suggest thus provide a foundation for future studies exploring the relative importance of each factor in the design of IVAs with pointing gestures. We believe our study and IVA design help pave the way for research on users' perception of pointing, whether in the virtual environment or in the real world.
24
+
25
+ ## 2 RELATED WORK
26
+
27
+ ### 2.1 Pointing in Intelligent Virtual Agents (IVAs)
28
+
29
+ Pointing is a fundamental building block of human communication [36]. The ubiquity of pointing drives research on incorporating it into intelligent virtual agents (IVAs) in virtual environments. Although our study does not directly concern the intelligence itself, we use the term Intelligent Virtual Agents rather than Embodied Virtual Agents $\left\lbrack {{20},{22}}\right\rbrack$ . This is not only because our study is motivated by and applicable to virtual agents that should be intelligent enough to interact with users through deictic gestures, but also because the design of a virtual agent's pointing gestures already implies that the agent is visually embodied.
30
+
31
+ Most prior studies on IVAs with pointing focus on its benefits in drawing users' attention to content in the virtual world where the IVA is situated. For example, the Persona agent [2] could point to images on web pages, and Jack, a virtual meteorologist, could give a weather report by pointing to weather images [49]. Atkinson [4] showed an animated virtual agent serving as a tutor in a knowledge-based learning environment and demonstrated the benefits of pointing in directing the learners' attention. Using the Behavior Expression Animation Toolkit (BEAT), an agent was created that generates gestures correlated with speech and context by extracting linguistic and contextual information from the input text [13]. To achieve deictic believability, Lester et al. [39] designed the COSMO agent, using a deictic planner guided by a spatial deixis framework to determine the generation of pointing. Rather than pointing to the virtual environment, an agent called MACK, in mixed reality, could point to a physical paper map shared with users in reality along with speech [12]. However, an unanswered question is how accurately an IVA can point to the real world.
32
+
33
+ ### 2.2 Perception of Pointing in the Real World
34
+
35
+ When humans perform pointing naturally, without instructions, instead of pointing along their arm vector, Bangerter & Oppenheimer [7] and Henriques & Crawford [30] observed that humans commonly orient their arm so that the fingertip intersects the line joining the target and their dominant eye while gazing at the target. This is called eye-fingertip alignment, as illustrated in Figure 2 c-2. This mechanism has also been used in estimating human pointing direction: among the various methods proposed, the head-hand line $\left\lbrack {{16},{41},{46}}\right\rbrack$ (also known as the eye-fingertip line) was found to be the most reliable $\left( {{90}\% }\right)$ compared to forearm direction and head orientation [48], and Mayer et al. [43] demonstrated that it yielded the lowest offset among four other ray-cast techniques. As our study aims to find factors that enable an IVA to point into the real world accurately, the impact of different alignment strategies is considered in our IVA design.
36
+
37
+ Pointing behavior during interpersonal interaction typically involves the movement of the eyes, head, and arm [30]. Considerable research has targeted gaze perception: people can accurately discern mutual gaze with another person [3, 26] and the direction of the other person's gaze [25]. By contrast, research on the perception of pointing accuracy is scant. By evaluating the detection accuracy for different combinations of head, eye, and hand pointing cues, Butterworth and Itakura [10] showed that pointing can improve spatial localization of targets compared to head and gaze cues, but suggested that pointing had limited accuracy. Bangerter & Oppenheimer [7] contested their findings with a more precise measurement technique; their results revealed that the detection accuracy was comparable to the accuracy level for eye gaze and was unaffected by the exclusion of eye gaze and head orientation. Despite the good accuracy, they observed a perceptual bias towards the side of the pointer's arm away from the observer in the horizontal dimension and above the target in the vertical dimension. They suggested that the ambiguity introduced by the deviation between the eye-fingertip line and the arm line might account for this bias. A study by Cooney et al. [19] evaluated pointing accuracy in the horizontal direction and replicated Bangerter & Oppenheimer's results. Considering the ambiguity shown in human pointing, and exploiting the fact that we have explicit control over the IVA's head, eye, and finger positioning, we designed our IVA to use arm-vector pointing rather than eye-fingertip alignment as an approach to improve its pointing accuracy, as illustrated in Figure 2.
38
+
39
+ Finally, during interpersonal interactions, the accuracy with which observers can detect pointed-at targets based on another person's pointing gestures has been a key issue: if a person cannot accurately interpret the other's pointing direction, it is difficult to establish joint attention within a conversation [10]. Prior research shows that the distance between users and targets can affect users' interpretation of the pointing direction $\left\lbrack {5,{16},{60}}\right\rbrack$ . To study this effect, we configured the distance as an independent variable to investigate how the accuracy changes at different distances.
40
+
41
+ ### 2.3 Perception of Pointing in Virtual Environments
42
+
43
+ While pointing is ubiquitous in daily interactions within the real world, it is difficult for users to precisely interpret the pointing direction in virtual environments. Wong and Gutwin [60] compared users' accuracy in a collaborative virtual environment (CVE) with the real world and observed worse performance in the CVE, although the difference was smaller than expected. Immersive head-mounted displays (HMDs) and virtual reality (VR) systems (e.g., CAVE) only support pointing within the virtual environment where the IVA is situated. By merging the real world with the virtual environment, FTVR displays [59] enable the IVA to point from the virtual world to the real world and provide a mixed reality experience. Our experiment used a spherical FTVR display because it has advantages over other VR/AR displays and planar displays, as we will discuss in Section 3.1.
44
+
45
+ Regarding the evaluation of users' perception of pointing in FTVR, previous research focused on the assessment of pointing cues. Kim et al. [35] classified pointing cues into three levels: gaze, hand, and gaze+hand, and found no significant difference among the three levels in an experiment with a cylindrical 3D display. Using gaze to convey pointing direction within a spherical display has also been shown to be effective $\left\lbrack {{27},{35},{50}}\right\rbrack$ .
46
+
47
+ The research listed above is mostly concerned with telepresence: the remote person is represented by an avatar or captured using cameras to realize remote collaboration. By contrast, we use an IVA to perform the pointing. In this context, the IVA is regarded as a social entity that mimics human intelligence [34] and works with a person. Unlike pointing in telepresence, designing the IVA's pointing gestures offers more freedom to improve users' perception of pointing, as the pointing behaviours do not have to be exactly human-like. This gives us the opportunity to design pointing gestures that differ from how humans naturally point, which enables us to remove the eye-fingertip alignment from the IVA, as suggested in Section 2.2. The complete IVA design is discussed in the following Section 3.
48
+
49
+ ## 3 DESIGN FACTORS
50
+
51
+ This section elaborates on the design factors to enable our IVA to point as accurately as possible, including the situated display, IVA appearance and pointing gesture strategies.
52
+
53
+ ### 3.1 Situated Display
54
+
55
+ We used a spherical FTVR display for the IVA for the following reasons. First, FTVR displays are situated in the real world, which enables the IVA to point from the virtual environment to locations in the real world. Alternative approaches, such as immersive headset displays, only support pointing within the virtual environment where the IVA is situated. Though AR displays provide a see-through feature that can achieve similar effects, these systems lack the tangible nature of a volumetric display that is part of the real world. FTVR displays also provide motion parallax and stereoscopic cues, which are important in interpreting pointing gestures [35]. The spherical shape has been found to provide better depth and size perception compared to a planar counterpart [62], and spherical screens showed better task performance in perceiving gaze direction compared to planar screens $\left\lbrack {{28},{50}}\right\rbrack$ . As perceiving pointing gestures depends on multiple aspects of visual perception, such as depth and orientation perception, spherical FTVR displays are a promising way to improve pointing perception.
56
+
57
+ ### 3.2 IVA Appearance
58
+
59
+ The state of the art in photo-realistic representations for IVAs is subject to the Uncanny Valley [47]: a high degree of realism does not necessarily lead to positive evaluations. Considering this effect, Schneider et al. [55] suggest using a non-human appearance with the ability to behave like a human. Following this suggestion, we chose a Japanese female cartoon character as our IVA to avoid the negative feelings caused by a human-like appearance while supporting human-like behaviors. Our IVA is designed with large eyes and a small nose (Figure 3) to exhibit the characteristics of the baby schema [42], which can induce a pleasurable feeling [29].
60
+
61
+ ### 3.3 Pointing Gestures
62
+
63
+ We designed our IVA to point following the arm vector (Figure 2 c-1) instead of the eye-fingertip alignment (Figure 2 c-2) to avoid potential perceptual ambiguity. As discussed in Section 2.2, humans commonly point to where they are looking by aligning their fingertip with the gaze of their dominant eye [7, 30] (Figure 2 c-2). When it comes to perceiving others' pointing, this can introduce ambiguity, because the location indicated by the arm vector differs from the actual target location on the eye-fingertip line. Previous work [7] found that participants exhibited a perceptual bias towards the upside of the target, potentially because of this ambiguity. Therefore, rather than designing the IVA to point the way humans commonly do (i.e., with eye-fingertip alignment), we remove the eye-fingertip alignment from the IVA's pointing gesture: the arm vector points directly at targets (Figure 2 c-1). Our expectation is that this approach mitigates the perceptual errors of eye-fingertip alignment and results in a perceptually accurate IVA pointing gesture.
64
+
65
+ For pointing cues, previous research has found that the orientations of the pointer's eyes, head, and hand serve as visual cues for an observer interpreting a pointing gesture [30, 35]. Prior work [61] has found that the hand cue alone provides accurate pointing perception but at a loss of naturalness. In our study, we decided to include all the pointing cues, i.e., eyes, head, and hand orientations, to promote accurate and natural perception. In summary, we design our IVA to point with an outstretched arm and with eyes and head facing the target, without eye-fingertip alignment; thus, all cues consistently direct attention to the same location.
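+
+ To make this design concrete, the sketch below computes unit directions that aim each cue at the same target. This is a minimal illustration of the arm-vector strategy, not the exact implementation in our Unity3D scene; all names and coordinates are assumptions.
+
+ ```python
+ import numpy as np
+
+ def cue_directions(target, eye, head, shoulder):
+     """Unit vectors that aim the IVA's eyes, head, and arm at one target.
+
+     Because the arm direction originates at the shoulder and passes through
+     the target, an observer extrapolating the arm vector lands on the target
+     directly (no eye-fingertip offset). Inputs are 3D points in the display's
+     real-world coordinate frame (illustrative assumption).
+     """
+     def aim(origin):
+         v = np.asarray(target, float) - np.asarray(origin, float)
+         return v / np.linalg.norm(v)
+     return {"eyes": aim(eye), "head": aim(head), "arm": aim(shoulder)}
+
+ # Example: a target 2 m in front of and slightly below the IVA.
+ dirs = cue_directions(target=[0.0, 1.0, 2.0],
+                       eye=[0.0, 1.65, 0.0],
+                       head=[0.0, 1.6, 0.0],
+                       shoulder=[0.2, 1.4, 0.0])
+ ```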
66
+
67
+ ![01963e81-107e-7cf1-b0e7-f49b3442d312_2_941_151_687_502_0.jpg](images/01963e81-107e-7cf1-b0e7-f49b3442d312_2_941_151_687_502_0.jpg)
68
+
69
+ Figure 2: We consider three design factors to enable accurate perception of the pointing performed by the IVA: situated display, IVA appearance, and pointing gesture. (a) We used a situated spherical 3D display as it offers effective depth cues for pointing perception. (b) We used an animated cartoon character that offers natural, easy-to-control pointing affordances. (c) We designed our IVA to point following the arm vector (c-1) instead of the eye-fingertip alignment (c-2) to avoid potential perceptual ambiguity.
70
+
71
+ ## 4 EXPERIMENT
72
+
73
+ The goal of our experiment is to assess how accurately our IVA can point into the real world. With a real person's natural pointing as the baseline, we measured how accurately a human observer can interpret the pointing of our IVA or the real person to a physical location. In doing so, we lay the foundation for the design of IVAs with pointing and shed light on future studies of the contribution of individual design factors.
74
+
75
+ ### 4.1 Participants
76
+
77
+ Thirty-six participants (19 females and 17 males) aged between 21 and 30 were recruited from a local university to participate in the study, with compensation of a $10 gift card. All had normal or corrected-to-normal vision.
78
+
79
+ ### 4.2 Apparatus
80
+
81
+ We set up the experiment using a situated 24-inch spherical display (Figure 3), which renders the IVA, and a flat fabric projector screen, which renders the target area. With four stereo projectors rear-projecting onto the spherical surface, we adopted an automated camera-based multi-projector calibration technique [64] to produce a 360-degree seamless image with 1-2 millimeter accuracy. The projectors are Optoma ML750ST units with 1024 × 768 pixel resolution and a frame rate of 120 Hz. A host computer with an NVIDIA Quadro K5200 graphics card sends rendering content to the projectors. Our IVA was rendered using Unity3D with a MikuMikuDance [33] model from DeviantArt [52]. We used an OptiTrack™ system to track the passive markers attached to the shutter glasses for head tracking, and a pattern-based viewpoint calibration [57] that achieved viewpoint registration with an average angular error of less than one degree. Viewers see perspective-corrected images with stereo rendering coupled with synchronized shutter glasses. The total latency is between 10 and 20 ms [24]. With a resolution of 34.58 ppi, the display provides 3D depth cues such as motion parallax and stereoscopic cues [63]. An additional Optoma ML750ST projector with 1024 × 768 pixel resolution displayed an 80 cm × 80 cm target area on the flat fabric projector screen. The grid content and target indicator were created in Unity3D.
82
+
83
+ ### 4.3 Human and IVA Pointing
84
+
85
+ As a baseline, an independent real person (RP) was hired to be the pointer. The dominant hand and eye of our RP are both on the right side. To capture natural human pointing as the baseline, the RP was given no instructions about the specific manner of pointing, only asked to point as accurately as possible with head, eyes, and outstretched arm. Both the RP and the IVA used the left arm to point to targets in the left region and the right arm for targets in the center or right region.
86
+
87
+ In practice, most IVAs are rendered on relatively small displays such as home assistant systems [1, 11, 21]. The size difference between the IVA and the RP makes it challenging to compare pointing perception fairly. To characterize the potential effect of the size difference, we included two viewing conditions in our study: SameDis and SameRet (Figure 4). In SameDis, the IVA and RP are placed at the same observation distance from the participant; in this condition, the retinal image of the IVA is smaller than that of the RP, with arm lengths of 30.5 cm for the IVA and 68 cm for the RP. In SameRet, the retinal sizes of the IVA and RP are matched by moving the RP 56 cm farther away from the participant, resulting in the same angular size of the arm in both. This viewing condition is based on a previous study which found that visual-reasoning task performance did not vary as long as the retinal image was unchanged, demonstrated with a larger display placed farther away than a smaller display [14]. However, moving the RP away may introduce experimental bias by increasing the viewing distance. We included both conditions (same retinal size & same viewing distance) to see what impact the size factor has.
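+
+ As a sanity check on this geometry, equal retinal (angular) size requires the viewing distances to be in the same ratio as the arm lengths. In the sketch below, the IVA viewing distance is a hypothetical value of ours (the setup above fixes only the arm lengths and the 56 cm offset), chosen so that the numbers come out mutually consistent:
+
+ ```python
+ import math
+
+ def visual_angle(size_cm, dist_cm):
+     """Angle (degrees) subtended by an object of size_cm viewed at dist_cm."""
+     return math.degrees(2 * math.atan(size_cm / (2 * dist_cm)))
+
+ iva_arm, rp_arm = 30.5, 68.0      # arm lengths reported above (cm)
+ d_iva = 45.55                     # hypothetical IVA viewing distance (cm)
+ d_rp = d_iva * rp_arm / iva_arm   # RP distance with the same angular arm size
+
+ print(round(d_rp - d_iva, 1))     # -> 56.0, the offset used in SameRet
+ print(round(visual_angle(iva_arm, d_iva), 2),
+       round(visual_angle(rp_arm, d_rp), 2))  # equal angular sizes
+ ```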
88
+
89
+ ### 4.4 Experimental Design
90
+
91
+ We followed a $2 \times 2 \times 2$ mixed design with one between-subjects variable (C1) and two within-subjects variables (C2, C3):
92
+
93
+ - C1 Viewing condition, which could be Same Retinal Size (SameRet) or Same Distance (SameDis). In SameDis, the viewing distances to the RP and the IVA are the same. In SameRet, the retinal sizes are the same, achieved by placing the RP 56 cm farther from the participant than the IVA (Figure 4).
94
+
95
+ - C2 Pointer, which could be the Intelligent Virtual Agent (IVA) or the Real Person (RP).
96
+
97
+ - C3 Distance, which could be near or far. The distance between the participant and the target area is 70 cm in near and 210 cm in far.
98
+
99
+ We designed C1 as a between-subjects variable to avoid learning and transfer across viewing conditions. We randomly and equally divided the 36 participants into 2 groups: one group experienced SameRet and the other SameDis. Each group went through all levels of C2 × C3, and the order of C2 and C3 was fully counterbalanced.
100
+
101
+ We measured error and error bias in the horizontal and vertical dimensions, as suggested by prior studies that found systematic bias, particularly in the vertical direction [7, 19]. We collected subjective data through a post-study interview. The quantitative metrics are as follows:
102
+
103
+ - Horizontal & Vertical Error, defined as the unsigned distance between the actual target location and the participant's perceived location along the corresponding axis.
104
+
105
+ - Horizontal & Vertical Error Bias, computed by subtracting the actual position from the perceived location. A positive value indicates an estimate to the right of or above the true location, respectively (see the sketch below).
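+
+ A minimal computational sketch of these metrics, assuming perceived and actual positions are arrays of (x, y) coordinates in cm on the target plane (names and data layout are our assumptions):
+
+ ```python
+ import numpy as np
+
+ def pointing_metrics(perceived, actual):
+     """Mean per-axis error (unsigned) and error bias (signed), in cm.
+
+     bias = perceived - actual, so positive values are estimates to the
+     right of (x) or above (y) the true location.
+     """
+     bias = np.asarray(perceived, float) - np.asarray(actual, float)
+     err = np.abs(bias)
+     return {"horizontal_error": err[:, 0].mean(),
+             "vertical_error": err[:, 1].mean(),
+             "horizontal_bias": bias[:, 0].mean(),
+             "vertical_bias": bias[:, 1].mean()}
+ ```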
106
+
107
+ ### 4.5 Task
108
+
109
+ In each trial, participants observed the pointing performed by the IVA or RP and reported the pointing position by clicking, with a mouse, where they believed the IVA or RP was pointing. They were asked to prioritize accuracy over speed. The pointing positions are located within an 80 cm × 80 cm square projected onto a fabric projector screen, placed beside participants as the target area (Figure 4). An early pilot showed that the task might be too difficult due to the lack of reference on a blank background, so we provided a relatively dense 40 × 40 line grid as the target background (Figure 4).
110
+
111
+ ### 4.6 Procedure
112
+
113
+ Participants started by filling out a consent form, followed by a verbal explanation of the experiment. Participants sat on an adjustable chair (Figure 3) to ensure the horizontal alignment of their shoulder with the pointer's shoulder for both the IVA and the RP. They were seated to the right of the pointer (Figure 4). The distance between the participant and the target area was 70 cm in near and 210 cm in far, chosen to represent proximal pointing at the near distance and to approximate distal pointing [54] within the constraints of the experimental room.
114
+
115
+ Each participant was provided with a mouse and a clipboard to hold it. They were instructed to click where they believed the IVA or RP was pointing, prioritizing accuracy over speed. There were 4 conditions in total (IVA vs. RP, near vs. far), each containing 20 trials at different locations; the first 5 were practice trials located at the left middle, right middle, top middle, bottom middle, and center to illustrate the entire region. Participants were told the actual location in the practice trials. In the formal trials, target locations were randomly generated and could be anywhere inside the target area (see the sketch below). To prevent previous targets from serving as a reference, participants were instructed to close their eyes between trials.
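+
+ For illustration, the trial targets for one condition could be generated as below. The five practice locations and the 80 cm × 80 cm bounds follow the description above; the centered coordinate convention and the seed are our assumptions:
+
+ ```python
+ import random
+
+ HALF = 40.0  # half-width of the 80 cm x 80 cm target area, centered at (0, 0)
+ PRACTICE = [(-HALF, 0.0), (HALF, 0.0), (0.0, HALF), (0.0, -HALF), (0.0, 0.0)]
+
+ def condition_targets(n_formal=15, seed=0):
+     """5 fixed practice locations followed by uniformly random formal targets."""
+     rng = random.Random(seed)
+     formal = [(rng.uniform(-HALF, HALF), rng.uniform(-HALF, HALF))
+               for _ in range(n_formal)]
+     return PRACTICE + formal
+ ```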
116
+
117
+ When the participant was ready, the IVA pointed to random locations inside the target area, controlled by the experimenter using a keyboard, whereas the RP performed the pointing gesture using a visible random target while the participant had their eyes closed. When the gesture was ready, the reference target for the RP disappeared and participants were asked to open their eyes to perform the task. The IVA and RP held the gesture until the participant had finished the click and said "okay." No other communication took place between participants and the RP. Once all conditions were completed, a follow-up interview was conducted to collect participants' subjective preference between the IVA and the RP regarding the ease of perceiving pointing and the difference between the perceived and actual pointed location. We also asked which pointing cues they relied on during the task. The study took approximately 30-40 minutes to complete.
118
+
119
+ ### 4.7 Data Analysis
120
+
121
+ We conducted a mixed ANOVA with C1 Viewing as a between-subjects factor and C2 Pointer and C3 Distance as within-subjects factors. Significance values are reported as $p < .05$ (*), $p < .01$ (**), and $p < .001$ (***). Numbers in brackets indicate the mean (M) and standard error (SE) for each respective measurement. Post-hoc analyses were conducted using pairwise t-tests with Bonferroni corrections.
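+
+ For reference, this kind of analysis can be sketched with the pingouin package. Since pg.mixed_anova takes one within-subjects factor per call, the snippet covers only the Viewing × Pointer portion of the full design, and the synthetic data frame merely stands in for the study's measurements:
+
+ ```python
+ import numpy as np
+ import pandas as pd
+ import pingouin as pg
+
+ # Synthetic stand-in data: one row per subject x Pointer level, with the
+ # between-subjects Viewing group. Shapes mirror the study (36 subjects).
+ rng = np.random.default_rng(0)
+ df = pd.DataFrame({
+     "subject": np.repeat(np.arange(36), 2),
+     "viewing": np.repeat(["SameRet"] * 18 + ["SameDis"] * 18, 2),
+     "pointer": np.tile(["IVA", "RP"], 36),
+     "vertical_error": rng.normal(12.0, 3.0, 72),
+ })
+
+ # Mixed ANOVA: Pointer within subjects, Viewing between subjects.
+ aov = pg.mixed_anova(data=df, dv="vertical_error", within="pointer",
+                      subject="subject", between="viewing")
+ print(aov.round(3))
+
+ # Post-hoc pairwise t-tests with Bonferroni correction.
+ post = pg.pairwise_tests(data=df, dv="vertical_error", within="pointer",
+                          subject="subject", between="viewing", padjust="bonf")
+ print(post.round(3))
+ ```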
122
+
123
+ ![01963e81-107e-7cf1-b0e7-f49b3442d312_4_151_147_1491_420_0.jpg](images/01963e81-107e-7cf1-b0e7-f49b3442d312_4_151_147_1491_420_0.jpg)
124
+
125
+ Figure 3: (Left) The Intelligent Virtual Agent (IVA) in a spherical Fish Tank Virtual Reality (FTVR) display that enables the IVA to point from the virtual world to the real world. (Middle) Experimental setup with the IVA as the pointer. A participant wears tracked shutter glasses to perceive the perspective-corrected stereoscopic IVA on the spherical FTVR display. (Right) Experimental setup with a real person (RP) as the pointer. An RP was hired to perform natural pointing as a baseline for comparison with the IVA's pointing.
126
+
127
+ ![01963e81-107e-7cf1-b0e7-f49b3442d312_4_151_756_713_576_0.jpg](images/01963e81-107e-7cf1-b0e7-f49b3442d312_4_151_756_713_576_0.jpg)
128
+
129
+ Figure 4: Experimental layout in the two viewing conditions. (Top) Same Distance (SameDis): the real person (RP) and the IVA are at the same viewing distance from the participant; the retinal size of the IVA is smaller than that of the RP. (Bottom) Same Retinal Size (SameRet): the RP and the IVA are at different viewing distances from the participant to keep the same retinal size.
130
+
131
+ ### 4.8 Results
132
+
133
+ #### 4.8.1 Error
134
+
135
+ With all assumptions met, we used a $2 \times 2 \times 2$ mixed-model ANOVA (Viewing $\times$ Pointer $\times$ Distance) on the Horizontal Error and Vertical Error respectively (Figure 5(a)).
136
+
137
+ Horizontal Error: We found a main effect of Distance on the horizontal error ($F(1,34) = 69.16$, $p < 0.001$). The mean horizontal error in the near Distance ($M = 9.98$ cm, $SE = 0.35$ cm) was 26.5% lower (***) than in far ($M = 13.58$ cm, $SE = 0.52$ cm). We did not find main effects of Viewing ($F(1,34) = 0.44$, $p > 0.05$) or Pointer ($F(1,34) = 0.44$, $p > 0.05$), and no interaction effects were found among the factors.
138
+
139
+ Vertical Error: We found main effects of Pointer ($F(1,34) = 29.42$, $p < 0.001$) and Distance ($F(1,34) = 31.74$, $p < 0.001$) on the vertical error. The mean vertical error for the IVA ($M = 10.22$ cm, $SE = 0.37$ cm) was 28.8% lower (***) than for the RP ($M = 14.35$ cm, $SE = 0.56$ cm). The mean vertical error in the near Distance ($M = 11.11$ cm, $SE = 0.47$ cm) was 17.5% lower (***) than in far ($M = 13.46$ cm, $SE = 0.52$ cm). We did not find a main effect of Viewing ($F(1,34) = 2.66$, $p > 0.05$). A two-way interaction effect was observed between Viewing and Pointer ($F(1,34) = 5.05$, $p < 0.05$).
140
+
141
+ A post-hoc analysis of the two-way interaction Viewing $\times$ Pointer (Figure 5(d)) shows a significant difference in vertical error between RP and IVA in both SameRet ($p < 0.05$) and SameDis ($p < 0.001$). In SameRet, the mean vertical error for the IVA ($M = 10.47$ cm, $SE = 0.48$ cm) was 18.7% lower (*) than for the RP ($M = 12.88$ cm, $SE = 0.77$ cm). In SameDis, the mean vertical error for the IVA ($M = 9.98$ cm, $SE = 0.57$ cm) was 36.9% lower (***) than for the RP ($M = 15.81$ cm, $SE = 0.75$ cm). For the RP, the mean vertical error was significantly lower (*) in SameRet ($M = 12.88$ cm, $SE = 0.77$ cm) than in SameDis ($M = 15.81$ cm, $SE = 0.75$ cm) ($p < 0.05$), but not for the IVA ($p > 0.05$).
142
+
143
+ #### 4.8.2 Error Bias
144
+
145
+ With all assumptions met, a mixed-model ANOVA was conducted on the Horizontal and Vertical Error Bias, respectively. The means and 95% CIs, along with a scatter plot of the error bias for all participants, can be found in Figure 5(b) and (c).
146
+
147
+ Horizontal Error Bias: We did not find main effects of Pointer ($F(1,34) = 3.17$, $p > 0.05$), Distance ($F(1,34) = 2.47$, $p > 0.05$), or Viewing ($F(1,34) = 0.14$, $p > 0.05$) on the horizontal error bias, nor any interaction effects among the three factors.
148
+
149
+ Vertical Error Bias: We found a main effect of Pointer ($F(1,34) = 284.84$, $p < 0.001$) on the vertical error bias. The mean vertical error bias for the IVA ($M = -2.41$ cm, $SE = 0.73$ cm) was significantly lower ($p < 0.001$) than for the RP ($M = 13.47$ cm, $SE = 0.64$ cm). We did not find main effects of Distance ($F(1,34) = 1.66$, $p > 0.05$) or Viewing ($F(1,34) = 2.20$, $p > 0.05$), nor any interaction effects among the three factors.
150
+
151
+ #### 4.8.3 Interview Responses
152
+
153
+ In the post-study interview, we found that most participants took the hand as the major cue in determining the pointing direction of both the IVA (55.6%) and the RP (61.1%), as shown in Figure 6(a). We collected participants' subjective preference between the IVA and the RP regarding the ease of perceiving pointing (Figure 6(b)); the IVA was voted the easier one by the majority (55.6%), especially when Viewing was SameRet (72.2%). Since participants were told the actual pointed locations in the practice trials before each session, we also collected subjective data about whether their expected locations were close to the actual ones, to understand their perception of pointing
154
+
155
+ ![01963e81-107e-7cf1-b0e7-f49b3442d312_5_155_156_1484_318_0.jpg](images/01963e81-107e-7cf1-b0e7-f49b3442d312_5_155_156_1484_318_0.jpg)
156
+
157
+ Figure 5: Study results on (a) error, (b-c) error bias, and (d) the interaction effect Viewing $\times$ Pointer on the vertical error. (a) Mean error and 95% CIs of perceived pointing locations for the IVA and RP. The IVA yielded significantly lower error (28.8% less) in the vertical dimension and horizontal accuracy comparable to the RP. (b) Mean error bias and 95% CIs for the IVA and RP. Participants showed a systematic upward bias in perceiving the RP's pointing, which is demonstrated in (c) the scatter plot of all participants' average error bias; data points above the horizontal axis indicate upward bias. Significance values are reported for $p < 0.05$ (*), $p < 0.01$ (**), and $p < 0.001$ (***).
158
+
159
+ ![01963e81-107e-7cf1-b0e7-f49b3442d312_5_150_720_718_326_0.jpg](images/01963e81-107e-7cf1-b0e7-f49b3442d312_5_150_720_718_326_0.jpg)
160
+
161
+ Figure 6: Participants' preference between the IVA and the RP for (a) pointing cues, (b) ease of judging the pointing, and (c) deviation between the actual and perceived pointing direction. (a) Most participants took the hand as the major cue in determining the pointing direction of both the IVA (55.6%, 20/36) and the RP (61.1%, 22/36). (b) The IVA was voted the easier one by the majority (55.6%, 20/36), especially when viewing in SameRet (72.2%, 13/18). (c) Most participants (77.8%, 28/36) found the RP exhibited a larger deviation between the actual and perceived pointing in the practice trials than the IVA.
162
+
163
+ (Figure 6(c)). 77.8% of participants found that the difference between the actual and perceived location was larger for the RP than for the IVA, with 72.2% in SameRet and 83.3% in SameDis.
164
+
165
+ ## 5 Discussion
166
+
167
+ Based on the results, we summarize the following major findings, showing that participants can accurately perceive where the IVA is pointing in the real world:
168
+
169
+ - The IVA achieved accurate pointing perception, with a horizontal error of 11.58 cm comparable to the RP's 11.99 cm and a vertical error of 10.22 cm significantly lower than the RP's 14.35 cm.
170
+
171
+ - Participants showed a systematic upward bias of 13.47 cm, regardless of Distance, for the RP but not for the IVA.
172
+
173
+ - The Viewing condition did not appear to affect the accuracy difference between IVA and RP.
174
+
175
+ ### 5.1 Reflections on Design Factors
176
+
177
+ In this section, we discuss the three design factors (pointing gesture, situated display, and IVA appearance) to interpret our findings in relation to the RP and to suggest future directions.
178
+
179
+ #### 5.1.1 Pointing Gestures
180
+
181
+ In our study, the RP was asked to point as accurately as possible by naturally moving their head, eye gaze, and outstretched arm towards the target. After the experiment, we asked the RP about their pointing and found that they used eye-fingertip alignment. This is not surprising, as it is commonly observed in natural human pointing [7, 30]. However, participants may perceive the pointing gesture differently from how the RP performed it, and this mismatch between perceiving and performing the pointing gesture may explain the strong upward bias observed in the RP results (Figure 5(b)).
182
+
183
+ To illustrate this bias, consider Figure 7(a): a pointer outstretches their arm to point to a target (green cube) by placing the fingertip on the line joining the dominant eye and the target. If the viewer perceives the pointing direction by following the arm vector extended from the fingertip, there will be a vertical error: the perceived position (blue cube) deviates upward from the actual pointed position (green cube). Note that regardless of the actual target position, the vertical error will always be positive (upward), since the pointer's shoulder (the origin of the arm vector) is always below their eyes (the origin of the eye-fingertip vector). Similarly, there is also a horizontal bias, as shown in Figure 7(b). Unlike the vertical bias, the horizontal bias can be both positive (to the right) and negative (to the left), which could explain why we observed a systematic positive error bias only in the vertical direction, and not in the horizontal direction, for the RP (Figure 5(b)). The systematic upward bias for the RP (13.47 cm) is consistent with prior work [7], which found a mean angular error bias of 2.5 degrees above the target, equivalent to a mean vertical error bias of 11.26 cm averaged across the viewing distances used in our setup.
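+
+ The geometry of this bias can be made explicit with a small 2D side-view computation following Figure 7(a). The heights, arm length, and target distance below are illustrative assumptions, not study measurements; the sketch shows that the sign of the bias is always upward, although pure arm-vector extrapolation overstates the magnitude relative to the roughly 13 cm we measured, consistent with observers also integrating other cues:
+
+ ```python
+ import numpy as np
+
+ def vertical_bias(target_h, eye_h=165.0, shoulder_h=140.0,
+                   arm_len=68.0, plane_x=200.0):
+     """Signed vertical bias (cm) when the pointer aligns eye and fingertip
+     with the target but the observer extrapolates the shoulder->fingertip
+     (arm) vector to the target plane. 2D side view: x = depth, y = height."""
+     eye, shoulder = np.array([0.0, eye_h]), np.array([0.0, shoulder_h])
+     target = np.array([plane_x, target_h])
+     d = target - eye                      # eye -> target line
+     off = eye - shoulder
+     # Fingertip = eye + t*d with |fingertip - shoulder| = arm_len.
+     a, b, c = d @ d, 2 * (off @ d), off @ off - arm_len ** 2
+     t = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
+     fingertip = eye + t * d
+     arm = fingertip - shoulder            # ray the observer extrapolates
+     hit_h = shoulder[1] + arm[1] * (plane_x - shoulder[0]) / arm[0]
+     return hit_h - target_h               # positive = above the target
+
+ for h in (80.0, 120.0, 160.0, 200.0):     # targets below and above eye level
+     print(h, round(vertical_bias(h), 1))  # the bias is positive in every case
+ ```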
184
+
185
+ In the post-study interview, the majority of participants (61.1% for the RP and 55.6% for the IVA) reported that they mainly focused on the hand/arm cue as the reference for the pointing direction (Figure 6(a)). This is consistent with prior work [40], which found users may employ an imaginary ray extending from a fingertip to perceive pointing in a similar referencing task. In addition, 77.8% (28) of participants (Figure 6(c)) reported a large deviation between the actual and perceived location for the RP, with 16 participants commenting that it was confusing to find that the RP pointed higher vertically than expected. By contrast, only 19.4% reported such deviation and confusion for the IVA. This indicates that the IVA's pointing gesture, which uses arm-vector pointing rather than eye-fingertip alignment, is likely to be more aligned with how the majority perceived the pointing direction.
186
+
187
+ Besides the error bias, we also found the vertical error for the IVA (10.22 cm) was significantly lower than for the RP (14.35 cm), as shown in Figure 5(a), which is also reflected in participants' interview responses on the ease of perceiving the IVA's pointing (Figure 6(b)). Without the eye-fingertip alignment, the correct target location is reached directly by following the IVA's arm vector. Therefore, we suggest using the arm vector as the primary cue when designing the pointing gesture of an IVA for higher accuracy of pointing perception.
188
+
189
+ ![01963e81-107e-7cf1-b0e7-f49b3442d312_6_151_147_720_529_0.jpg](images/01963e81-107e-7cf1-b0e7-f49b3442d312_6_151_147_720_529_0.jpg)
190
+
191
+ Figure 7: Illustration of the error bias when using a pointing posture with eye-fingertip alignment. A pointer outstretches the arm to point to a target (green cube) by placing the fingertip on the line joining the dominant eye and the target. (a) In the side view, the perceived location (blue cube) is systematically higher than the actual location (green cube). (b) In the top view, left- and right-arm pointing with eye-fingertip alignment results in deviations in both directions.
192
+
193
+ Our results showed that participants were more accurate horizontally than vertically (Figure 5). One potential cause is the difference between horizontal and vertical visual acuity: previous work [15, 17] found that users have better horizontal than vertical acuity when perceiving gaze directions. Another possible explanation is that the pointer's arm switch provides a visual cue as to which side the target is located: the left/right arm inherently implies the left/right region, potentially making the task easier horizontally than vertically. Further experiments are needed to investigate this.
194
+
195
+ #### 5.1.2 Situated Display
196
+
197
+ Our study was conducted using a spherical FTVR display, and it is an open question how the findings apply to other display devices such as non-FTVR 2D monitors and HMDs. Conventional flat displays without FTVR, such as monitors, lack depth cues such as motion parallax and stereopsis, which are essential for pointing perception [35]. In addition, on a flat surface, the Mona Lisa effect, a phenomenon in which a character's eyes seem to follow the user irrespective of the user's position [45], could negatively affect users' perception of pointing when eye gaze serves as a pointing cue.
198
+
199
+ Flat FTVR displays provide additional depth cues compared to traditional 2D displays; the major difference between a flat and a spherical FTVR display is the shape factor. We expect it would be difficult to achieve similar results on a flat FTVR display for two reasons. First, existing studies have found that spherical FTVR displays provide better gaze, depth, and size perception than a flat counterpart [28, 62], and perceiving pointing direction depends on depth and size perception. Second, there is the vergence-accommodation conflict (VAC) [58]. Although participants in our experiments were required to sit on a fixed chair, we did not constrain their head motion. With a spherical display, they could keep a relatively constant screen distance by following its curvature [62], whereas with a flat counterpart, the viewing distance to the screen surface changes as the head moves, which might result in a more pronounced VAC. Future studies are required to investigate these issues and evaluate IVAs' pointing accuracy on flat FTVR displays.
200
+
201
+ For other 3D displays with perspective-corrected and stereo rendering, such as a CAVE or HMDs used in AR, we anticipate that similar results may be found, depending on the relative importance of each of the other design factors we investigated, i.e., IVA appearance and pointing gesture type. Future controlled experiments would be required to understand the effect of individual factors and potentially associate the results with the display factor.
202
+
203
+ #### 5.1.3 IVA Appearance
204
+
205
+ Differences in appearance between the RP and the IVA, such as gender, realism, and eyes, may have influenced participants' perception. Prior research on user preferences for agents' gender presents contradictory findings and trends, which may be due to user characteristics or context [51]. Regarding realism, four participants reported the RP to be more familiar and common. Two participants commented that the IVA's bigger eyes helped them judge the direction, whereas three participants reported the RP's eye-gaze cue to be subtle, with one indicating it was even harder to discern changes in the horizontal direction. Moreover, two participants said that they tried to avoid eye contact with the RP, while there was no such concern with the IVA. In addition, previous research showed that users exposed to images of animals with baby schema were more physically tender in their motor behavior and performed better on a task demanding extreme carefulness [56]; the baby schema of the IVA might thus have had some effect on participants' performance. Future studies could determine the extent to which each aspect contributes to pointing perception.
206
+
207
+ ### 5.2 Distance
208
+
209
+ Not surprisingly, participants perceived pointing more accurately when targets were closer, in both SameRet and SameDis. With the same target area, a farther distance results in a subtler angular change for all the pointing cues (head, eyes, and hand). Three participants also commented that it was hard to extend the arm line to locate the target when farther away. However, despite the higher difficulty at farther distances, our IVA still pointed more accurately than the real person, indicating the effectiveness of our IVA design. This also suggests that users are able to tell where an IVA is pointing across a range of distances. Practically, this implies that if an IVA were used as a home assistant or a virtual tutor, it could be situated in a single location and still point to near and far objects while saying "It's over there," providing deictic indications to users.
210
+
211
+ ### 5.3 Viewing Condition
212
+
213
+ We introduced the Viewing condition as a between-subjects factor due to the size difference between the IVA and the RP. Our main finding, that the IVA provides better pointing perception than the RP, holds in both SameRet and SameDis; incorporating the Viewing condition in the study design therefore helps to validate our results. We also found that the Viewing condition plays a role in pointing perception, given the interaction effect Viewing $\times$ Pointer (Figure 5(d)): adjusting the distance between the RP and the participant leads to different retinal sizes and causes the difference in vertical error between SameRet and SameDis for the RP. Note that the difference appears only in the vertical error and not in the vertical error bias, indicating that changing the Viewing condition did not introduce systematic bias but affected the precision of pointing perception. One possible explanation is that there exists an optimal viewing distance and retinal size for perceiving pointing direction from the pointer's posture. Our study focused on the difference between pointers and evaluated one fixed distance or retinal size; future studies are needed to investigate the potential effect of different viewing distances and retinal sizes on pointing perception.
214
+
215
+ Participants' comments on the size difference were quite divergent. In SameDis, where the retinal size of the IVA is approximately half that of the RP, 5 out of 18 participants commented that the IVA's pointing was easier, explaining that the smaller size of the IVA let them perceive more noticeable changes in eye, hand, and head orientation. Conversely, 4 out of 18 participants who found the RP easier commented that the RP's life size was more natural for perceiving the pointing. Comments in SameRet were similarly divergent: four participants preferred the IVA's smaller size whereas two preferred the life size of the RP. While future studies can quantify individuals' sensitivity to this factor, we also note that, from a practical perspective, our study shows there is unlikely to be a one-size-fits-all solution for the size and visual representation of an IVA. Thus, allowing users to tailor their IVA's appearance would be advisable.
216
+
217
+ ## 6 DESIGN IMPLICATIONS
218
+
219
+ The main design implication from our study is that, with an appropriate set of design factors, it is feasible to have an IVA point with accuracy comparable to a real person. In our IVA design, we used a spherical FTVR display, rendered a 3D cartoon IVA with human-like behaviors, and applied arm-vector pointing instead of eye-fingertip alignment, which collectively contributed to our IVA's high pointing accuracy. As the appearance and pointing gesture strategy do not depend on the display factor, we expect these design choices could be considered for other display devices. The findings serve as a foundation for designing an IVA that points to the physical world accurately and provide pathways for future studies to precisely quantify the relative contribution of each factor.
220
+
221
+ We also suggest providing more cues for perceiving pointing to objects farther away. According to our results, when participants were farther from the target, the accuracy of pointing perception decreased significantly. Visual cues such as the orientation of the head, hand, and eye gaze might not be sufficient to accurately indicate the target. Additional verbal cues, such as a location or feature description, should be considered to convey the pointing direction efficiently, which better resembles human pointing behaviour. For example, a verbal description, i.e., "it's on the table over there," can be combined with a pointing gesture in IVAs. A future study could investigate natural communication mechanisms combining voice and deictic gestures.
222
+
223
+ ## 7 LIMITATIONS AND FUTURE WORK
224
+
225
+ We discuss four limitations of our work along with the opportunities they present for future research. First, we hired a single RP as the baseline pointer. Without explicit instructions, the RP pointed naturally with the eye-fingertip alignment commonly found in natural human pointing [7, 30]. Our primary goal was to establish that users can perceive where a carefully designed IVA is pointing. While using an RP baseline illustrated some potential avenues for quantifying the differences in pointing between IVAs and real people, our study was not designed to do so; natural human pointing spans a range of gesture variations and strategies. Future studies could recruit multiple RPs spanning a range of strategies, which would help establish the robustness of IVA pointing relative to human pointing and define the lower and upper bounds of IVA/RP differences, providing additional insight into different design approaches for IVA pointing gestures and appearance.
226
+
227
+ Second, despite providing head tracking and depth cues, FTVR displays still have many technical and perceptual limitations, e.g., lower resolution and fewer depth cues than reality. These constraints may affect participants' accuracy; two participants pointed out that the IVA lacked depth information (e.g., shadows and lighting). However, the quantitative data still show higher accuracy for the IVA than the RP baseline, indicating that our display's constraints did not appear to have a notable negative impact on participants' performance. The effect of display quality characteristics on the perception of pointing should be identified in further user studies.
228
+
229
+ Third, the design of an IVA involves many factors. In this paper, we focused on a situated spherical FTVR display, a cartoon IVA appearance, and arm-vector pointing gestures, and demonstrated that these factors were sufficient for the IVA to point with accuracy comparable to a real person. Future work will focus on controlled experiments for each design factor to demonstrate its effect and the degree of individuals' sensitivity to the cues we observed. For example, to precisely quantify the effect of eye-fingertip alignment, we could have an IVA point with eye-fingertip alignment and compare it with the current design. By studying different pointing configurations, we can create a set of configurable IVA characters that individuals can personalize to optimize their interactions with the IVA.
230
+
231
+ Last, gesture and language are highly integrated components of interpersonal conversation [8, 23, 44]. Our study provides a foundation for designing IVAs that can point accurately to the real world. However, during a conversation, people do not rely on pointing gestures exclusively [7]; they typically rely differently and flexibly on gestural or verbal means [6]. Thus, a future step will examine the role of pointing gestures combined with verbal cues in establishing joint attention with the IVA.
232
+
233
+ ## 8 CONCLUSION
234
+
235
+ In this paper, we proposed an IVA with design factors including a situated display, appearance, and pointing gesture strategy to investigate whether it is possible to have an IVA point accurately into the real world. Using a spherical FTVR display, we conducted a study to measure the IVA's pointing accuracy against a natural human pointing baseline. The IVA's pointing accuracy was determined by having participants estimate where they perceived the IVA to be pointing in the real world; the participants also estimated where a real person was pointing using the same experimental setup, enabling comparison and discussion of the different design factors.
236
+
237
+ Our results show that participants perceived the IVA's pointing into the real world with accuracy comparable to the real person. Specifically, the IVA outperformed the real person in the vertical dimension and yielded the same level of accuracy horizontally. We discussed the design factors that likely contributed to the IVA's pointing accuracy and suggested directions for future studies on providing accurate pointing perception. Our results for the human pointing baseline are consistent with previous literature, showing that participants mainly focus on the pointer's hand, which leads to a bias when interpreting a real person's pointing direction. In particular, participants exhibited a systematic upward bias in the vertical dimension when perceiving the human pointer, which we suspect is due to the ambiguity associated with the eye-fingertip alignment commonly employed by people when they point in the real world. The IVA design's use of arm-vector pointing helps improve its pointing accuracy.
238
+
239
+ As voice and visual interfaces for home assistants and other digital assistants are becoming commonly used in daily life, an embodied IVA that can provide gesture cues is expected to enable a more human-like interaction. We demonstrated that a well-designed 3D visual representation of an IVA can be endowed with the capability to point to the real world with comparable accuracy to a real person. Our work shows how an IVA rendered in a 3D display can provide effective pointing gestures, which could be used in conjunction with a voice interface for natural communication bridging the virtual and the real world.
240
+
241
+ [1] M. Alexa. Amazon Alexa: 2017 User Guide + 200 Easter Eggs. Independently published, 2017.
244
+
245
+ [2] E. André, T. Rist, and J. Müller. Webpersona: a lifelike presentation agent for the world-wide web. Knowledge-Based Systems, 11(1):25-36, 1998.
246
+
247
+ [3] M. Argyle and M. Cook. Gaze and mutual gaze. 1976.
248
+
249
+ [4] R. K. Atkinson. Optimizing learning from examples using animated pedagogical agents. Journal of Educational Psychology, 94(2):416, 2002.
250
+
251
+ [5] I. Avellino, C. Fleury, and M. Beaudouin-Lafon. Accuracy of deictic gestures to support telepresence on wall-sized displays. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 2393-2396, 2015.
252
+
253
+ [6] A. Bangerter. Using pointing and describing to achieve joint focus of attention in dialogue. Psychological Science, 15(6):415-419, 2004.
254
+
255
+ [7] A. Bangerter and D. Oppenheimer. Accuracy in detecting referents of pointing gestures unaccompanied by language. Gesture, 6:85-102, 01 2006. doi: 10.1075/gest.6.1.05ban
256
+
257
+ [8] J. B. Bavelas and N. Chovil. Visible acts of meaning: An integrated message model of language in face-to-face dialogue. Journal of Language and social Psychology, 19(2):163-194, 2000.
258
+
259
+ [9] R. A. Bolt. "Put-that-there": Voice and gesture at the graphics interface, vol. 14. ACM, 1980.
260
+
261
+ [10] G. Butterworth and S. Itakura. How the eyes, head and hand serve definite reference. British Journal of Developmental Psychology, 18(1):25- 50, 2000.
262
+
263
+ [11] M. Calore. Facebook made you a smart-home device with a camera on it, Aug 2018.
264
+
265
+ [12] J. Cassell, T. Stocky, T. Bickmore, Y. Gao, Y. Nakano, K. Ryokai, D. Tversky, C. Vaucelle, and H. Vilhjálmsson. Mack: Media lab autonomous conversational kiosk. In Proc. of Imagina, vol. 2, pp. 12-15, 2002.
266
+
267
+ [13] J. Cassell, H. H. Vilhjálmsson, and T. Bickmore. Beat: the behavior expression animation toolkit. In Life-Like Characters, pp. 163-185. Springer, 2004.
268
+
269
+ [14] J. Chen, H. Cai, A. P. Auchus, and D. H. Laidlaw. Effects of stereo and screen size on the legibility of three-dimensional streamtube visualization. IEEE Transactions on Visualization and Computer Graphics, 18(12):2130, 2012.
270
+
271
+ [15] M. Chen. Leveraging the asymmetric sensitivity of eye contact for videoconference. In Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 49-56. ACM, 2002.
272
+
273
+ [16] K. Cheng and M. Takatsuka. Hand pointing accuracy for vision-based interactive systems. In IFIP Conference on Human-Computer Interaction, pp. 13-16. Springer, 2009.
274
+
275
+ [17] M. G. Cline. The perception of where a person is looking. The American Journal of Psychology, 80(1):41-50, 1967.
276
+
277
+ [18] H. Cochet and J. Vauclair. Deictic gestures and symbolic gestures produced by adults in an experimental context: Hand shapes and hand preferences. Laterality, 19:278-301, 01 2014. doi: 10.1080/1357650X.2013.804079
278
+
279
+ [19] S. Cooney, N. Brady, and A. Mckinney. Pointing perception is precise. Cognition, 177, 04 2018. doi: 10.1016/j.cognition.2018.04.021
280
+
281
+ [20] P. L. de Diesbach and D. F. Midgley. Embodied virtual agents: an affective and attitudinal approach of the effects on man-machine stickiness in a product/service discovery. In International Conference on Engineering Psychology and Cognitive Ergonomics, pp. 42-51. Springer, 2007.
282
+
283
+ [21] D. Delfino. 'what is a google home hub?': Everything you need to know about the google smart device that can help you navigate daily life, Sep 2019.
284
+
285
+ [22] J. Dinerstein, P. K. Egbert, and D. Ventura. Learning policies for embodied virtual agents through demonstration. In IJCAI, pp. 1257- 1262, 2007.
286
+
287
+ [23] R. A. Engle. Not channels but composite signals: Speech, gesture, diagrams and object demonstrations are integrated in multimodal explanations. In Proceedings of the twentieth annual conference of the cognitive science society, pp. 321-326, 1998.
288
+
289
+ [24] D. B. Fafard, Q. Zhou, C. Chamberlain, G. Hagemann, S. Fels, and I. Stavness. Design and implementation of a multi-person fish-tank virtual reality display. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, p. 5. ACM, 2018.
292
+
293
+ [25] C. Gale and A. F. Monk. Where am i looking? the accuracy of video-mediated gaze awareness. Perception & psychophysics, 62(3):586-595, 2000.
294
+
295
+ [26] J. J. Gibson and A. D. Pick. Perception of another person's looking behavior. The American journal of psychology, 1963.
296
+
297
+ [27] G. Hagemann, Q. Zhou, I. Stavness, O. Dicky Ardiansyah Prima, and S. S. Fels. Here's looking at you: A spherical FTVR display for realistic eye-contact. pp. 357-362, 11 2018. doi: 10.1145/3279778.3281456
298
+
299
+ [28] G. Hagemann, Q. Zhou, I. Stavness, and S. Fels. Investigating spherical fish tank virtual reality displays for establishing realistic eye-contact. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 950-951. IEEE, 2019.
300
+
301
+ [29] Y. Hayashi and M. Marutschke. Designing Pedagogical Agents to Evoke Emotional States in Online Tutoring: Investigating the Influence of Animated Characters, vol. 9192, pp. 372-383. 08 2015. doi: 10.1007/978-3-319-20609-7_35
302
+
303
+ [30] D. Henriques and J. Crawford. Role of eye, head, and shoulder geometry in the planning of accurate arm movements. Journal of neurophysiology, 87:1677-85, 05 2002. doi: 10.1152/jn.00509.2001
304
+
305
+ [31] O. Herbort, L.-M. Krause, and W. Kunde. Perspective determines the production and interpretation of pointing gestures. Psychonomic Bulletin & Review, 28(2):641-648, 2021.
306
+
307
+ [32] O. Herbort and W. Kunde. Spatial (mis-) interpretation of pointing gestures to distal referents. Journal of Experimental Psychology: Human Perception and Performance, 42(1):78, 2016.
308
+
309
+ [33] Y. Higuchi. MikuMikuDance, 2010.
310
+
311
+ [34] K. Kim, L. Boelling, S. Haesler, J. Bailenson, G. Bruder, and G. F. Welch. Does a digital assistant need a body? the influence of visual embodiment and social behavior on the perception of intelligent virtual agents in ar. In 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 105-114. IEEE, 2018.
312
+
313
+ [35] K. Kim, J. Bolton, A. Girouard, J. Cooperstock, and R. Vertegaal. Telehuman: Effects of 3D perspective on gaze and pose estimation with a life-size cylindrical telepresence pod. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12, pp. 2531-2540. ACM, New York, NY, USA, 2012. doi: 10.1145/2207676.2208640
314
+
315
+ [36] S. Kita. Pointing: Where language, culture, and cognition meet. Psychology Press, 2003.
316
+
317
+ [37] L.-M. Krause and O. Herbort. The observer's perspective determines which cues are used when interpreting pointing gestures. Journal of Experimental Psychology: Human Perception and Performance, 47(9):1209, 2021.
318
+
319
+ [38] R. M. Krauss, Y. Chen, and R. F. Gottesman. Lexical gestures and lexical access: a process model, p. 261-283. Language Culture and Cognition. Cambridge University Press, 2000. doi: 10.1017/ CBO9780511620850.017
320
+
321
+ [39] J. C. Lester, J. L. Voerman, S. G. Towns, and C. B. Callaway. Deictic believability: Coordinated gesture, locomotion, and speech in lifelike pedagogical agents. Applied Artificial Intelligence, 13(4-5):383-414, 1999.
322
+
323
+ [40] Y. Li, D. Hu, B. Wang, D. A. Bowman, and S. W. Lee. The effects of incorrect occlusion cues on the understanding of barehanded referencing in collaborative augmented reality. 2021.
324
+
325
+ [41] Z. Li and R. Jarvis. Visual interpretation of natural pointing gestures in 3d space for human-robot interaction. In 2010 11th International Conference on Control Automation Robotics Vision, pp. 2513-2518, Dec 2010. doi: 10.1109/ICARCV.2010.5707377
326
+
327
+ [42] K. Lorenz. Die angeborenen formen möglicher erfahrung. Zeitschrift für Tierpsychologie, 5(2):235-409, 1943.
328
+
329
+ [43] S. Mayer, V. Schwind, R. Schweigert, and N. Henze. The effect of offset correction and cursor on mid-air pointing in real and virtual environments. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3173574. 3174227
330
+
331
+ [44] D. McNeill. So you think gestures are nonverbal? Psychological review, 92(3):350, 1985.
332
+
333
+ [45] H. Mitake, T. Ichii, K. Tateishi, and S. Hasegawa. Wide viewing angle fine planar image display without the Mona Lisa effect.
334
+
335
+ [46] T. Moeslund, M. Störring, and E. Granum. Vision-based user interface for interacting with a virtual environment. 10 2000.
336
+
337
+ [47] M. Mori, K. F. MacDorman, and N. Kageki. The uncanny valley [from the field]. IEEE Robotics & Automation Magazine, 19(2):98-100, 2012.
338
+
339
+ [48] K. Nickel and R. Stiefelhagen. Pointing gesture recognition based on 3d-tracking of face, hands and head orientation. pp. 140-146, 01 2003. doi: 10.1145/958432.958460
340
+
341
+ [49] T. Noma, L. Zhao, and N. I. Badler. Design of a virtual human presenter. IEEE Computer Graphics and Applications, 20(4):79-85, 2000.
342
+
343
+ [50] Y. Pan and A. Steed. Preserving gaze direction in teleconferencing using a camera array and a spherical display. In 2012 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), pp. 1-4, Oct 2012. doi: 10.1109/3DTV.2012.6365433
344
+
345
+ [51] J. Payne, A. Szymkowiak, P. Robertson, and G. Johnson. Gendering the machine: Preferred virtual assistant gender and realism in self-service. In International Workshop on Intelligent Virtual Agents, pp. 106-115. Springer, 2013.
346
+
347
+ [52] D. Perkel. Share wars: Sharing, theft, and the everyday production of web 2.0 on deviantart. First Monday, 21(6), 2016.
348
+
349
+ [53] K. J. Rohlfing, A. Grimminger, and C. Lüke. An interactive view on the development of deictic pointing in infancy. Frontiers in psychology, 8:1319, 2017.
350
+
351
+ [54] C. L. Schmidt. Adult understanding of spontaneous attention-directing events: What does gesture contribute? Ecological Psychology, 11(2):139-174, 1999.
352
+
353
+ [55] E. Schneider, Y. Wang, and S. Yang. Exploring the uncanny valley with japanese video game characters. In DiGRA Conference, 2007.
354
+
355
+ [56] G. D. Sherman, J. Haidt, and J. A. Coan. Viewing cute images increases behavioral carefulness. Emotion, 9(2):282, 2009.
356
+
357
+ [57] A. J. Wagemakers, D. B. Fafard, and I. Stavness. Interactive visual calibration of volumetric head-tracked 3d displays. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, pp. 3943-3953. ACM, New York, NY, USA, 2017. doi: 10.1145/3025453.3025685
358
+
359
+ [58] C. Ware. Information visualization: perception for design. Morgan Kaufmann, 2019.
360
+
361
+ [59] C. Ware, K. Arthur, and K. S. Booth. Fish tank virtual reality. In Proceedings of the INTERACT'93 and CHI'93 conference on Human factors in computing systems, pp. 37-42, 1993.
362
+
363
+ [60] N. Wong and C. Gutwin. Where are you pointing? The accuracy of deictic pointing in CVEs. vol. 2, pp. 1029-1038, 01 2010. doi: 10.1145/1753326.1753480
364
+
365
+ [61] F. Wu, Q. Zhou, K. Seo, T. Kashiwaqi, and S. Fels. I got your point: An investigation of pointing cues in a spherical fish tank virtual reality display. pp. 1237-1238, 03 2019. doi: 10.1109/VR.2019.8798063
366
+
367
+ [62] Q. Zhou, G. Hagemann, D. Fafard, I. Stavness, and S. Fels. An evaluation of depth and size perception on a spherical fish tank virtual reality display. IEEE transactions on visualization and computer graphics, 25(5):2040-2049, 2019.
368
+
369
+ [63] Q. Zhou, F. Wu, S. Fels, and I. Stavness. Closer object looks smaller: Investigating the duality of size perception in a spherical fish tank vr display. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-9, 2020.
370
+
371
+ [64] Q. Zhou, K. Wu, G. Miller, I. Stavness, and S. Fels. 3dps: An auto-calibrated three-dimensional perspective-corrected spherical display. In 2017 IEEE Virtual Reality (VR), pp. 455-456. IEEE, 2017.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/HLcgsgKEpMq/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,239 @@
1
+ § IT'S OVER THERE: DESIGNING AN INTELLIGENT VIRTUAL AGENT THAT CAN POINT ACCURATELY INTO THE REAL WORLD
2
+
3
+ Anonymous
4
+
5
+
6
+
7
+ Figure 1: We investigated how accurately users can perceive where an Intelligent Virtual Agent (IVA) rendered in a 3D display is pointing in the real world.
8
+
9
+ § ABSTRACT
10
+
11
+ It is a challenge to design an intelligent virtual agent (IVA) that can point to the real world and have users accurately recognize where it is pointing. We designed an IVA with factors including a situated display, appearance, and pointing gesture strategy to establish whether it is possible to have an IVA point accurately into the real world. With a real person's pointing as a baseline, we performed an empirical study using our designed IVA and demonstrated that participants perceived the IVA's pointing to a physical location with accuracy comparable to the real-person baseline. Specifically, we found that the IVA outperformed the real person vertically (28.8% less error) and yielded comparable accuracy horizontally. Our integrated design choices provide a foundation of design factors to consider when designing IVAs for pointing and pave the way for future studies and systems providing accurate pointing perception.
12
+
13
+ Index Terms: Human-centered computing-Human computer interaction (HCI)-Empirical studies in HCI; Human-centered computing-Interaction design-Empirical studies in interaction design;
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ Many researchers have studied natural human communication cues such as voice and hand gestures [8, 23, 44]. One important aspect of human communication is deictic pointing [18, 53], a hand gesture that complements or replaces verbal communication to indicate a point of interest in a shared environment [38]. As pioneering work investigating deictic pointing into the 2D virtual world, "Put-that-there" [9] demonstrated how an intelligent virtual agent (IVA) can recognize and interpret a person's pointing gestures at objects in the virtual world to facilitate natural human-computer interaction. However, this raises the reverse question: "Can an IVA point back to the real world?" More recently, with the advances in voice-based IVAs such as Amazon Alexa®, emerging 3D display technologies provide opportunities for IVAs to perform deictic pointing to objects in the real world. We believe that enabling IVAs with pointing gestures can enrich the communication channel and promote efficient human-like interactions [34].
18
+
19
+ To enable IVAs to point effectively, we seek to determine how accurately users can interpret the direction of an IVA's pointing, establishing a fundamental building block for designing deictic interactions between users and IVAs. However, it remains unclear whether it is feasible to design an IVA whose pointing into the real world users can accurately recognize. Optimally, users should be able to interpret an IVA's pointing to the real world as well as, or even better than, a real person's pointing.
20
+
21
+ To explore this potential, we introduce design factors that may improve the chances that users can accurately perceive the IVA's pointing, to demonstrate feasibility. These include the situated display, the IVA appearance, and the pointing gesture strategy. For the situated display, we used a spherical Fish Tank Virtual Reality (FTVR) display in our IVA design. Unlike immersive displays, the spherical FTVR display is calibrated to be viewer-aware in the real-world coordinate system, enabling the IVA to point from the virtual world to objects in the real world. It also offers effective 3D depth cues for pointing perception (i.e., stereoscopic cues and motion parallax) [35]. In addition, spherical displays have been found to provide better gaze [28, 50] and size and depth [62] perception compared to flat displays. For the IVA appearance, we used an animated cartoon character that was not photo-realistic but offered natural, easy-to-control pointing affordances, avoiding the Uncanny Valley effect [47]. For the pointing gesture, we designed our IVA to point following the arm vector instead of the eye-fingertip alignment commonly found in human pointing [7, 30-32, 37], as the arm vector has been shown to provide a more accurate cue.
22
+
23
+ With a real person's pointing as the baseline for comparison, we conducted an empirical experiment to investigate how accurately users can perceive our IVA's pointing. As an IVA is usually smaller than a real person due to the size constraints of typical displays, we controlled for retinal size in the experimental design. Our results demonstrate that it is feasible to have an IVA accurately point to locations in the real world. Further, the IVA's pointing location was perceived as accurately as a real person's in our configuration. Specifically, the IVA outperformed the real person in the vertical dimension and yielded the same level of accuracy horizontally. We also discuss how the set of design factors may have contributed to this result and suggest design implications. The design factors we propose thus provide a foundation for future studies exploring the relative importance of each factor in the design of IVAs with pointing gestures. We believe our study and IVA design help pave the way for research on users' perception of pointing, whether in virtual environments or in the real world.
24
+
25
+ § 2 RELATED WORK
26
+
27
+ § 2.1 POINTING IN INTELLIGENT VIRTUAL AGENTS (IVAS)
28
+
29
+ Pointing is a fundamental building block of human communication [36]. The ubiquity of pointing drives research on incorporating it into intelligent virtual agents (IVAs) in virtual environments. Although our study does not directly concern the intelligence aspect, we use the term Intelligent Virtual Agents rather than Embodied Virtual Agents [20, 22]. This is not only because our study is motivated by and applicable to virtual agents that should be intelligent enough to interact with users through deictic gestures, but also because designing the pointing gestures of a virtual agent implies that the agent is visually embodied.
30
+
31
+ Most prior studies on IVAs with pointing focus on its benefits in drawing users' attention to content in the virtual world where the IVA is situated. For example, the Persona agent [2] could point to images on web pages, and Jack, a virtual meteorologist, could give a weather report by pointing to weather images [49]. Atkinson [4] showed an animated virtual agent serving as a tutor in a knowledge-based learning environment and demonstrated the benefits of pointing in directing learners' attention. Combining speech and context, the Behavior Expression Animation Toolkit (BEAT) created an agent that generates correlated gestures by extracting linguistic and contextual information from input text [13]. To achieve deictic believability, Lester et al. [39] designed the COSMO agent, which uses a deictic planner guided by a spatial deixis framework to determine when to generate pointing. Rather than pointing into the virtual environment, MACK, a mixed-reality agent, could point, accompanied by speech, to a physical paper map shared with users [12]. However, an unanswered question is how accurately an IVA can point to the real world.
32
+
33
+ § 2.2 PERCEPTION OF POINTING IN THE REAL WORLD
34
+
35
+ Bangerter & Oppenheimer [7] and Henriques & Crawford [30] observed that when humans point naturally, without instructions, they do not point along their arm vector; instead, they commonly orient their arm so that the fingertip intersects the line joining the target and their dominant eye while gazing at the target. This is called eye-fingertip alignment, as illustrated in Figure 2 c-2. This mechanism is also reflected in methods for estimating human pointing direction. Among the various methods proposed, the head-hand line [16, 41, 46] (also known as the eye-fingertip line) was found to be the most reliable (90%) compared to forearm direction and head orientation [48]. Mayer et al. [43] demonstrated that it yielded the lowest offset compared with four other ray-cast techniques. As our study aims to identify factors that enable an IVA to point into the real world accurately, the impact of different alignment strategies is considered in our IVA design.
36
+
37
+ Pointing behavior during interpersonal interaction typically involves movement of the eye gaze, head, and arm [30]. Considerable research has targeted gaze perception: people can accurately discern mutual gaze with another person [3, 26] and the direction of another person's gaze [25]. By contrast, research on the perception of pointing accuracy is scant. By evaluating detection accuracy for different combinations of head, eye, and hand pointing cues, Butterworth and Itakura [10] showed that pointing improves spatial localization of targets compared to head and gaze cues, but suggested that pointing had limited accuracy. Bangerter & Oppenheimer [7] contested their findings with a more precise measurement technique. Their results revealed that detection accuracy was comparable to that for eye gaze and was unaffected by the exclusion of eye gaze and head orientation. Despite the good accuracy, they observed a perceptual bias towards the side of the pointer's arm away from the observer in the horizontal dimension, and above the target in the vertical dimension. They suggested that the ambiguity introduced by the deviation between the eye-fingertip line and the arm line might account for this. A study by Cooney et al. [19] evaluated pointing accuracy in the horizontal direction and replicated Bangerter & Oppenheimer's results. Considering the ambiguity in human pointing, and exploiting the fact that we have explicit control over the IVA's head, eye, and finger positioning, we designed our IVA to use arm-vector pointing rather than eye-fingertip alignment to improve its pointing accuracy, as illustrated in Figure 2.
38
+
39
+ Finally, during interpersonal interactions, the accuracy with which observers can detect pointed targets from another person's pointing gestures has been a key issue: if a person cannot accurately interpret the other's pointing direction, it is difficult to establish joint attention within a conversation [10]. Prior research shows that the distance between users and targets can affect users' interpretation of the pointing direction [5, 16, 60]. To study this effect, we configured distance as an independent variable to investigate how accuracy changes at different distances.
40
+
41
+ § 2.3 PERCEPTION OF POINTING IN VIRTUAL ENVIRONMENTS
42
+
43
+ While pointing is ubiquitous in daily interactions in the real world, it is difficult for users to precisely interpret pointing direction in virtual environments. Wong and Gutwin [60] compared users' accuracy in a collaborative virtual environment (CVE) with the real world and observed worse performance in the CVE, although the difference was smaller than expected. Immersive head-mounted displays (HMDs) and virtual reality (VR) systems (e.g., CAVEs) only support pointing within the virtual environment where the IVA is situated. By merging the real world with the virtual environment, FTVR displays [59] enable the IVA to point from the virtual world into the real world and provide a mixed-reality experience. Our experiment used a spherical FTVR display because it has advantages over other VR/AR displays and planar displays, as we discuss in Section 3.1.
44
+
45
+ Regarding users' perception of pointing in FTVR, previous research has focused on assessing pointing cues. Kim et al. [35] examined pointing cues at three levels: gaze, hand, and gaze+hand. They found no significant difference among the three levels in an experiment with a cylindrical 3D display. Using gaze to convey pointing direction within a spherical display has also been shown to be effective [27, 35, 50].
46
+
47
+ The research listed above is mostly concerned with telepresence; that is, a remote person is represented by an avatar or captured using cameras to enable remote collaboration. By contrast, we use an IVA to perform the pointing. In this context, the IVA is regarded as a social entity that mimics human intelligence [34] and works with a person. Unlike pointing in telepresence, designing the IVA's pointing gestures offers more possibilities to improve users' perception of pointing, as the pointing behaviours do not have to be exactly human-like. Thus, we have the opportunity to design pointing gestures for the IVA that differ from natural human pointing. This enables us to remove the eye-fingertip alignment from the IVA, as suggested in Section 2.2. The complete IVA design is discussed in Section 3.
48
+
49
+ § 3 DESIGN FACTORS
50
+
51
+ This section elaborates on the design factors that enable our IVA to point as accurately as possible: the situated display, the IVA appearance, and the pointing gesture strategy.
52
+
53
+ § 3.1 SITUATED DISPLAY
54
+
55
+ We used a spherical FTVR display for the IVA for the following reasons. First, FTVR displays are situated in the real world, which enables the IVA to point from the virtual environment to locations in the real world. Alternative approaches, such as immersive headset displays, only support pointing within the virtual environment where the IVA is situated. Though AR displays provide a see-through feature that can achieve similar effects, these systems lack the tangible nature of a volumetric display that is part of the real world. FTVR displays also provide motion parallax and stereoscopic cues, which are important for interpreting pointing gestures [35]. The spherical shape has been found to provide better depth and size perception compared to a planar counterpart [62]. Spherical screens have also shown better task performance in perceiving gaze direction compared to planar screens [28, 50]. As perceiving pointing gestures depends on multiple aspects of visual perception, such as depth and orientation perception, spherical FTVR displays are a promising way to improve pointing perception.
56
+
57
+ § 3.2 IVA APPEARANCE
58
+
59
+ The state of the art in photo-realistic representations for IVAs is subject to the Uncanny Valley [47]: a high degree of realism does not necessarily lead to positive evaluations. Considering this effect, Schneider et al. [55] suggest using a non-human appearance with the ability to behave like a human. Following this suggestion, we chose a Japanese female cartoon character as our IVA to avoid the negative feelings caused by a near-human appearance while supporting human-like behaviors. Our IVA is designed with large eyes and a small nose (Figure 3), characteristics of the baby schema [42], which can induce a pleasurable feeling [29].
60
+
61
+ § 3.3 POINTING GESTURES
62
+
63
+ We designed our IVA to point following the arm vector (Figure 2 c-1) instead of the eye-fingertip alignment (Figure 2 c-2) to avoid potential perceptual ambiguity. As discussed in Section 2.2, humans commonly point to where they are looking by aligning their fingertip with the gaze of their dominant eye [7, 30] (Figure 2 c-2). When it comes to perceiving others' pointing, this can introduce ambiguity because the location indicated by the arm vector differs from the actual target location on the eye-fingertip line. Previous work [7] found that participants exhibited a perceptual bias above the target, potentially because of this ambiguity. Therefore, rather than designing the IVA to point the way humans commonly do (i.e., with eye-fingertip alignment), we remove the eye-fingertip alignment from the IVA's pointing gestures; that is, the arm vector points directly at targets (Figure 2 c-1). Our expectation is that this approach mitigates the perceptual errors of eye-fingertip alignment and results in a perceptually accurate IVA pointing gesture.
64
+
65
+ For pointing cues, previous research has found that the orientation of the pointer's eyes, head, and hand serve as visual cues for an observer interpreting a pointing gesture [30, 35]. Prior work [61] found that the hand cue alone provides accurate pointing perception but at a loss of naturalness. In our study, we decided to include all the pointing cues, i.e., eye, head, and hand orientations, to promote accurate and natural perception. In summary, we designed our IVA to point with an outstretched arm and with eyes and head facing the target, without eye-fingertip alignment; thus, all cues consistently direct attention to the same location.
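+
+ A minimal Python sketch of this aiming scheme follows; the joint positions, coordinate convention, and helper function are hypothetical illustrations, not the system's actual implementation.
+
+ ```python
+ import numpy as np
+
+ # Sketch of aiming every cue (arm, head, eyes) at one real-world point.
+ # A hedged illustration: joint positions and the +z-forward convention
+ # are assumptions, not values from the paper.
+ def aim_angles(origin, target):
+     """Yaw/pitch (degrees) that point a +z forward vector at `target`."""
+     v = np.asarray(target, float) - np.asarray(origin, float)
+     yaw = np.degrees(np.arctan2(v[0], v[2]))                    # about y
+     pitch = np.degrees(np.arctan2(v[1], np.hypot(v[0], v[2])))  # about x
+     return yaw, pitch
+
+ shoulder = (0.15, 1.20, 0.0)   # hypothetical positions in metres
+ head     = (0.0, 1.45, 0.0)
+ target   = (1.0, 0.90, 2.0)    # a point in real-world display coordinates
+
+ print("arm  yaw/pitch:", aim_angles(shoulder, target))   # drives the arm
+ print("head yaw/pitch:", aim_angles(head, target))       # head and eyes
+ ```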
66
+
67
68
+
69
+ Figure 2: We consider three design factors to enable accurate perception of the pointing performed by IVA, including situated display, IVA appearance, and pointing gesture. (a) We used a situated spherical 3D display as it offers effective depth cues for pointing perception. (b) We used an animated cartoon character that offers natural, easy-to-control pointing affordance. (c) We designed our IVA to point following the arm vector (c-1) instead of the eye-fingertip alignment (c-2) to avoid potential perceptual ambiguity.
70
+
71
+ § 4 EXPERIMENT
72
+
73
+ The goal of our experiment is to assess how accurately our IVA can point into the real world. With a real person's natural pointing as the baseline, we measured how accurately a human observer can interpret the pointing of our IVA or of the real person to a physical location. In doing so, we lay a foundation for the design of IVAs with pointing and shed light on future studies of the contributions of individual design factors.
74
+
75
+ § 4.1 PARTICIPANTS
76
+
77
+ Thirty-six participants (19 female and 17 male) aged between 21 and 30 were recruited from a local university to participate in the study, with a $10 gift card as compensation. All had normal or corrected-to-normal vision.
78
+
79
+ § 4.2 APPARATUS
80
+
81
+ We set up the experiment using a situated 24-inch spherical display (Figure 3), which renders the IVA, and a flat fabric projector screen, which renders the target area. With four stereo projectors rear-projecting onto the spherical surface, we adopted an automated camera-based multi-projector calibration technique [64] to produce a 360-degree seamless image with 1-2 millimeter accuracy. The projectors are Optoma ML750ST units with 1024 × 768 pixel resolution and a frame rate of 120 Hz. A host computer with an NVIDIA Quadro K5200 graphics card sends rendering content to the projectors. Our IVA was rendered using Unity3D with a MikuMikuDance [33] model from DeviantArt [52]. We used an OptiTrack™ system to track passive markers attached to the shutter glasses for head tracking, and a pattern-based viewpoint calibration [57] that computed a viewpoint registration with an average angular error of less than one degree. Viewers see perspective-corrected images with stereo rendering coupled with synchronized shutter glasses. The total latency is between 10-20 msec [24]. With a resolution of 34.58 ppi, the display provides various 3D depth cues such as motion parallax and stereoscopic cues [63]. Another Optoma ML750ST projector with 1024 × 768 pixel resolution displayed an 80 cm × 80 cm target area on the flat fabric projector screen. The grid content and target indicator were created in Unity3D.
82
+
83
+ § 4.3 HUMAN AND IVA POINTING
84
+
85
+ As a baseline, an independent real person (RP) was hired as the pointer. The RP's dominant hand and eye are both on the right side. To capture natural human pointing as the baseline, the RP was not instructed in any specific manner of pointing but simply asked to point as accurately as possible with head, eyes, and outstretched arm. Both the RP and the IVA used the left arm to point to targets in the left region and the right arm for targets in the center or right region.
86
+
87
+ In practice, most IVAs are rendered on relatively small displays, such as home assistant systems [1, 11, 21]. The size difference between the IVA and RP makes a fair comparison of pointing perception challenging. To characterize the potential effect of this size difference, we include two viewing conditions in our study: SameDis and SameRet (Figure 4). In SameDis, the IVA and RP are placed at the same observation distance from the participant; in this condition, the retinal image of the IVA is smaller than that of the RP, with arm lengths of 30.5 cm for the IVA and 68 cm for the RP. In SameRet, the retinal sizes of the IVA and RP are equalized by moving the RP 56 cm farther from the participant, resulting in the same angular size of the arm in both. This viewing-condition design is based on a previous study which found that task performance in visual reasoning did not vary as long as the retinal image was unchanged, demonstrated with a larger display placed farther away than a smaller display [14]. However, moving the RP away may introduce experimental bias by increasing the viewing distance. We included both conditions (same retinal size and same viewing distance) to see what impact the size factor has.
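+
+ As a back-of-the-envelope check of this configuration (not a calculation reported in the paper), the equal-angular-size requirement together with the stated 56 cm offset implies the viewing distances computed below; the solved IVA distance is a derived assumption, not a reported value.
+
+ ```python
+ import math
+
+ # Back-of-the-envelope check: equal angular arm size plus the 56 cm
+ # offset pins down both viewing distances.
+ arm_iva, arm_rp = 0.305, 0.68      # arm lengths in metres (Sect. 4.3)
+ offset = 0.56                      # RP placed 56 cm farther away
+
+ # Equal angular size requires arm_iva / d_iva = arm_rp / d_rp,
+ # and d_rp = d_iva + offset; solve for d_iva:
+ d_iva = offset / (arm_rp / arm_iva - 1.0)
+ d_rp = d_iva + offset
+
+ def ang(size, d):  # full visual angle in degrees
+     return math.degrees(2 * math.atan(size / (2 * d)))
+
+ print(f"d_iva = {d_iva:.2f} m, d_rp = {d_rp:.2f} m")
+ print(f"angles: IVA {ang(arm_iva, d_iva):.1f} deg, "
+       f"RP {ang(arm_rp, d_rp):.1f} deg")   # the two angles match
+ ```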
88
+
89
+ § 4.4 EXPERIMENTAL DESIGN
90
+
91
+ We followed a 2 × 2 × 2 mixed design with one between-subjects variable (C1) and two within-subjects variables (C2, C3):
92
+
93
+ * C1 The Viewing condition, which could be Same Retinal Size (SameRet) or Same Distance (SameDis). In SameDis, the viewing distances of RP and IVA are the same. In SameRet, the retinal sizes are the same, with RP placed 56 cm farther from the participant than IVA (Figure 4).
94
+
95
+ * C2 Pointer, which could be Intelligent Virtual Agent (IVA) or Real Person (RP).
96
+
97
+ * C3 Distance, which could be near or far. The distance between the participant and the target area is 70 cm in near and 210 cm in far.
98
+
99
+ We designed C1 as a between-subjects variable to avoid learning and transfer across viewing conditions. We randomly and equally divided the 36 participants into two groups; one group experienced SameRet and the other SameDis. Each group went through all levels of C2 × C3. The order of C2 and C3 was fully counterbalanced.
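+
+ A small stdlib sketch of one way to realize this counterbalancing follows; the assignment scheme and participant mapping are illustrative assumptions, not the study's actual scripts.
+
+ ```python
+ from itertools import cycle, product
+
+ # Counterbalancing the order of C2 (Pointer) and C3 (Distance) across
+ # participants: cycle through the four order combinations.
+ pointer_orders  = (("IVA", "RP"), ("RP", "IVA"))
+ distance_orders = (("near", "far"), ("far", "near"))
+ orderings = cycle(product(pointer_orders, distance_orders))  # 4 orders
+
+ for pid, (p_order, d_order) in zip(range(8), orderings):  # first 8 shown
+     blocks = [(p, d) for p in p_order for d in d_order]   # 4 blocks each
+     print(f"P{pid:02d}:", blocks)
+ ```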
100
+
101
+ We measured error and error bias in the horizontal and vertical dimensions, as prior studies found systematic bias, particularly in the vertical direction [7, 19]. We collected subjective data through a post-study interview. The quantitative metrics are as follows (a computational sketch follows the list):
102
+
103
+ * Horizontal & Vertical Error, defined as the absolute distance between the actual target location and the participant's perceived location along the corresponding axis.
104
+
105
+ * Horizontal & Vertical Error Bias, computed by subtracting the actual position from the perceived position along the corresponding axis. Positive values indicate estimates to the right of or above the true location, respectively.
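+
+ As a concrete reading of these definitions, the sketch below computes all four metrics from paired actual and perceived positions; the data and coordinate convention are hypothetical.
+
+ ```python
+ import numpy as np
+
+ # Sketch of the four metrics (assumed, not the authors' analysis code).
+ # `actual` and `perceived` are N x 2 arrays of (x, y) positions in cm on
+ # the target plane, with +x to the right and +y upward.
+ def pointing_metrics(actual, perceived):
+     diff = perceived - actual
+     return {
+         "horizontal_error": np.abs(diff[:, 0]).mean(),  # unsigned, x-axis
+         "vertical_error":   np.abs(diff[:, 1]).mean(),  # unsigned, y-axis
+         "horizontal_bias":  diff[:, 0].mean(),          # + = right of target
+         "vertical_bias":    diff[:, 1].mean(),          # + = above target
+     }
+
+ # Hypothetical trials with a consistent upward offset:
+ actual = np.array([[10.0, 20.0], [40.0, 40.0], [65.0, 10.0]])
+ perceived = actual + np.array([[1.0, 12.0], [-2.0, 14.0], [0.5, 13.0]])
+ print(pointing_metrics(actual, perceived))
+ ```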
106
+
107
+ § 4.5 TASK
108
+
109
+ In each trial, participants observed the pointing performed by the IVA or RP and reported the pointing position by clicking where they believed the IVA or RP was pointing, using a mouse. They were asked to prioritize accuracy over speed. The pointing positions were located within an 80 cm × 80 cm square projected onto a fabric projector screen, placed beside the participants as the target area (Figure 4). An early pilot showed that the task might be too difficult against a blank background due to the lack of reference points, so we provided a relatively dense 40 × 40 line grid as the target background (Figure 4).
110
+
111
+ § 4.6 PROCEDURE
112
+
113
+ Participants started by filling out a consent form, followed by a verbal explanation of the experiment. Participants sat on an adjustable chair (Figure 3) to ensure horizontal alignment of their shoulder with the pointer's shoulder for both the IVA and RP, and were seated to the right side of the pointer (Figure 4). The distance between the participant and the target area was 70 cm in near and 210 cm in far, chosen to represent proximal pointing at the near distance and to approximate distal pointing [54] within the constraints of the experimental room.
114
+
115
+ Each participant was provided with a mouse and a clipboard to hold it. They were instructed to click where they believed the IVA or RP was pointing, prioritizing accuracy over speed. Each of the 4 conditions (IVA vs. RP, Near vs. Far) contained 20 trials at different locations, the first 5 of which were practice trials located at the left middle, right middle, top middle, bottom middle, and center to illustrate the entire region. Participants were told the actual locations in the practice trials. In the formal trials, target locations were randomly generated and could be anywhere inside the target area. To prevent previous targets from serving as a reference, participants were instructed to close their eyes between trials.
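+
+ A minimal sketch of this trial structure follows; the exact coordinates of the five practice positions are assumptions, as the paper does not give them.
+
+ ```python
+ import random
+
+ # Sketch of the per-condition trial list (assumed, not the study's code).
+ # Positions are (x, y) in cm inside the 80 cm x 80 cm target area.
+ SIZE = 80.0
+ PRACTICE = [
+     (0.0, SIZE / 2),       # left middle
+     (SIZE, SIZE / 2),      # right middle
+     (SIZE / 2, SIZE),      # top middle
+     (SIZE / 2, 0.0),       # bottom middle
+     (SIZE / 2, SIZE / 2),  # center
+ ]
+
+ def make_trials(n_formal=15, seed=None):
+     """5 fixed practice targets, then uniformly random formal targets."""
+     rng = random.Random(seed)
+     formal = [(rng.uniform(0, SIZE), rng.uniform(0, SIZE))
+               for _ in range(n_formal)]
+     return PRACTICE + formal
+
+ trials = make_trials(seed=42)          # 20 trials per condition
+ print(len(trials), trials[5])
+ ```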
116
+
117
+ When the participant was ready, the IVA pointed to random locations inside the target area, controlled by the experimenter using a keyboard, whereas the RP performed the pointing gesture using a visible random target while the participant had their eyes closed. When the gesture was ready, the reference target for the RP disappeared and the participant was asked to open their eyes to perform the task. The IVA and RP held the gesture until the participant had finished the click and said "okay." No other communication took place between participants and the RP. Once all conditions were completed, a follow-up interview collected participants' subjective preference between the IVA and RP regarding the ease of perceiving pointing and the difference between the perceived and actual pointed locations. We also asked which pointing cues they relied on in the task. The study took approximately 30-40 min to complete.
118
+
119
+ § 4.7 DATA ANALYSIS
120
+
121
+ We conducted a mixed ANOVA with C1 Viewing as a between-subjects factor and C2 Pointer and C3 Distance as within-subjects factors. Significance values are reported for $p < .05$ (*), $p < .01$ (**), and $p < .001$ (***). Numbers in brackets indicate the mean (M) and standard error (SE) for each respective measurement. Post-hoc analyses were conducted using pairwise t-tests with Bonferroni corrections.
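+
+ The analysis could be reproduced along the following lines; the sketch uses the pingouin package on synthetic data, which is an assumption (the paper does not name its statistics software), and shows only the Viewing (between) × Pointer (within) slice of the full 2 × 2 × 2 model.
+
+ ```python
+ import numpy as np
+ import pandas as pd
+ import pingouin as pg
+
+ # Synthetic long-format data: one row per participant x pointer level.
+ # pingouin's mixed_anova handles one between- and one within-subjects
+ # factor; the full 2 x 2 x 2 model would need a dedicated mixed-model tool.
+ rng = np.random.default_rng(0)
+ rows = []
+ for pid in range(36):
+     viewing = "SameRet" if pid < 18 else "SameDis"
+     for pointer in ("IVA", "RP"):
+         base = 10.2 if pointer == "IVA" else 14.4   # illustrative means (cm)
+         rows.append({"pid": pid, "viewing": viewing, "pointer": pointer,
+                      "v_err": base + rng.normal(0, 2)})
+ df = pd.DataFrame(rows)
+
+ print(pg.mixed_anova(data=df, dv="v_err", within="pointer",
+                      subject="pid", between="viewing"))
+
+ # Post-hoc pairwise comparisons with Bonferroni correction
+ # (pingouin >= 0.5; older versions name this pairwise_ttests).
+ print(pg.pairwise_tests(data=df, dv="v_err", within="pointer",
+                         subject="pid", between="viewing", padjust="bonf"))
+ ```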
122
+
123
124
+
125
+ Figure 3: (Left) The Intelligent Virtual Agent (IVA) in a spherical Fish Tank Virtual Reality (FTVR) display that enables the IVA to point from the virtual world to the real world. (Middle) Experimental setup with the IVA as the pointer. A participant wears tracked shutter glasses to perceive the perspective-corrected stereoscopic IVA on the spherical FTVR display. (Right) Experimental setup with a real person (RP) as the pointer. An RP was hired to perform natural pointing as a baseline for comparison with the IVA's pointing.
126
+
127
128
+
129
+ Figure 4: Experimental layout in the two viewing conditions. (Top) Same Distance (SameDis): the real person (RP) and IVA are at the same viewing distance from the participant; the retinal size of the IVA is smaller than that of the RP. (Bottom) Same Retinal Size (SameRet): the RP and IVA are at different viewing distances from the participant to keep the retinal size the same.
130
+
131
+ § 4.8 RESULTS
132
+
133
+ § 4.8.1 ERROR
134
+
135
+ With all assumptions met, we used a 2 × 2 × 2 mixed-model ANOVA (Viewing × Pointer × Distance) on the Horizontal Error and Vertical Error respectively (Figure 5(a)).
136
+
137
+ Horizontal Error: We found a main effect of Distance on horizontal error ($F(1,34) = 69.16$, $p < 0.001$). The mean horizontal error in near Distance (M = 9.98 cm, SE = 0.35 cm) was 26.5% lower (***) than in far (M = 13.58 cm, SE = 0.52 cm). We did not find main effects of Viewing ($F(1,34) = 0.44$, $p > 0.05$) or Pointer ($F(1,34) = 0.44$, $p > 0.05$). No interaction effects were found among the factors.
138
+
139
+ Vertical Error: We found main effects of Pointer ($F(1,34) = 29.42$, $p < 0.001$) and Distance ($F(1,34) = 31.74$, $p < 0.001$) on vertical error. The mean vertical error in IVA (M = 10.22 cm, SE = 0.37 cm) was 28.8% lower (***) than in RP (M = 14.35 cm, SE = 0.56 cm). The mean vertical error in near Distance (M = 11.11 cm, SE = 0.47 cm) was 17.5% lower (***) than in far (M = 13.46 cm, SE = 0.52 cm). We did not find a main effect of Viewing ($F(1,34) = 2.66$, $p > 0.05$). A two-way interaction effect was observed between Viewing and Pointer ($F(1,34) = 5.05$, $p < 0.05$).
140
+
141
+ A post-hoc analysis of the two-way interaction Viewing × Pointer (Figure 5(d)) shows a significant difference in vertical error between RP and IVA in both SameRet ($p < 0.05$) and SameDis ($p < 0.001$). In SameRet, the mean vertical error in IVA (M = 10.47 cm, SE = 0.48 cm) was 18.7% lower (*) than in RP (M = 12.88 cm, SE = 0.77 cm). In SameDis, the mean vertical error in IVA (M = 9.98 cm, SE = 0.57 cm) was 36.9% lower (***) than in RP (M = 15.81 cm, SE = 0.75 cm). For RP, the mean vertical error was significantly lower (*) in SameRet (M = 12.88 cm, SE = 0.77 cm) than in SameDis (M = 15.81 cm, SE = 0.75 cm) ($p < 0.05$), but there was no such difference ($p > 0.05$) for IVA.
142
+
143
+ § 4.8.2 ERROR BIAS
144
+
145
+ With all assumptions met, a mixed-model ANOVA was conducted on the Horizontal and Vertical Error Bias respectively. The means, 95% CIs, and a scatter plot of the error bias for all participants can be found in Figure 5(b)(c).
146
+
147
+ Horizontal Error Bias: We did not find main effects of Pointer ($F(1,34) = 3.17$, $p > 0.05$), Distance ($F(1,34) = 2.47$, $p > 0.05$), or Viewing ($F(1,34) = 0.14$, $p > 0.05$) on horizontal error bias, nor any interaction effects among the three factors.
148
+
149
+ Vertical Error Bias: We found a main effect of Pointer ($F(1,34) = 284.84$, $p < 0.001$) on vertical error bias. The mean vertical error bias in IVA (M = -2.41 cm, SE = 0.73 cm) was significantly lower ($p < 0.001$) than in RP (M = 13.47 cm, SE = 0.64 cm). We did not find main effects of Distance ($F(1,34) = 1.66$, $p > 0.05$) or Viewing ($F(1,34) = 2.20$, $p > 0.05$), nor any interaction effects among the three factors.
150
+
151
+ § 4.8.3 INTERVIEW RESPONSES
152
+
153
+ In the post-study interview, we found that most participants took the hand as the major cue in determining the pointing direction of both the IVA (55.6%) and RP (61.1%), as shown in Figure 6(a). We collected participants' subjective preference between IVA and RP regarding the ease of perceiving pointing (Figure 6(b)) and found that the IVA was voted easier by the majority (55.6%), especially when Viewing was SameRet (72.2%). Since participants were told the actual pointed locations in the practice trials before each session, we also collected subjective data about whether their expected locations were close to the actual ones, to understand their perception of pointing
154
+
155
156
+
157
+ Figure 5: Study results on (a) error, (b-c) error bias, and (d) the interaction effect Viewing × Pointer on vertical error. (a) Mean error and 95% CIs of perceived pointing locations for IVA and RP. The IVA yielded significantly lower error (28.8% less) in the vertical dimension and horizontal accuracy comparable to RP. (b) Mean error bias and 95% CIs for IVA and RP. Participants showed a systematic upward bias in perceiving RP's pointing, as shown in (c), the scatter plot of all participants' average error bias; data points above the horizontal axis indicate upward bias. Significance values are reported for $p < 0.05$ (*), $p < 0.01$ (**), and $p < 0.001$ (***).
158
+
159
160
+
161
+ Figure 6: Participants' preferences between IVA and RP for (a) pointing cues, (b) ease of judging the pointing, and (c) deviation between the actual and perceived pointing direction. (a) Most participants took the hand as the major cue in determining the pointing direction of both IVA (55.6%, 20/36) and RP (61.1%, 22/36). (b) The IVA was voted easier by the majority (55.6%, 20/36), especially when viewing in SameRet (72.2%, 13/18). (c) Most participants (77.8%, 28/36) found that RP exhibited a larger deviation between the actual and perceived pointing in the practice trials than IVA.
162
+
163
+ (Figure 6(c)). 77.8% of participants found that the difference between the actual and perceived location was larger for RP than for IVA, with 72.2% in SameRet and 83.3% in SameDis.
164
+
165
+ § 5 DISCUSSION
166
+
167
+ Based on the results, we summarize the following major findings, showing that participants could accurately perceive where the IVA was pointing in the real world:
168
+
169
+ * The IVA achieved accurate pointing perception, with a horizontal error of 11.58 cm comparable to the RP's 11.99 cm and a vertical error of 10.22 cm significantly lower than the RP's 14.35 cm.
170
+
171
+ * Participants showed a systematic upward bias of 13.47 cm, regardless of Distance, for RP but not for IVA.
172
+
173
+ * The Viewing condition did not appear to affect the accuracy difference between IVA and RP.
174
+
175
+ § 5.1 REFLECTIONS ON DESIGN FACTORS
176
+
177
+ In this section, we discuss the three design factors (pointing gesture, situated display, and IVA appearance) to provide interpretations of our findings in relation to RP as well as suggest future directions.
178
+
179
+ § 5.1.1 POINTING GESTURES
180
+
181
+ In our study, the RP was asked to point as accurately as possible by naturally moving their head, eye gaze, and outstretched arm towards the target. After the experiment, we asked the RP about their pointing and found that the RP used eye-fingertip alignment. This is not surprising, as it is commonly observed in natural human pointing [7, 30]. However, participants may perceive the pointing gesture differently from how the RP performed it. This difference between perceiving and performing the pointing gesture potentially explains the strong upward bias observed in the RP results (Figure 5(b)).
182
+
183
+ To illustrate this bias, consider Figure 7(a): a pointer outstretches their arm to point to a target (green cube) by placing the fingertip on the line joining the dominant eye and the target. If the viewer perceives the pointing direction by following the arm vector extended from the fingertip, there will be a vertical error yielding an incorrect position (blue cube) biased upward from the actual pointed position (green cube). Note that regardless of the actual target position, the vertical error will always be positive (upwards), since the pointer's shoulder (the origin of the arm vector) is always below their eyes (the origin of the eye-fingertip vector). Similarly, there will also be a horizontal bias, as shown in Figure 7(b). Unlike the vertical bias, the horizontal bias can be both positive (to the right) and negative (to the left), which could explain why we observed systematic positive error bias only in the vertical direction and not in the horizontal direction for RP (Figure 5(b)). The systematic upward bias of RP (13.47 cm) is consistent with prior work [7], which found a mean angular error bias of 2.5 degrees above the target, equivalent to a mean vertical error bias of 11.26 cm averaged across the viewing distances used in our setup.
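+
+ This geometry can be made concrete with a short numerical sketch. All body dimensions below are illustrative assumptions; the exact bias magnitude depends on them, but the sign is always upward because the shoulder sits below the eye.
+
+ ```python
+ import numpy as np
+
+ # Numerical sketch of Figure 7(a) in 2D (depth, height), in metres.
+ eye      = np.array([0.0, 1.60])   # dominant eye (assumed height)
+ shoulder = np.array([0.0, 1.35])   # origin of the arm vector (assumed)
+ target   = np.array([2.0, 1.00])   # actual target on a plane 2 m away
+ reach    = 0.68                    # arm length (RP arm length, Sect. 4.3)
+
+ # Eye-fingertip alignment: the fingertip lies on the eye->target line,
+ # at arm's reach from the shoulder (a ray-circle intersection).
+ d = target - eye
+ d = d / np.linalg.norm(d)
+ oc = eye - shoulder
+ t = -oc @ d + np.sqrt((oc @ d) ** 2 - (oc @ oc - reach ** 2))
+ fingertip = eye + t * d
+
+ # An observer who extrapolates the shoulder->fingertip (arm) vector
+ # to the target plane perceives a location above the true target.
+ arm = fingertip - shoulder
+ s = (target[0] - shoulder[0]) / arm[0]
+ perceived = shoulder + s * arm
+ print(f"upward bias: {perceived[1] - target[1]:+.2f} m")  # always positive
+ ```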
184
+
185
+ In the post-study interview, the majority of participants (61.1% for RP and 55.6% for IVA) reported that they mainly relied on the hand/arm cue to find the pointing direction (Figure 6(a)). This is consistent with prior work [40], which found users may employ an imaginary ray extending from the fingertip to perceive pointing in a similar referencing task. In addition, 77.8% (28) of participants (Figure 6(c)) reported a large deviation for RP between the actual and perceived location, with 16 participants commenting that it was confusing that RP pointed higher vertically than expected. By contrast, only 19.4% reported such deviation and confusion for IVA. This indicates that the IVA's pointing gesture, which uses arm-vector pointing rather than eye-fingertip alignment, is likely more aligned with how the majority perceived the pointing direction.
186
+
187
+ Beyond the error bias, we also found that the vertical error for IVA (10.22 cm) was significantly lower than for RP (14.35 cm), as shown in Figure 5(a), which is also reflected in participants' interview responses on the ease of perceiving the IVA's pointing (Figure 6(b)). Without eye-fingertip alignment, the correct target location can be reached directly by following the IVA's arm vector. Therefore, we suggest using the arm vector as the primary cue when designing the pointing gesture of an IVA for higher accuracy of pointing perception.
188
+
189
190
+
191
+ Figure 7: Illustration of the error bias when using a pointing posture with eye-fingertip alignment. A pointer outstretches the arm to point to a target (green cube) by placing the fingertip on the line joining the dominant eye and the target. (a) In the side view, the perceived location (blue cube) is systematically higher than the actual location (green cube). (b) In the top view, left- and right-arm pointing with eye-fingertip alignment results in deviations in both directions.
192
+
193
+ Our results showed that participants were more accurate horizontally than vertically (Figure 5). One potential cause is the difference between horizontal and vertical visual acuity: previous work [15, 17] found that users have better horizontal than vertical visual acuity when perceiving gaze directions. Another possible explanation is that the pointer's arm switching provides a visual cue to which side the target is located on; the left/right arm inherently implies the left/right region, potentially making the task easier horizontally than vertically. Further experiments are needed to investigate this.
194
+
195
+ § 5.1.2 SITUATED DISPLAY
196
+
197
+ Our study was conducted using a spherical FTVR display; it is an open question how the findings apply to other display devices such as non-FTVR 2D monitors and HMDs. Conventional flat displays without FTVR rendering, such as monitors, lack depth cues like motion parallax and stereopsis, which are essential for pointing perception [35]. In addition, on a flat surface, the Mona Lisa effect, a phenomenon in which a character's eyes seem to follow the user irrespective of the user's position [45], could negatively affect users' perception of pointing when eye gaze is a pointing cue.
198
+
199
+ Flat FTVR displays provide additional depth cues compared to traditional 2D displays; the major difference between a flat and a spherical FTVR display is the shape factor. We expect it would be difficult to achieve similar results on a flat FTVR display for two reasons. First, existing studies have found that spherical FTVR displays provide better gaze, depth, and size perception than a flat counterpart [28, 62], and perceiving pointing direction depends on depth and size perception. A related issue is the vergence-accommodation conflict (VAC) [58]. Although participants in our experiments were required to sit on a fixed chair, we did not constrain their head motion. With a spherical display, they could keep a relatively constant distance to the screen by following its curvature [62], whereas with a flat counterpart, users' viewing distance to the screen surface changes as they move their head, which might result in a more pronounced VAC. Future studies are required to investigate these issues and evaluate IVAs' pointing accuracy on flat FTVR displays.
200
+
201
+ For other 3D displays with perspective-corrected and stereo rendering, such as a CAVE and HMDs used in AR, we anticipate that similar results may be found depending on the relative importance of each of the other design factors we investigated, i.e. IVA appearance and pointing gesture type. Future studies of controlled experiments would be required to understand the effect of individual factors and potentially associate the result with the display factor.
202
+
203
+ § 5.1.3 IVA APPEARANCE
204
+
205
+ Differences in appearance between RP and IVA, such as gender, realism, and eyes, may have influenced participants' perception. Prior research on user preferences for agents' gender presents contradictory findings and trends, which may be due to user characteristics or context [51]. Regarding realism, four participants reported that RP felt more familiar and common. Two participants commented that the IVA's bigger eyes were helpful for judging direction. In contrast, three participants reported that RP's eye gaze cue was subtle, with one indicating that it was even harder to discern changes in the horizontal direction. Moreover, two participants said that they tried to avoid eye contact with RP, while there was no such concern with IVA. In addition, previous research showed that users exposed to images of animals with baby schema were more physically tender in their motor behavior and performed better on a task demanding extreme carefulness [56]; the baby schema of the IVA might have had some effect on participants' performance. Future studies could determine the extent to which each aspect contributes to pointing perception.
206
+
207
+ § 5.2 DISTANCE
208
+
209
+ Not surprisingly, participants perceived pointing more accurately when targets were near rather than far, in both SameRet and SameDis. With the same target area, a farther distance results in subtler angular changes for all the pointing cues (head, eyes, and hand). Three participants also commented that it was hard to extend the arm line to locate the target when farther away. However, despite the higher difficulty at farther distances, our IVA could still point more accurately than the real person, indicating the effectiveness of our IVA design. This also suggests that users can tell where an IVA is pointing across a range of distances. Practically, this implies that an IVA used as a home assistant or a virtual tutor can be situated in a single location and still point to near and far objects while indicating "It's over there," providing deictic indications to users.
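+
+ A quick calculation illustrates how much subtler these angular changes become; the 10 cm shift is an arbitrary example, and the distances are the participant-to-target values from the design.
+
+ ```python
+ import math
+
+ # Illustration (not from the paper): the same 10 cm target shift subtends
+ # a much smaller angle at the far distance than at the near one.
+ shift = 10.0                      # cm of target displacement
+ for dist in (70.0, 210.0):        # near and far target distances (cm)
+     angle = math.degrees(math.atan(shift / dist))
+     print(f"{dist:.0f} cm: a {shift:.0f} cm shift subtends {angle:.1f} deg")
+ ```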
210
+
211
+ § 5.3 VIEWING CONDITION
212
+
213
+ We introduced the Viewing condition as a between-subjects factor due to the size difference between IVA and RP. Our main finding, that the IVA affords better pointing perception than RP, holds in both SameRet and SameDis; incorporating the Viewing condition in the study design therefore helps to validate our results. We also found that the Viewing condition plays a role in pointing perception, given the interaction effect Viewing × Pointer (Figure 5(d)). Adjusting the distance between RP and the participant leads to different retinal sizes and causes the difference in vertical error between SameRet and SameDis for RP. Note that the difference appears only in the vertical error and not in the vertical error bias, indicating that changing the Viewing condition did not introduce systematic bias but affected the precision of pointing perception. One possible explanation is that there exists an optimal viewing distance and retinal size for perceiving pointing direction from the pointer's posture. Our study focused on the difference between pointers and evaluated one fixed distance or retinal size; future studies are needed to investigate the potential effects of different viewing distances and retinal sizes on pointing perception.
214
+
215
+ Participants' comments on the size difference were quite divergent. In SameDis, where the retinal size of the IVA is approximately half that of RP, 5 out of 18 participants commented that the IVA's pointing was easier, explaining that the smaller size of the IVA allowed them to perceive more noticeable changes in eye, hand, and head orientation. Conversely, 4 out of 18 participants found RP easier, commenting that RP's life size made the pointing more natural to perceive. Comments in SameRet were similarly divergent: four participants preferred the IVA's smaller size, whereas two preferred RP's life size. While future studies can quantify individuals' sensitivity to this factor, we note that, from a practical perspective, our study suggests there is unlikely to be a one-size-fits-all solution for the size and visual representation of an IVA. Allowing users to tailor their IVA's appearance would thus be advisable.
216
+
217
+ § 6 DESIGN IMPLICATIONS
218
+
219
+ The main design implication of our study is that, with a suitable set of design factors, it is feasible to have an IVA point with accuracy comparable to a real person. In our IVA design, we used a spherical FTVR display, rendered a 3D cartoon IVA with human-like behaviors, and applied arm-vector pointing instead of eye-fingertip alignment, which collectively contribute to our IVA's high pointing accuracy. As the appearance and pointing gesture strategy do not depend on the display factor, we expect these design choices can be considered for other display devices. These findings serve as a foundation for designing IVAs that point to the physical world accurately and provide pathways for future studies to precisely quantify the relative contribution of each factor.
220
+
221
+ We also suggest providing more cues for perceiving pointing to objects farther away. According to our results, the accuracy of pointing perception decreased significantly when participants were farther from the target; visual cues such as the orientation of the head, hand, and eye gaze may not suffice to accurately indicate the target. Additional verbal cues, such as a location or feature description, should be considered to convey the pointing direction efficiently, which better resembles human pointing behaviour. For example, an IVA could combine a verbal description, e.g., "it's on the table over there," with a pointing gesture. A future study could investigate natural communication mechanisms combining voice and deictic gestures.
222
+
223
+ § 7 LIMITATIONS AND FUTURE WORK
224
+
225
+ We discuss four limitations of our work along with the opportunities they present for future research. First, we hired a single RP as the baseline pointer. Without explicit instructions, the RP pointed naturally with the eye-fingertip alignment commonly found in natural human pointing [7, 30]. Our primary goal was to establish that users can perceive where a carefully designed IVA is pointing. While the RP baseline illustrated some potential avenues for quantifying the differences in pointing between IVAs and real people, our study was not designed to do so; for example, natural human pointing exhibits a range of gesture variations and strategies. Future studies could recruit multiple RPs spanning a range of strategies, which would help establish the robustness of IVA pointing relative to human pointing and define lower and upper bounds on IVA/RP differences, providing additional insight into different design approaches for IVA pointing gestures and appearance.
226
+
227
+ Second, despite providing head tracking and depth cues, FTVR displays still have technical and perceptual limitations, e.g., lower resolution and fewer depth cues than reality. These constraints may affect participants' accuracy. Two participants pointed out that the IVA lacked depth information (e.g., shadows and lighting). However, their quantitative data still show higher accuracy for the IVA than for the RP baseline, indicating that our display's constraints did not appear to have a notable negative impact on participants' performance. The effect of display quality on the perception of pointing should be identified in further user studies.
228
+
229
+ Third, the design of an IVA involves many factors. In this paper, we focused on a situated spherical FTVR display, a cartoon IVA appearance, and arm-vector pointing gestures, and demonstrated that these factors were sufficient for the IVA to point with accuracy comparable to a real person. Future work will focus on controlled experiments on each design factor to quantify its effect and the degree of individuals' sensitivity to the cues we observed. For example, to precisely quantify the effect of eye-fingertip alignment, an IVA pointing with eye-fingertip alignment could be compared with the current design. By studying different pointing configurations, we can create a set of configurable IVA characters that individuals can personalize to optimize their interactions with the IVA.
230
+
231
+ Last, gesture and language are highly integrated components of interpersonal conversation [8, 23, 44]. Our study provides a foundation for designing IVAs that can point accurately into the real world. However, during a conversation, people do not rely on pointing gestures exclusively [7]; they typically rely flexibly on gestural or verbal means [6]. A future step will therefore concentrate on the role of pointing gestures combined with verbal cues in establishing joint attention with the IVA.
232
+
233
+ § 8 CONCLUSION
234
+
235
+ In this paper, we proposed an IVA with design factors including a situated display, appearance, and pointing gesture strategy to investigate whether it is possible to have an IVA point accurately into the real world. Using a spherical FTVR display, we conducted a study measuring the IVA's pointing accuracy against a natural human pointing baseline. In the study, the IVA's pointing accuracy was determined by having participants estimate where they perceived the IVA to be pointing in the real world. Participants also estimated where a real person was pointing using the same experimental setup, enabling comparison and discussion of the different design factors.
236
+
237
+ Our results show that participants perceived the IVA's pointing into the real world with accuracy comparable to the real person. Specifically, the IVA outperformed the real person in the vertical dimension and yielded the same level of accuracy horizontally. We discussed the design factors that likely contributed to the IVA's pointing accuracy and suggested directions for future studies on providing accurate pointing perception. Our results for the human pointing baseline are consistent with previous literature, showing that participants mainly focus on the pointer's hand, which leads to a bias when interpreting a real person's pointing direction. In particular, participants exhibited a systematic upward bias in the vertical dimension when perceiving the human pointer, which we suspect is due to the ambiguity of the eye-fingertip alignment commonly employed by people when they point in the real world. The IVA design's use of arm-vector pointing helps improve its pointing accuracy.
238
+
239
+ As voice and visual interfaces for home assistants and other digital assistants are becoming commonly used in daily life, an embodied IVA that can provide gesture cues is expected to enable a more human-like interaction. We demonstrated that a well-designed 3D visual representation of an IVA can be endowed with the capability to point to the real world with comparable accuracy to a real person. Our work shows how an IVA rendered in a 3D display can provide effective pointing gestures, which could be used in conjunction with a voice interface for natural communication bridging the virtual and the real world.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/HzgpxFETf5/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,449 @@
1
+ # Tutorials for Children by Children: Design and Evaluation of a Children’s Tutorial Authoring Tool for Digital Art
2
+
3
+ ## Abstract
4
+
5
+ Digital art tools allow children to express their creativity and can help them develop important skills. There are numerous software tutorials available to help teach and inspire digital art enthusiasts; however, most are authored for and by adults. Given that children are increasingly contributing digital content online, in this paper we investigate a tutorial authoring design concept in which children can capture their drawings and information about their process, with the long-term objective of allowing children to share both their creativity and their workflows with other children. Through participatory design sessions, prototyping, and an evaluation, we explore children's attitudes towards the creation of digital art tutorials, focusing on their perceived incentives to author such tutorials and how they feel about the concept of sharing their tutorials with other children. We also elicit reactions towards specific design elements. Our findings suggest important considerations for tools designed to motivate and support children's creation of digital art tutorials.
6
+
7
+ Keywords: Digital art, Drawing, Tutorial authoring system, Sharing workflows, Child-computer interaction, Peer-based learning.
8
+
9
+ Index Terms: H.5.2 [Information Interfaces and Presentation (e.g., HCI)]: User Interfaces; H.5.m [Information Interfaces and Presentation (e.g., HCI)]: Miscellaneous-User studies, Participatory design
10
+
11
+ ## 1 INTRODUCTION
12
+
13
+ Art is a common way for children to express themselves. Engaging in art and creativity is considered a productive use of children's time, by promoting social, emotional, motor, and cognitive development [4, 44], providing a sense of accomplishment, and boosting self-esteem [37]. Digital art tools allow for new effects, many of which are not possible with physical drawing tools. To inspire children to create digital art and connect with other art enthusiast peers, several digital art platforms provide child-centric areas for children to share their creations [69, 70].
14
+
15
+ While sharing digital art, many adult creators share not only their end products but also step-based instructions on how they used a particular feature-rich software tool to create them. In doing so, tutorial authors can both showcase their skills and creativity, and help others learn how to use feature-rich software tools to produce similar effects [38, 42, 52]. With these advantages in mind, prior research has contributed several tutorial systems and authoring tools to support this process [14,19,21,27,40].
16
+
17
+ Despite the potential advantages of creating and sharing digital art tutorials, and the fact that children are already actively sharing digital art online, research on tutorial creation tools has generally focused on adults. In addition to showcasing skills and creativity, generating tutorials for peers would provide children with the opportunity to take on the role of a tutor, which has been shown to help children learn to think from others' perspectives, grow their sense of responsibility [2], and foster self-acceptance [1]. Along with developing useful skills such as planning and communicating, enacting the role of a teacher while creating digital tutorials can provide children with a sense of ownership and purpose [47].
18
+
19
+ In this research, we explore children's attitudes towards creating digital art tutorials for their peers and how a tutorial authoring system might support them in doing so. Our investigation centres around the following research questions: 1) Are children interested in authoring drawing tutorials for other children while creating digital art? 2) What do they see as potential benefits or incentives? 3) How might a semi-automated tool support children in creating tutorials? 4) How do children use a semi-automated tutorial authoring system to communicate their digital art workflows?
20
+
21
+ To address our research questions, we used prototyping as a means of inquiry to elicit reactions and input from our target population. We first conducted a formative study with eight children (ages 6-11) using paper prototyping to evoke responses towards an initial tutorial authoring concept and to refine individual design elements. In a second study with 16 additional children (ages 7-11), we used a higher-fidelity prototype to further probe attitudes towards creating tutorials, as well as how children might use such a tool. Findings from our study suggested that many children are interested in creating tutorials, with perceived incentives ranging from altruism, to showcasing drawing skills, to documenting their workflows for their own recollection later. Children used the higher-fidelity prototype to generate a range of creative tutorials, indicating the potential of a semi-automated tutorial authoring system to support children's tutorial creation while producing digital art. Our findings also highlight considerations for child-centric authoring tools, such as the importance of balancing tutorial creation with drawing and providing scaffolding to help children annotate their tutorials.
22
+
23
+ The paper's contributions are as follows: 1) We present findings from two studies that illustrate children's attitudes and approaches to creating digital art workflows for their peers. 2) Through an iterative design and evaluation process with children, we provide insight into how an authoring system can support children in creating digital art tutorials.

## 2 RELATED WORK

In this section, we first discuss prior research on tutorial systems and tutorial authoring tools. We then briefly discuss previous research demonstrating the potential for children to create tutorials for their peers. Finally, we turn to research on children creating different kinds of online digital content.

### 2.1 Tutorial Systems and Tutorial Authoring Tools

Digital art is often created using complex software, which has been the focus of a large body of work on designing tutorials and other help systems to support its use [35]. For example, several studies have concentrated on generating image-based tutorials by capturing and visualizing users' operation history within an application [27,32,46]. There are also systems that automatically generate tutorials containing both workflow histories and videos of the operations [14,28]. Our work is informed by these prior authoring systems; however, whereas the above work has focused on adults, we specifically focus on a system to help children create tutorials, involving them in the design process.

Also relevant to our work are systems that assist users with digital drawing, for example, by providing guidance on how to attain certain effects or drawing elements [19,21,33,40]. Other work has focused on assisting children in applying tutorials by helping them locate relevant elements in the target software [30]. In our work, we focus on tools to support children in documenting their drawings and processes. As such, we see our work as complementary to, but distinct from, this prior work on helping users (including children) achieve greater drawing success.

Although we are not aware of prior work examining tutorial authoring tools for children, there are online platforms for sharing digital art and tutorials with a degree of child focus. For example, DragoArt [71] and DrawingNow [72] list some drawing tutorials targeted at children; however, the vast majority are authored by adults or staff illustrators. Our work focuses on involving children in the design process and on eliciting their reactions to creating tutorials for other children.

### 2.2 Children Tutoring Their Peers

Our work builds on previous research showing that children can author academic digital tutorials for their peers [47,48] and other teaching-oriented resources, such as educational games to teach other children [34]. Art tutorials differ from those investigated above in that they have the potential to focus more on creativity and inspiration than on teaching specific topics. For example, a child-generated math tutorial is created to help peers understand and review a mathematical concept [48], whereas a drawing tutorial might serve to inspire artistic creativity in others. Recent research showed the potential of a music learning app in which children recorded their piano pieces, along with tutorials on different practice strategies, and shared them in an online space to encourage and help their peers learn to play piano [11]. That research shares motivations similar to ours: inspiring creativity and supporting peer-based learning while enabling children to showcase their artistic competency.

Authoring content for other children can help children develop a variety of skills. For example, researchers have investigated the design of game-authoring tools for children [24,68], since game creation has the potential to develop narrative skills, improve critical thinking and computer and media literacy, and boost self-esteem [3,24,34]. Collaborative storytelling authoring tools [57,64] improve children's communication skills and writing abilities [57]. Motivated by these benefits, we explore children's attitudes towards a tutorial authoring system that allows them to create digital art tutorials for other children.

### 2.3 Children's Creation of Different Creative Digital Content

Our work is also inspired by prior work showing that children are interested in and capable of generating creative digital content with the purpose of sharing it with others. For example, online programming environments like Scratch [54] provide children with the opportunity to create their own interactive digital content, share ideas, collaborate, and communicate with like-minded peers [12,16,56]. Interactive digital storytelling platforms also allow children to practice creativity by generating imaginative stories and collaborating with others [5,8,26,31,41]. In another vein, online user-generated video-sharing communities like YouTube are becoming increasingly popular among children as a stage on which to exhibit their skills [67] and engage actively with their audience [43]. Digital art creation is yet another way for children to express their creativity. Hence, to support children's creation of digital art, researchers have studied children's cooperative drawing practices [58] and proposed tools to promote collaboration among peers [7,20]. Findings from these studies suggest that appropriately designed tools for creating digital content can provide children with the opportunity to express themselves [6,43,67], showcase their innovativeness [5,12,16,54], and inspire others to participate and collaborate [8,26,31,56]. These findings motivated us to investigate how children would approach a tutorial authoring system where they can create digital art tutorials as guidelines for their peers while showcasing their digital art skills.

## 3 Authoring and Sharing Workflows: General Approach

To generate insight into how children respond to the idea of documenting and sharing their digital art workflows, we used prototyping as a means of inquiry. Based on previous research showing the value of low-fidelity (lo-fi) prototyping in designing child-oriented applications [23,45,55,60,62], we started with a paper prototype, which we used to elicit initial reactions in a formative study. We then used insights from this formative study to develop a higher-fidelity prototype for a more detailed evaluation.

After exploring prior work on tutorial authoring [14,19,27,32,33,35,42,46], we used sketching to explore features that could facilitate a child's tutorial creation process. For example, we considered automatically capturing screenshots or videos of drawing steps (i.e., after each tool use or drawing modification), enabling children to capture their own steps while drawing, and allowing children to create tutorial steps later from a recorded video of their drawing process. In comparing alternatives, our goal was to keep the tutorial creation process simple, to provide some autonomy, and to avoid detracting too much from the fun of drawing.

After our sketching process and review of prior work, we settled on an initial design direction that allows children to capture information about their workflows while they are drawing. Based on prior work showing that most tutorials follow a step-based structure [27,42], our tutorial authoring approach assists the child in recording and documenting the individual steps of their drawing. In our approach, the child decides when they are ready to save a step, with the prototype capturing the image and information about the tools used during that step. To help children communicate information about their drawing to others, we let them attach comments or tips to their steps, since prior work suggests that instructions combining images and text are more useful than those that rely on either in isolation [27]. Finally, we wanted to include a review component, where the child could modify their tutorial before saving and/or sharing it. Our current design does not include video demonstrations; we made this decision based on previous research indicating that navigating video or animations can be complex and time-consuming [27,46]. Adding video elements could be explored in future research.
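
To make this step-based capture model concrete, the following sketch shows one way the recorded data could be structured. It is a minimal illustration only: the type and function names (`TutorialStep`, `Tutorial`, `saveStep`) are our own and are not taken from any existing system.

```typescript
// Illustrative sketch only: one possible data model for the step-based
// capture approach described above. All names here are hypothetical.

interface TutorialStep {
  image: string;        // snapshot of the canvas at this step (e.g., a data URL)
  toolsUsed: string[];  // tools the child used since the previous step
  comment?: string;     // optional tip the child attaches to the step
}

interface Tutorial {
  title: string;
  steps: TutorialStep[];
}

// Called when the child decides they have reached a step in their workflow.
function saveStep(
  tutorial: Tutorial,
  canvasSnapshot: string,
  toolsUsed: string[],
  comment?: string
): void {
  tutorial.steps.push({ image: canvasSnapshot, toolsUsed, comment });
}
```

Keeping the step record this small mirrors the design goals above: a step is just an image, the tools used, and an optional tip, so saving one remains a lightweight interruption to drawing.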

Our initial target audience for this approach was children 6-11 years old. We targeted this range to cover children who can think logically and make independent decisions (ages 6-10) [18] and who can reason inductively and think from others' perspectives (ages 7-11) [51]. We later refined our target age range to 7-11, based on observations from our formative study.

### 3.1 Low-fidelity Prototype

To explore the general authoring approach described above with children, we created a low-fidelity prototype: a paper-based template for a tutorial authoring and display system with slots for each step in the tutorial (Figure 1). The steps are determined by the child while they are drawing: when they feel that they have reached a step in their workflow, an image of that step is added to the next available slot. Each step also includes sticky notes for the tools used and any comments the child provides. Figure 1 illustrates a complete workflow created by a child with our prototype.

![01963e7a-a3c9-730b-93cf-cc658cbc9c58_2_309_145_1179_756_0.jpg](images/01963e7a-a3c9-730b-93cf-cc658cbc9c58_2_309_145_1179_756_0.jpg)

Figure 1. Low-fidelity prototype. The workflow was generated by a 9-year-old girl in our formative study; the prototype depicts a series of steps in the drawing, as defined by the participant. (A) Each box contains a picture of the image that the participant generated using the drawing program for that step. (B) The icon at the bottom-left corner of each box indicates which tool was used during that step. (C) The participant also provided tips or comments on pieces of paper and attached them to her captured steps.

A challenge we faced while paper prototyping was simulating the tools of a digital drawing application (e.g., colour effects, undo/redo, copy and paste) on paper in a way that would be engaging for children. Instead of drawing on paper, we therefore let children draw using Microsoft Paint, which meant that we needed a way to transfer different states of their drawing to the paper prototype. We used a camera and a Polaroid printer for this purpose, capturing the image on the screen and quickly printing a photo to attach to the paper prototype. This enabled a child to work with compelling digital drawing tools while still retaining the advantages of paper prototyping for eliciting design feedback.

## 4 FORMATIVE STUDY

We used our lo-fi prototype in a formative study to elicit initial reactions from children on the idea of sharing digital art along with how they made it. To design appropriate child-oriented technology, prior research has recommended involving children in the design process by adopting and extending participatory design methods [10,11,17,29,41,59]. Inspired by this body of research, we conducted participatory design sessions with the children to refine our authoring system concept. Throughout these sessions, we encouraged the children to share their ideas while interacting with our low-fidelity prototype.

### 4.1 Participants

We recruited 8 participants (5 girls, 3 boys), 6-11 years old, with previous experience in digital drawing, through snowball sampling [25] and by placing advertisements throughout our university campus. We also asked a parent to participate in the study so that we could interview them regarding any concerns they might have. After receiving written consent from the parents and verbal assent from the children, we initiated the study sessions. We informed each child that they could withdraw from the study at any time. In appreciation of their time and participation, the children received a small toy of their choice and the parents received $15 in cash. The study was approved by our research ethics board.

Of the eight participants, the two six-year-olds had difficulty grasping the idea of capturing the workflows of their digital art. The remaining six older children (ages 7-11) seemed to understand the concept and were therefore in a better position to provide concrete feedback. We report on findings from these six children in the 7-11 age range. We also used these observations to adjust the target age range for our second study.

### 4.2 Study Tasks, Procedure, and Data Collection

The study was conducted in a research laboratory (pre-COVID) with one participant at a time. As per our institutionally approved protocol, the child's parent was also present during the entire session. The main tasks in these sessions involved the child creating a digital drawing while a researcher helped them capture their drawing steps. Using the lo-fi prototype, the child then worked with the researcher to craft a tutorial.

To help the children understand the context of use of our prototype, we started each session by demonstrating a storyboard prototype (see the Supplementary Material). The storyboard introduced the idea of creating a tutorial by depicting a scenario in which a child creates a tutorial and shares it with her friends to illustrate her drawing workflow. Next, we asked the child a few interview questions about their thoughts on sharing their drawings and workflows, seeing others' drawings, and following others' workflows. We then showed the child a PowerPoint prototype to demonstrate what capturing steps of their drawing might look like, before asking the child to draw using MS Paint.

After we had introduced the tutorial concept, we asked the child to create a drawing to include in a tutorial. We encouraged the child to tell us when they were ready to create a step, at which point we took a picture of their screen with our camera.

When the child was done with the drawing, we printed the captured photos, and the child and the researcher began pasting the photos onto the prototype. We had a set of small sticky notes with icons of different drawing tools that the child could attach under each step. They could also write tips and comments on pieces of paper and attach them to the steps. We asked them what they liked and did not like about the prototype, what they would want to change, and what other information they thought might be useful for another child wanting to follow their tutorial. During this process, we encouraged the participant to draw and sketch on the paper prototype to demonstrate their design ideas.

We concluded each study session by interviewing the child's parent about any concerns they might have regarding children's sharing of digital art and workflows. Each session lasted approximately one hour.

We collected data from our participatory design sessions, the semi-structured interviews with the children, and the short semi-structured interviews with their parents. We video-recorded the participatory design sessions and audio-recorded the interview sessions, which we transcribed and analyzed using open coding to identify participants' views (both positive and negative) towards our tutorial authoring approach and specific design ideas. While qualitative analysis does not necessarily yield counts, we felt that the boundaries between participant views were clear enough to include counts in our reporting. We do so to give a sense of how prevalent certain sentiments were in our data.

### 4.3 Findings

#### 4.3.1 Feedback from the Children

Upon being asked whether they would like to share their drawings with others, most participants (5/6) expressed enthusiasm for the idea of sharing drawings and workflows, both to showcase their drawing skills and to help others attempt to recreate their drawings.

Then someone can do that too and then they'll be happy too. - P3 (7-yr girl)

Only one participant was hesitant to share his drawing because he felt that it was not good enough, suggesting a lack of confidence.

All six participants were interested in seeing other children's drawings. They found this concept entertaining and thought it would help them generate ideas. All participants also expressed interest in seeing the workflows behind these drawings, feeling it would help them recreate a particular drawing they liked.

Once my friend Danny, she drew a really cool thing like a girl, and I was like how did you do that?! I would like to try that. - P6 (9-yr girl)

From our participatory design sessions, we observed that all six children who were 7-11 years old understood what the steps in a workflow are. All six liked the sequential way of displaying the steps shown in Figure 1. They also found the icons of the tools associated with each step helpful. They believed the display of the workflows was simple and intuitive enough for other children to understand the drawing process.

I like this because if you are reading a book, you'll go like this. - P3 (7-yr girl)

All participants created multiple steps to illustrate their drawing process. They did not hesitate to tell us when to capture a photo of the drawing to make it a step. Sometimes, however, when concentrating intently on their drawing, a few children (3/6) forgot to capture some of their steps. To tackle this, one participant suggested showing reminders. Participants did not, however, want the system to capture steps without their permission - they wanted to remain in control.

In terms of annotating their workflows, while most children were reluctant to write comments at the beginning, everyone attached at least one comment. Examples of their comments included: "Don't try to use pencil for this one", "Careful, this might be the hardest part!", and "Now you're done. Great job!". One participant mentioned that having the option to write comments while saving the steps would be more beneficial, as they might think of a comment while drawing a particular step and forget about it later.

#### 4.3.2 Feedback from Parents

In general, the parents were not concerned about their children sharing their drawings online (6/6). Their main concern was that appropriate parental controls be in place to control what children are sharing and with whom (3/6). A concern more specific to this sharing domain was that children sharing art tutorials might affect their creativity negatively if they always try to follow others' instructions (2/6). On balance, the parents tended to feel that the opportunity to learn to draw from other children would have positive effects (4/6):

Sometimes learning to do something somebody else's way can kind of encourage you and give you ideas for how to do something your way. I don't think it'll stifle her creativity as long as she has time and space to do her own things too.

To summarize, in response to our first research question of how children would feel about the idea of creating tutorials to share with other children, our formative study provides preliminary evidence that our participants were generally positive about the idea. We did, however, see some hesitance that might be attributed to lower confidence, warranting further study with a larger sample. In terms of our specific design approach, which borrows elements from adult-oriented tutorial authoring systems (e.g., sequentially displayed steps, commenting), our participants were generally comfortable with the main interaction style and provided feedback on how to further improve it to meet their needs (e.g., step capture reminders and more flexible commenting). The parents responded positively to the idea of their child sharing their drawings with others, provided proper parental controls were in place.

## 5 Developing a Higher-Fidelity Prototype

In the next phase of our research, we converted our lo-fi prototype into a higher-fidelity one by incorporating the children's feedback from the formative study. In creating the higher-fidelity prototype, our goal was to use it as a means of further inquiry [63]. Specifically, we wanted to use this prototype to gain more detailed insights into how children might respond to our tutorial authoring approach. To facilitate our prototype development, we used a mix of automated capture and Wizard-of-Oz techniques.

Our higher-fidelity prototype (Figure 2) allows a child to generate a tutorial while drawing digital art by enabling them to self-capture, annotate, and edit their drawing steps. Our prototype currently works with JS Paint [49] (Figure 2A), an open-source drawing program. While using this program, when the child chooses to capture a step by clicking the "Save Step" feature (Figure 2A), the prototype automatically records the current state of the drawing as well as the tools used as part of that step. The prototype also allows the child to add a comment when saving a step (Figure 2B). This design decision was based on feedback from our formative study that some children preferred to write comments while working on the drawing to avoid forgetting them. During our formative study, we also observed that, when concentrating on their drawing, participants sometimes forgot to save steps, which they later regretted. Our prototype therefore prompts the child to save a step at regular intervals. These prompts are currently controlled via a wizarding interface, which allows a facilitator to issue a reminder if the child appears to be forgetting to save their steps.
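
As a rough illustration of how the "Save Step" feature and the wizard-issued reminders could fit together, the sketch below models a hypothetical step recorder. The names and structure are assumptions for exposition only, not the prototype's actual JS Paint integration.

```typescript
// Hypothetical sketch of the "Save Step" flow and the wizard-controlled
// reminder; names and structure are our own illustration, not the
// prototype's actual code.

type Step = { image: string; toolsUsed: string[]; comment?: string };

class StepRecorder {
  private steps: Step[] = [];
  private toolsSinceLastStep = new Set<string>();

  // Tool selections in the drawing program would be forwarded here.
  noteToolUse(toolName: string): void {
    this.toolsSinceLastStep.add(toolName);
  }

  // Invoked when the child clicks "Save Step".
  saveStep(canvasSnapshot: string, comment?: string): void {
    this.steps.push({
      image: canvasSnapshot,
      toolsUsed: Array.from(this.toolsSinceLastStep),
      comment,
    });
    this.toolsSinceLastStep.clear(); // the next step starts a fresh tool list
  }

  getSteps(): readonly Step[] {
    return this.steps;
  }
}

// Wizard side: a facilitator can trigger a gentle reminder if the child
// appears to be forgetting to save their steps.
function issueReminder(showPrompt: (message: string) => void): void {
  showPrompt("Don't forget: you can save a step whenever you're ready!");
}
```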

![01963e7a-a3c9-730b-93cf-cc658cbc9c58_4_231_154_1330_723_0.jpg](images/01963e7a-a3c9-730b-93cf-cc658cbc9c58_4_231_154_1330_723_0.jpg)

Figure 2: Higher-fidelity prototype: (A) A "Save Step" feature has been added to JS Paint. When clicked, the prototype captures the progress of the drawing along with the tools used for that step. (B) The child can optionally choose to provide a comment with the step. (C) Upon completion of the drawing, the prototype displays the captured steps sequentially, along with the associated comments and tools used. Children can edit both the comments and the tools.

After the child completes their drawing, the prototype displays an automatically generated step-based tutorial in an HTML page that the child can open, as shown in Figure 2C. This tutorial displays the sequence of steps captured by the child and includes information on the tools used as well as any comments the child provided while drawing. Children can further modify the tutorial by editing comments, deleting unnecessary tool information, and deleting entire steps. After they finish editing, the prototype displays the final version of the tutorial so that the child could potentially share it with other children (e.g., friends). We leave investigating tools for sharing these captured tutorials to future research. In the Supplementary Materials, we include a short video walkthrough of the prototype.
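
The sketch below illustrates one way such a step-based HTML tutorial page could be generated from the captured data. The function and field names are hypothetical; the prototype's actual markup is not published.

```typescript
// Illustrative sketch of generating a step-based HTML tutorial page from
// captured steps; all names are hypothetical.

type Step = { image: string; toolsUsed: string[]; comment?: string };

function renderTutorialHtml(title: string, steps: Step[]): string {
  const stepSections = steps
    .map(
      (step, i) => `
  <section class="step">
    <h2>Step ${i + 1}</h2>
    <img src="${step.image}" alt="Drawing after step ${i + 1}" />
    <p>Tools used: ${step.toolsUsed.join(", ")}</p>
    ${step.comment ? `<p class="comment">${step.comment}</p>` : ""}
  </section>`
    )
    .join("\n");

  return `<!DOCTYPE html>
<html>
  <head><title>${title}</title></head>
  <body>
    <h1>${title}</h1>
    ${stepSections}
  </body>
</html>`;
}
```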

## 6 FURTHER CONCEPT EXPLORATION & PROTOTYPE EVALUATION

Our formative study provided initial indications that children are open to the idea of generating tutorials for other children. In this second study, we used our higher-fidelity prototype to investigate the incentives children might have to generate a tutorial for others. We were also interested in observing how they use the prototype, including how they balance tutorial generation with focusing on their own art, how they decompose their drawings into steps, and what types of comments they leave for other children.

Due to the COVID-19 pandemic, we transitioned to an online study, where we interacted with participants using video conferencing software.

### 6.1 Participants

We recruited 16 participants (8 girls, 8 boys), all 7-11 years old (mean age: 9.5), through word-of-mouth and snowball sampling [25]. Given that the study was conducted online, we were able to recruit internationally: 9 participants from Canada, 6 from the US, and 1 from Bangladesh. We again recruited participants with previous experience in using digital tools to draw. Participation was voluntary, and the children were informed that they could withdraw from the study at any time. After receiving written consent from the parents and verbal assent from the children, we initiated the study sessions. The parent's presence on video was optional, based on the comfort level and preference of the child. In appreciation of their participation, each family was provided with $20 in cash or as a gift card. The study was approved by our institutional research ethics board.

### 6.2 Study Tasks, Procedure, and Data Collection

To conduct the study remotely, we used video conferencing software with the parent's supervision. To enable the facilitator to act as the prototype "wizard", we used TeamViewer, which allowed the participants to access the facilitator's computer screen directly. This also meant that participants did not have to install any other software to run our prototype. Each study session was approximately 60 minutes long.

As in our initial formative study, we began by showing the child a storyboard (see the Supplementary Material) to introduce them to the idea of tutorial authoring, and asked them some preliminary questions regarding their attitudes towards sharing their drawings and/or workflows. After that, the facilitator demonstrated the prototype by creating a short tutorial.

Next, we asked participants to perform the following two tasks: 1) We asked the participant to draw something of their choice, capturing their steps while drawing, and told them that they could provide comments with each step if they wanted to. 2) After the child completed their drawing, we asked them to use the prototype to review the generated tutorial and make any desired modifications. We also showed the participant another tutorial of a simple drawing to get preliminary feedback on how they might feel about using others' tutorials.

After completing each task, we asked the child open-ended questions about their experience of using the prototype. We intermixed the interviews and tasks to create a more conversational atmosphere with the child, as well as to provide breaks from using the prototype. In piloting, we found these breaks to be particularly important given that the study was online. We also asked survey questions by adapting the Fun Toolkit survey technique [53], which has been used in previous studies with children to evaluate interface usability. Using the toolkit, we asked 10 closed, fixed-response questions covering: i) how they felt about using the prototype's features; ii) which task they liked most; and iii) whether they would like to do each task again. The questionnaire items can be found in the Supplementary Materials. Participants completed the surveys on the facilitator's computer (using TeamViewer).

Our main source of data was the semi-structured interviews conducted throughout the study. We recorded the entire study sessions using a screen recorder to capture the interactions with the prototype. Finally, we used the surveys to elicit structured data on children's experiences with the prototype.

### 6.3 Findings: Tutorial Creation

Most of our participants (12/16) were familiar with the concept of a tutorial prior to the study, and all successfully generated a step-based tutorial using the prototype. Participants' drawings included flowers, unicorns, nature scenery, ships, and favourite Lego characters. Figure 3 and Figure 4 show two example tutorials created by participants in the study (one by a 7-year-old girl and one by an 11-year-old boy).

The median number of steps generated by the participants was 7 per tutorial (min: 4; max: 10; IQR: 2). For most participants, each new element added to the drawing constituted a step. Because the formation of a step was conceptual and tied to the elements of a child's drawing, implementing automated step capture would be challenging. For example, simply creating a step for each tool used would have resulted in tutorials with much coarser granularity than those created by our participants.

Almost all participants (15/16) provided comments with their steps. The median number of comments per tutorial was 6.5 (IQR: 3), and 14 participants provided a comment with each step. Comments often described the drawing element in the step, e.g., "the ocean", "Lego arms", "moon". Some participants provided more detailed or specific instructions in their comments, e.g., "You first make a cube-like structure", "Make a hill and colour on top", "Add texture to the grass", "Add any of your imaginary details you like". Notably, most of the comments did not focus on how to use particular features of the drawing application. Overall, we did not observe any age differences in commenting style or informativeness.

Our survey findings indicated that all 16 participants felt positive about creating steps and viewing others' tutorials, and 14/16 also felt positive about writing comments. Additionally, we found that 10/16 participants wanted to create a tutorial again; the remaining 6/16 indicated that they might be interested in doing so. However, 12/16 participants did not like editing their tutorials once they had completed the drawing. This potentially supports our idea of capturing and generating steps while the child is creating the drawing.

During the study, we also looked for indications of how children approached the task of authoring tutorials while drawing. We observed that participants did not take time to plan out their tutorials before starting to work on the drawing. Instead, they seemed to go with the flow, adding elements as they saw fit in the moment. Participants worked intently on their drawings and tutorials, suggesting that they cared about the final product. Six participants were so engrossed in working on their tutorials that we had to cut them off due to time limitations.

![01963e7a-a3c9-730b-93cf-cc658cbc9c58_5_187_1177_1437_888_0.jpg](images/01963e7a-a3c9-730b-93cf-cc658cbc9c58_5_187_1177_1437_888_0.jpg)

Figure 3: Tutorial authored by a 7-year-old girl (P16)

![01963e7a-a3c9-730b-93cf-cc658cbc9c58_6_152_152_1504_720_0.jpg](images/01963e7a-a3c9-730b-93cf-cc658cbc9c58_6_152_152_1504_720_0.jpg)

Figure 4: Tutorial authored by an 11-year-old boy (P6)

### 6.4 Findings: Interviews

Since we had a larger sample and a larger volume of data, we applied a more rigorous qualitative analysis procedure than in our formative study, where the goal was to gather preliminary design insights. We transcribed the interview data and then analyzed it using a bottom-up inductive approach, creating affinity diagrams to identify themes in the data [9]. One researcher initially applied open coding [15] to the quotes and then used affinity diagramming to refine the initial set of codes. This same researcher then clustered related quotes and performed axial coding [61] to identify themes. Two researchers collaboratively iterated on the raw data, clusters, and codes until clear themes emerged.

From our analysis, themes emerged relating to incentives for creating and sharing, and attitudes towards child-authored tutorials, which we present below. The data collected from this study also contained insight into how different features of our prototype might support tutorial creation. To contextualize the quotes, we provide each participant's age and gender. As with our formative study, we report counts to give a sense of the prevalence of the sentiments within our data, while acknowledging that doing so is a contentious issue within qualitative HCI research.

#### 6.4.1 Perceived Incentives and Deterrents to Create and Share Tutorials

Table 1 summarizes the reasons children provided for and against the idea of creating and sharing tutorials, addressing our second research question. All 16 children provided at least one reason in favour, with some providing multiple reasons; five children expressed mixed views. We elaborate on their reasons below.

Altruism: The main incentive to create and share tutorials for most participants (14/16) was altruism. There were some nuances, however, in how children expressed their desire to create tutorials as a way of helping others. For example, some children (4/16) wanted to help people in general by sharing their tutorials, whereas others specifically wanted to help their friends (3/16). In terms of why they wanted to share, participants were motivated to give other kids new ideas for drawing, and felt that their tutorials could help other kids create the drawings easily:

Table 1: Reasons for and against creating and sharing tutorials, along with the number of participants who felt this way

| Reason | Participants |
| --- | --- |
| **Reasons for creating and sharing tutorials** | |
| Altruism | 14 |
| Assessing own tutorial authoring skills and seeking validation | 12 |
| Showcasing drawing skills | 6 |
| Keeping a record of their own drawing | 5 |
| **Reasons for hesitating to share tutorials** | |
| Lack of confidence | 5 |

I'd like to show my friends so that they can get an idea of what to do next when they draw again and also, I can show them a few steps about how they can make it. ... I'd like it because it'd feel good. Like I'm helping people without even seeing them. - P2 (9-yr boy)

Other children (4/16) liked the idea of showing kids how they might draw something differently. For these participants, it seemed to be less about showcasing the final product and more about illustrating their process.

It's fun and lets other people learn how to draw something in another way. - P15 (10-yr girl)

A few participants (3/16) wanted to share their tutorial only if their friends specifically asked for it. They were not confident in their drawing skills and were shy about sharing their drawings with others unless someone needed them.

Assessing Own Tutorial Authoring Skills and Seeking Validation: Some participants (7/16) wanted to share workflows with others to assess their tutorial authoring skills. If others could reproduce or make a better version of their drawings by following their tutorials, they felt this implied that their tutorial was understandable and useful.

I'd just wanna see how good the steps were that I made, and if they ended up making it look more realistic. - P12 (11-yr boy)

Others (9/16) thought they would feel validated even just by having another child try their tutorial, since this would mean they had produced something interesting. Knowing that others were going to view and use their tutorials to create a drawing gave them the satisfaction that their art is appreciated by others and their effort is valued.

I'd like it because some kids like to draw, and I'd like it if they do this thing. I'd be happy too, to see that they used my tutorial. - P15 (10-yr girl)

Showcase Drawing Skills: Some of the children who seemed particularly confident in their art and drawing skills wanted to create and share their tutorials to showcase those skills (6/16). For these children, it seemed less about receiving validation and more about having an outlet to share their creativity with others.

If I'm proud of the artwork then I'd wanna show it to other people. So that they have an opportunity to try doing art and learn. - P14 (11-yr girl)

To Keep a Record of Their Own Drawing: Finally, some participants (5/16) wanted to create tutorials to keep a record for themselves, so that they could review it later to recreate the drawing. This indicates that even if a child is not comfortable sharing their tutorials with others, they can still create tutorials for themselves.

If I ever went back and reviewed it, it kinda leaves like a bookmark... Next time you can follow the steps again. - P6 (11-yr boy)

Lack of Confidence as a Deterrent to Sharing: Some participants (5/16) were hesitant to create and share tutorials because they believed that their drawing skills were not adequate for creating tutorials, even though we did not find their drawings to be noticeably worse than the other participants'. They were not confident that others would like their tutorials.

Some of them are better at drawing and I'm scared that they're gonna judge me. - P14 (11-yr girl)

P14 had mentioned earlier in the interview that she wanted to showcase her drawing skills by sharing artwork she is proud of. At the same time, she had reservations about sharing due to her lack of confidence. This indicates that some children might be conflicted about whether to share their tutorials.

#### 6.4.2 Feedback on the Design Approach

During our interviews, children provided feedback on our semi-automated tutorial authoring approach as well as on individual design elements.

Capturing Steps Was Intuitive but Can Divert Attention: Participants generally found saving steps while creating the drawing to be simple and intuitive (10/16). One participant mentioned that she got so accustomed to saving steps that she did it without even thinking about it.

At one point I kinda forgot that to save step (i.e., she was using the step-saving feature subconsciously). I kinda got used to saving the steps. - P14 (11-yr girl)

On the other hand, some participants (6/16) felt that saving steps distracted them from their drawing. They worried that pausing to save steps might ruin their flow and they might forget what they wanted to do.

I was kinda in a mood. I like focusing on what I'm doing instead of stopping and doing something else. - P16 (7-yr girl)

While an automated step capture feature could avoid this hassle, the challenge would be developing an algorithm that can detect the conceptual, element-based segmentation that children seem to employ when manually capturing steps.

Mixed Reactions Towards Writing Comments: Though all but one participant provided comments with their steps, only about half of those participants (7/16) explicitly discussed the value they saw in providing comments. They believed that comments could help others go through the steps and also help them remember what the steps meant if they wanted to review their own tutorials.

Writing comments is a good way to explain it because sometimes just looking at pictures doesn't make sense. - P14 (11-yr girl)

Some of the participants who were less enthusiastic about commenting (4/16) found it difficult to come up with appropriate comments. They indicated that it was sometimes hard to explain the steps the way they wanted.

Sometimes you have another way to say it in your head and it's complicated to put it in comments. - P15 (10-yr girl)

Thus, overall, we observed mixed reactions towards commenting: some were enthusiastic about writing comments; for others, it seemed to be a source of pressure. At a minimum, this supports our decision to make commenting optional. Future versions could explore ways to assist children who want to provide comments but struggle to verbalize their thoughts.

Tool Information Is Not Always Sufficient: The tool information provided with each step was seen as useful by most participants, who felt it gave a clear idea of which tools were needed to achieve a certain effect. However, some participants (6/16) wanted to provide more information about the tools they used. For example, in addition to the tool name and icon, some tools could carry more details, such as brush size or paint colour. Future versions could explore designs that include this additional information for certain tools.

#### 6.4.3 Attitudes Towards Following Other Children's Tutorials

In addition to gaining insight into children's incentives to generate tutorials, we hoped to gain initial insight into how children felt about being consumers of kid-generated tutorials. To keep sessions at a reasonable length, we showed participants a sample tutorial to elicit their opinions, but they did not have to follow a tutorial.

All children in the study responded positively to the idea of viewing another child's tutorial. The main reason for wanting to see others' tutorials was to gain new ideas and inspiration (11/16). Participants mentioned that they are sometimes unsure about what to draw or how to start, and are interested in seeing other ways to draw something. Participants also mentioned (7/16) that they can learn from others who are better at drawing by viewing their tutorials and by comparing drawings to find potential ways to improve. One child mentioned that she wanted to make the authors feel happy that someone had tried out their tutorial. Although they were willing to view others' tutorials, three participants were not enthusiastic about the idea of following them. They indicated that they did not like following instructions or wanted to draw something in their own way, with their own creativity.

## 7 DISCUSSION, LIMITATIONS, AND FUTURE WORK

Findings from our second study suggest that most of the children were interested in and capable of authoring drawing tutorials. The findings also shed light on children's perceived incentives to author and ultimately share their tutorials, which included helping their peers and other social incentives (e.g., seeking validation and showcasing skills). Some also wanted to maintain a record for their own purposes. We were surprised by the extent to which their motivations mirrored those found in prior work on adult populations. For example, altruism is an intrinsic motivator for adults who share their knowledge online [65]. Similar to the incentive of showcasing drawing skills, adults also author tutorials to showcase the workflows they find interesting [50]. Self-efficacy is another important consideration [66]: in our studies, we noticed that children's level of confidence in their drawing abilities seemed to affect their attitudes towards sharing.

Our findings indicate that a semi-automated tutorial authoring system can enable children to generate step-based tutorials. In terms of important design considerations, most children in our study responded positively to the idea of creating a tutorial while they were drawing. Further, they found the post-hoc modifications to be the least fun activity of the study session. This suggests that interleaving tutorial generation with the principal activity is a promising design direction. We saw that children wanted to control the granularity of their steps, but sometimes became so engrossed in the drawing activity that they forgot to do so. Adaptive prompts or automated step-capture features could potentially address this, but they would need to account for the characteristics and tendencies of the child artist. Our findings also suggest that children appreciated the ability to annotate their steps; however, some found it difficult to craft good comments. Future work could therefore consider ways to scaffold this process, for example, through sample comments or comment templates.

Children seemed interested in and open to the idea of using another child's tutorial; however, further study is needed to understand the relative advantages and disadvantages of child- vs. adult-authored tutorials for this type of creative activity. To get an initial sense of how adult-authored tutorials might compare to what we saw in our study, we selected a small sample of 16 adult-authored tutorials aimed at children from DrawingNow [72] and DragoArt [71] that fell into similar drawing categories. Based on this small, curated sample, we observed both similarities and differences. For example, both groups of tutorials had approximately the same number of steps and comments. On the other hand, the adult-authored tutorials tended to follow a structured way of drawing, starting with a workable frame to make the drawing process easier, whereas our participants took a less structured approach, allowing their drawings to move in creative directions. The children in our study seemed to focus more on their drawings, generating the tutorial as a by-product of that activity to share the process with others; however, this might be an artifact of our study design, which did not involve a dedicated tutorial planning phase. The child-created tutorials also involved more straightforward drawings and simpler comments than the adult-created ones, which might be easier for younger children to follow. Future research should investigate these differences in a more structured and systematic way, as well as how children experience the tutorials. For example, it is possible that adult-authored tutorials are better at teaching drawing skills and specific techniques, whereas children's tutorials might be more relatable and inspire creativity.

We conducted our second study online due to COVID-19 restrictions, which introduced some limitations. For example, participants were sometimes distracted by siblings, some experienced internet issues, and some parents had difficulties setting up the study. A recent study investigating online synchronous co-design with children during the pandemic also identified these factors as impacting children's interaction during study sessions [39]. While designing the online study, we had to be particularly mindful of session length due to video conferencing fatigue. For example, we had originally intended to have children try a previously created tutorial to elicit grounded data on their perceptions, a task that we eliminated after pilot tests. Despite the difficulties, we found that participants were as engaged, or even more engaged, in the interviews than they were in our initial lab-based study. We suspect that being in the familiar environment of their home helped make the children comfortable in expressing their thoughts.

Because the study was online and numerous COVID restrictions were in place, we recruited via snowball sampling beginning with the authors' personal contacts [25]. Given that one of the paper's authors is at an institution outside of their home country, this sampling technique resulted in participants from three different countries, which introduced diversity into our participant pool. On the other hand, diverse cultural backgrounds can impact interview responses [22] and could potentially have made our investigation less focused than it would have been with a more locally recruited population. While we did not see any noteworthy differences in how participants from the three countries approached tutorial authoring and sharing, future work should investigate the generalizability of our findings to a larger sample of children, both within and across cultures.

While building a child-centric sharing platform is beyond the scope of this work, the overlap between our participants' motivations for sharing their tutorials and prior results on adult tutorial sharing suggests opportunities to learn from prior adult-centric research on how to motivate sharing online. For example, positive voting and textual comments have been shown to encourage adults to contribute [13]. Future work can explore the extent to which these approaches could also encourage a range of children to share their digital art workflows online, or, conversely, whether new child-centric approaches are needed. In the future, a longitudinal study could enable us to investigate how the act of sharing one's art tutorials impacts a child's sense of self-accomplishment.

Future work could also explore alternative uses of this type of drawing-capture approach. For example, one child in our study proposed the idea of using the system to create an illustrated story with her friends. In addition to acting as a creative outlet, prior work in the domain of programming found that storytelling helped children learn programming concepts [36]. Finally, it would be interesting to explore the generalizability of our approach to other creative activities that involve complex software, such as 3D modelling for child-oriented makerspaces.

## 8 CONCLUSION

In this paper, we presented the participatory design and evaluation of a children's tutorial authoring system for digital art. Findings from our studies illustrate the potential for children to be engaged and motivated by this form of peer-based help and knowledge sharing, with potential applications to other domains (e.g., helping children create programming tutorials). Our approach is but one way to provide children with tools to share aspects of their creative process with others. Future work should explore new ways for children to communicate their digital art ideas and skills to their peers and connect with other children in positive online communities. Future work should also study the role of such communities in fostering important social skills.
288
+
289
+ [1] Vernon L. Allen. 2013. Children as Teachers: Theory and Research
290
+
291
+ on Tutoring. Academic Press.
292
+
293
+ [2] Vernon L. Allen and Robert S. Feldman. 1973. Learning through tutoring: Low-achieving children as tutors. Journal of Experimental Education 42, https://doi.org/10.1080/00220973.1973.11011433
294
+
295
+ [3] Yasemin Allsop. 2012. Exploring the Educational Value of Children's Game Authoring Practices: A Primary School Case Study. In Proceedings of the European Conference on Games Based Learning, 21-30.
296
+
297
+ [4] Joyce Ofosua Anim. 2012. The role of drawing in promoting the children's communication in Early Childhood Education. Early Childhood Education and Care: 1-66.
298
+
299
+ [5] Alissa Antle. 2003. Case Study: The Design of CBC4Kids' Storybuilder. In Proceedings of the 2003 conference on Interaction design and children, 59-68.
300
+
301
+ [6] Alissa Antle. 2004. Supporting children's emotional expression and exploration in online environments. Proceeding of the 2004 conference on Interaction design and children building a community - IDC '04: 97-104. https://doi.org/10.1145/1017833.1017846
302
+
303
+ [7] Y. P. Atencio, M. I. Cabrera, and L. A. Huaman. 2019. A Cooperative Drawing Tool to Improve Children's Creativity. In International Conference on Cooperative Design, Visualization and Engineering , Springer, Cham., 162-171. https://doi.org/10.1007/978-3-030-30949- 7
304
+
305
+ [8] Steve Benford, Benjamin B. Bederson, Karl Petter Åkesson, Victor Bayon, Allison Druin, Pär Hansson, Juan Pablo Hourcade, Rob Ingram, Helen Neale, Claire O'Malley, Kristian T. Simsarian, Danaë Stanton, Yngve Sundblad, and Gustav Taxén. 2000. Designing storytelling technologies to encourage collaboration between young children. In Proceedings of the SIGCHI conference on Human factors in computing systems., 556-563. https://doi.org/10.1145/332040.332502
306
+
307
+ [9] H. Beyer and K. Holtzblatt. 1997. Contextual Design: Defining Customer-Centered Systems. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
308
+
309
+ [10] H. Birch and C. Demmans Epp. 2015. Participatory design with music students: Empowering children to develop instructional technology. American Educational Research Association Annual Meeting.
310
+
311
+ [11] Heather Janette Spicer Birch. 2018. Music Learning in an Online Affinity Space: Using a Mobile Application to Create Interactions During Independent Musical Instrument Practice. Diss. University of Toronto (Canada).
312
+
313
+ [12] Karen Brennan and Mitchel Resnick. 2013. Imagining, Creating, Playing, Sharing, Reflecting: How Online Community Supports Young People as Designers of Interactive Media. In Emerging Technologies for the Classroom. Explorations in the Learning Sciences, Instructional Systems and Performance Technologies. Springer, New York, NY., 253-268. https://doi.org/10.1007/978-1- 4614-4696-5
314
+
315
+ [13] Langtao Chen, Aaron Baird, and Detmar Straub. 2019. Why do participants continue to contribute? Evaluation of usefulness voting and commenting motivational affordances within an online knowledge community. Decision Support Systems, 118: 21-32.
316
+
317
+ https://doi.org/10.1016/j.dss.2018.12.008
318
+
319
+ [14] Pei Yu Chi, Sally Ahn, Amanda Ren, Mira Dontcheva, Wilmot Li,
320
+
321
+ and Björn Hartmann. 2012. MixT: Automatic generation of step-by-step mixed media tutorials. Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology: 93-102.
322
+
323
+ [15] Juliet M. Corbin and Anselm Strauss. 1990. Grounded theory research: Procedures, canons, and evaluative criteria. Qualitative Sociology 13, 1: 3-21. https://doi.org/10.1007/BF00988593
324
+
325
+ [16] Shruti Dhariwal. 2018. Scratch Memories: A Visualization Tool for Children to Celebrate and Reflect on Their Creative Trajectories. Proceedings of the 17th ACM Conference on Interaction Design and Children: 449-455. https://doi.org/DOI 10.1007/s10705-010-9367-3
326
+
327
+ [17] Allison Druin. 1999. Cooperative inquiry: Developing new technologies for children with children. In Conference on Human Factors in Computing Systems - Proceedings, 592-599. https://doi.org/10.1145/302979.303166
328
+
329
+ [18] Jacquelynne S Eccles. 1999. The Development of Children Ages 6 to 14. Future of children 9, 2: 30-44.
330
+
331
+ [19] Jennifer Femquist, Tovi Grossman, and George Fitzmaurice. 2011. Sketch-sketch revolution: An engaging tutorial system for guided sketching and application learning. In Proceedings of the 24th annual ACM symposium on User interface software and technology, 373- 382. https://doi.org/10.1145/2047196.2047245
332
+
333
+ [20] C. Ferraris and C. Martel. 2000. Regulation in groupware: The example of a collaborative drawing tool for young children. In Proceedings Sixth International Workshop on Groupware. CRIWG 2000, 119-127. https://doi.org/10.1109/CRIWG.2000.885163
334
+
335
+ [21] Matthew Flagg and James M Rehg. 2006. Projector-Guided Painting. In Proceedings of the 19th annual ACM symposium on User interface technology, 235-244. https://doi.org/https://doi.org/10.1145/1166253.1166290
336
+
337
+ [22] Heléne Gelderblom and Paula Kotzé. 2009. Ten design lessons from the literature on child development and children's use of technology. In Proceedings of the 8th International Conference on Interaction Design and Children, 52-60. https://doi.org/https://doi.org/10.1145/1551788.1551798
338
+
339
+ [23] K. Derek Godinez, Pablo A. Alcaraz-Valencia, Laura S. Gaytán-Lugo, and Rocio Maciel Arellano. 2017. Evaluation of a Low Fidelity Prototype of a Serious Game to Encourage Reading in Elementary School Children. In Proceedings of the 8th Latin American Conference on Human-Computer Interaction, 1-4. https://doi.org/10.1145/3151470.3156640
340
+
341
+ [24] J Good and J Robertson. 2006. Learning and motivational affordances in narrative-based game authoring. In the Proceedings of the 4th International Conference for Narrative and Interactive Learning Environments (NILE), Edinburgh, 37-51.
342
+
343
+ [25] Leo A. Goodman. 1961. Snowball sampling. The Annals of Mathematical Statistics: 148-170.
344
+
345
+ [26] Timo Göttel. 2011. Reviewing children's collaboration practices in storytelling environments. In Proceedings of the 10th International Conference on Interaction Design and Children, 153-156. https://doi.org/10.1145/1999030.1999049
346
+
347
+ [27] Floraine Grabler, Maneesh Agrawala, Wilmot Li, Mira Dontcheva, and Takeo Igarashi. 2009. Generating photo manipulation tutorials by demonstration. ACM SIGGRAPH 2009 papers: 1-9. https://doi.org/10.1145/1531326.1531372
350
+
351
+ [28] Tovi Grossman, Justin Matejka, and George Fitzmaurice. 2010. Chronicle: Capture, exploration, and playback of document workflow histories. In Proceedings of the 23rd annual ACM symposium on User interface software and technology, 143-152. https://doi.org/10.1145/1866029.1866054
354
+
355
+ [29] M.L. Guha, Allison Druin, Gene Chipman, J.A. Fails, Sante Simms, and Allison Farber. 2004. Mixing ideas: a new technique for working with young children as design partners. Proceedings of the 2004 conference on Interaction design and children: building a community, 35-42. https://doi.org/10.1145/1017833.1017838
356
+
357
+ [30] Kyle J. Harms, Jordana H. Kerr, and Caitlin L. Kelleher. 2011. Improving learning transfer from stencils-based tutorials. In Proceedings of the 10th International Conference on Interaction Design and Children, 157-160. https://doi.org/10.1145/1999030.1999050
358
+
359
+ [31] Juan Pablo Hourcade, Benjamin B. Bederson, Allison Druin, and Gustav Taxén. 2002. KidPad: Collaborative storytelling for children. In CHI'02 extended abstracts on Human factors in computing systems, 500-501.
360
+
361
+ [32] Jeff Huang and Michael B. Twidale. 2007. Graphstract: Minimal graphical help for computers. In Proceedings of the 20th annual ACM symposium on User interface software and technology, 203-212. https://doi.org/10.1145/1294211.1294248
362
+
363
+ [33] Emmanuel Iarussi, Adrien Bousseau, and Theophanis Tsandilas. 2013. The Drawing Assistant: Automated Drawing Guidance and Feedback from Photographs. In ACM Symposium on User Interface Software and Technology (UIST), 183-192.
364
+
365
+ [34] Karen Keifer-Boyd. 2005. Children Teaching Children with their Computer Game Creations. Visual arts research: educational, historical, philosophical, and psychological perspectives 31, 60: 117-128. https://doi.org/10.2307/20715373
366
+
367
+ [35] Caitlin Kelleher and Randy Pausch. 2005. Stencils-Based Tutorials: Design and Evaluation. In Proceedings of the SIGCHI conference on Human factors in computing systems, CHI 2005, 541-550.
368
+
369
+ [36] Caitlin Kelleher, Randy Pausch, and Sara Kiesler. 2007. Storytelling Alice motivates middle school girls to learn computer programming. In Proceedings of the SIGCHI conference on Human factors in computing systems. https://doi.org/10.1145/1240624.1240844
370
+
371
+ [37] Heejin Kim, Eunhye Park, and Jeehyun Lee. 2001. "All done! Take it home." Then into a trashcan?: Displaying and using children's art projects. Early Childhood Education Journal 29, 1: 41-50.
372
+
373
+ [38] Ben Lafreniere, Andrea Bunt, Matthew Lount, and Michael Terry. 2012. "Looks cool, I'll try this later!": Understanding the Faces and Uses of Online Tutorials. University of Waterloo Tech Report.
374
+
375
+ [39] Kung Jin Lee, Wendy Roldan, Tian Qi Zhu, Sungmin Na, Harkiran Kaur Saluja, Britnie Chin, Yilin Zeng, Jin Ha Lee, and Jason Yip. 2021. The Show Must Go On: A Conceptual Model of Conducting Synchronous Participatory Design With Children Online. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1-16.
376
+
377
+ [40] Yong Jae Lee, C. Lawrence Zitnick, and Michael F. Cohen. 2011. ShadowDraw: Real-time user guidance for freehand drawing. ACM Transactions on Graphics 30, 4: 1-10. https://doi.org/10.1145/1964921.1964922
380
+
381
+ [41] Amna Liaqat, Benett Axtell, and Cosmin Munteanu. 2021. Participatory Design for Intergenerational Culture Exchange in Immigrant Families: How Collaborative Narration and Creation Fosters Democratic Engagement. Proceedings of the ACM on Human-Computer Interaction 5, CSCW1: 1-40.
384
+
385
+ [42] M. Lount and A. Bunt. 2014. Characterizing web-based tutorials: Exploring quality, community, and showcasing strategies. In Proceedings of the 32nd ACM International Conference on The Design of Communication CD-ROM, 1-10.
386
+
387
+ [43] Sarah McRoberts, Elizabeth Bonsignore, Tamara Peyton, and Svetlana Yarosh. 2016. "Do It for the Viewers!" Audience engagement behaviors of young YouTubers. In Proceedings of the 15th International Conference on Interaction Design and Children, 334-343. https://doi.org/10.1145/2930674.2930676
388
+
389
+ [44] Mehmet Sağlam and Neriman Aral. 2016. Developments in Educational Sciences - Child and Drawing.
390
+
391
+ [45] Christiane Moser. 2013. Child-centered game development (CCGD): Developing games with children at school. Personal and Ubiquitous Computing 17, 8: 1647-1661. https://doi.org/10.1007/s00779-012-0528-z
392
+
393
+ [46] Toshio Nakamura and Takeo Igarashi. 2008. An application-independent system for visualizing user operation history. In Proceedings of the 21st annual ACM symposium on User interface software and technology, 23-32. https://doi.org/10.1145/1449715.1449721
394
+
395
+ [47] A. November. 2012. Who owns the learning?: Preparing students for success in the digital age. Solution Tree Press.
396
+
397
+ [48] Alan November. 2012. Students as Contributors: The Digital Learning Farm. November Learning: 1-4.
398
+
399
+ [49] Isaiah Odhner. JS Paint. Retrieved from https://github.com/1j01/jspaint
400
+
401
+ [50] Dan Perkel and Becky Herr-Stephenson. 2008. Peer pedagogy in an interest-driven community: the practices and problems of online tutorials. Media@lse Fifth Anniversary Conference: Media, Communication and Humanity: 1-30.
402
+
403
+ [51] J. Piaget and B. Inhelder. 2008. The Psychology of the Child. Basic Books.
404
+
405
+ [52] Vidya Ramesh, Charlie Hsu, Maneesh Agrawala, and Björn Hartmann. 2011. ShowMeHow: Translating User Interface Instructions Between Similar Applications. In Proceedings of the 24th annual ACM symposium on User interface software and technology, 127-134.
406
+
407
+ [53] Janet C. Read, Stuart MacFarlane, and Chris Casey. 2002. Endurability, engagement and expectations: Measuring children's fun. Eindhoven: Shaker Publishing.
408
+
409
+ [54] Mitchel Resnick, John Maloney, A Monroy-Hernández, Natalie Rusk, Evelyn Eastmond, Karen Brennan, Amon Millner, Eric Rosenbaum, Jay Silver, Brian Silverman, and Yasmin Kafai. 2009. Scratch: programming for all. Communications of the ACM 52, 11: 60-67. https://doi.org/10.1145/1592761.1592779
410
+
411
+ [55] Jochen Rick, Phyllis Francois, Bob Fields, Rowanne Fleck, Nicola Yuill, and Amanda Carr. 2010. Lo-Fi prototyping to design interactive-tabletop applications for children. In Proceedings of the 9th International Conference on Interaction Design and Children, 138-146. https://doi.org/10.1145/1810543.1810559
414
+
415
+ [56] Ricarose Roque, Yasmin Kafai, and Deborah Fields. 2012. From tools to communities: Designs to support online creative collaboration in Scratch. In Proceedings of the 11th International Conference on Interaction Design and Children, 220-223. https://doi.org/10.1145/2307096.2307130
418
+
419
+ [57] Elisa Rubegni and Monica Landoni. 2018. How to design a digital storytelling authoring tool for developing pre-reading and pre-writing skills. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1-10. https://doi.org/10.1145/3173574.3173969
420
+
421
+ [58] Mona Sakr. 2018. Multimodal participation frameworks during young children's collaborative drawing on paper and on the iPad. Thinking Skills and Creativity 29: 1-11. https://doi.org/10.1016/j.tsc.2018.05.004
422
+
423
+ [59] D. Schuler and A. Namioka. 1993. Participatory design: Principles and practices. Hillsdale, NJ: Lawrence Erlbaum.
424
+
425
+ [60] Gavin Sim and Brendan Cassidy. 2013. Investigating the fidelity effect when evaluating game prototypes with children. In 27th International British Computer Society Human Computer Interaction Conference (HCI 2013): The Internet of Things, 1-6. https://doi.org/10.14236/ewic/hci2013.62
426
+
427
+ [61] A. Strauss and J. Corbin. 1998. Basics of qualitative research techniques. Thousand Oaks, CA: Sage Publications.
428
+
429
+ [62] Jean Lee Tan, Dion Hoe Lian Goh, Rebecca P. Ang, and Vivien S. Huan. 2011. Child-centered interaction in the design of a game for social skills intervention. Computers in Entertainment (CIE) 9, 1: 1-17. https://doi.org/10.1145/1953005.1953007
430
+
431
+ [63] S. Wensveen and B. Matthews. 2014. Prototypes and prototyping in design research. The Routledge Companion to Design Research. Taylor & Francis, 262-276.
432
+
433
+ [64] Wahju Agung Widjajanto, Michael Lund, and Heidi Schelhowe. 2008. "Wayang Authoring": a web-based authoring tool for visual storytelling for children. In Proceedings of the 6th International Conference on Advances in Mobile Computing and Multimedia, 464-467.
434
+
435
+ [65] Bo Xu and Dahui Li. 2015. An empirical study of the motivations for content contribution and community participation in Wikipedia. Information and Management 52, 3: 275-286. https://doi.org/10.1016/j.im.2014.12.003
436
+
437
+ [66] Heng Li Yang and Cheng Yu Lai. 2010. Motivations of Wikipedia content contributors. Computers in Human Behavior 26, 6: 1377-1383. https://doi.org/10.1016/j.chb.2010.04.011
438
+
439
+ [67] Svetlana Yarosh, Elizabeth Bonsignore, Sarah Mcroberts, and Tamara Peyton. 2016. YouthTube: Youth video authorship on youtube and vine. Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW 27: 1423-1437. https://doi.org/10.1145/2818048.2819961
440
+
441
+ [68] Maizatul HM Yatim and Maic Masuch. 2007. GATELOCK-A Game Authoring Tool for Children. Proceedings of the 6th international conference on Interaction design and children: 173-174. https://doi.org/10.1145/1297277
442
+
443
+ [69] Pixilart - Free Online Art Community and Pixel Art Tool. Retrieved August 26, 2020 from https://www.pixilart.com/
444
+
445
+ [70] Tate Kids. Retrieved August 26, 2020 from https://www.tate.org.uk/kids
446
+
447
+ [71] Dragoart - How to Draw Anime, People, Cartoons, Tattoos, Cars & More, Step by Step! | dragoart.com. Retrieved September 7, 2020 from https://dragoart.com/
448
+
449
+ [72] DrawingNow - Learn How to Draw. Retrieved September 7, 2020 from https://www.drawingnow.com/
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/HzgpxFETf5/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,305 @@
1
+ § TUTORIALS FOR CHILDREN BY CHILDREN: DESIGN AND EVALUATION OF A CHILDREN’S TUTORIAL AUTHORING TOOL FOR DIGITAL ART
2
+
3
+ § ABSTRACT
4
+
5
+ Digital art tools allow children to express their creativity and can help them develop important skills. There are numerous software tutorials available to help teach and inspire digital art enthusiasts; however, most are authored for and by adults. Given that children are increasingly contributing online digital content, in this paper, we investigate a tutorial authoring design concept where children can capture their drawings and information on their process, with the long-term objective of allowing children to share both their creativity and their workflows with other children. Through participatory design sessions, prototyping, and an evaluation, we explore children's attitudes towards the creation of digital art tutorials, focusing on their perceived incentives to author such tutorials and how they feel about the concept of sharing their tutorials with other children. We also elicit reactions towards specific design elements. Our findings suggest important considerations for tools designed to motivate and support children's creation of digital art tutorials.
6
+
7
+ Keywords: Digital art, Drawing, Tutorial authoring system, Sharing workflows, Child-computer interaction, Peer-based learning.
8
+
9
+ Index Terms: H.5.2 [Information Interfaces and Presentation (e.g., HCI)]: User Interfaces; H.5.m [Information Interfaces and Presentation (e.g., HCI)]: Miscellaneous-User studies, Participatory design
10
+
11
+ § 1 INTRODUCTION
12
+
13
+ Art is a common way for children to express themselves. Engaging in art and creativity is considered a productive use of children's time by promoting social, emotional, motor, and cognitive development [4,44], providing a sense of accomplishment, and boosting self-esteem [37]. Digital art tools allow for new effects, many of which are not possible with physical drawing tools. To inspire children to create digital art and connect with other art enthusiast peers, there are several digital art platforms that provide child-centric areas for children to share their creations [69,70].
14
+
15
+ While sharing digital art, many adult creators share not only their end products but also step-based instructions on how they used a particular feature-rich software tool to create them. In doing so, tutorial authors can both showcase their skills and creativity, and help others learn how to use feature-rich software tools to produce similar effects [38,42,52]. With these advantages in mind, prior research has contributed several tutorial systems and authoring tools to support this process [14,19,21,27,40].
16
+
17
+ Despite the potential advantages of creating and sharing digital art tutorials and the fact that children are already actively sharing digital art online, research on tutorial creation tools has generally focused on adults. In addition to showcasing skills and creativity, generating tutorials for peers would provide children with the opportunity to take on the role of a tutor, which has been shown to help children learn to think from others' perspectives, grow their sense of responsibility [2], and foster self-acceptance [1]. Along with developing useful skills such as planning and communicating, enacting the role of a teacher while creating digital tutorials can provide children with a sense of ownership and purpose [47].
18
+
19
+ In this research, we explore children's attitudes towards creating digital art tutorials for their peers and how a tutorial authoring system might support them in doing so. Our investigation centres around the following research questions: 1) Are children interested in authoring drawing tutorials for other children while creating digital art? 2) What do they see as potential benefits or incentives? 3) How might a semi-automated tool support children in creating tutorials? 4) How do children use a semi-automated tutorial authoring system to communicate their digital art workflows?
20
+
21
+ To address our research questions, we used prototyping as a means of inquiry to elicit reactions and input from our target population. We first conducted a formative study with eight children (ages 6-11) using paper prototyping to evoke responses towards an initial tutorial authoring concept and to refine individual design elements. In a second study with 16 additional children (ages 7-11), we used a higher-fidelity prototype to further probe attitudes towards creating tutorials, as well as how children might use such a tool. Findings from our study suggested that many children are interested in creating tutorials, with perceived incentives ranging from altruism, to showcasing drawing skills, to documenting their workflows for their own recollection later. Children used the higher-fidelity prototype to generate a range of creative tutorials, indicating the potential of a semi-automated tutorial authoring system to support children's tutorial creation while producing digital art. Our findings also highlight considerations for child-centric authoring tools, such as the importance of balancing tutorial creation with drawing and providing scaffolding to help children annotate their tutorials.
22
+
23
+ The paper's contributions are as follows: 1) We present findings from two studies that illustrate children's attitudes and approaches to creating digital art workflows for their peers. 2) Through an iterative design and evaluation process with children, we provide insight into how an authoring system can support children in creating digital art tutorials.
24
+
25
+ § 2 RELATED WORK
26
+
27
+ In this section, we first discuss prior research on tutorial systems and tutorial authoring tools. We then briefly discuss previous research demonstrating the potential for children to create tutorials for their peers. Finally, we turn to research on children creating different kinds of online digital content.
28
+
29
+ § 2.1 TUTORIAL SYSTEMS AND TUTORIAL AUTHORING TOOLS
30
+
31
+ Digital art is often created using complex software, which has been the focus of a large body of work on designing tutorials and other help systems to support their use [35]. For example, several studies have concentrated on generating image-based tutorials by capturing and visualizing users' operation history of using an application [27,32,46]. There are also systems that automatically generate tutorials containing both the workflow histories and videos of the operations [14,28]. Our work is informed by these prior authoring systems; however, whereas the above work has focused on adults, we specifically focus on a system to help children create tutorials, involving them in the design process.
32
+
33
+ * email address
34
+
35
+ Also relevant to our work are systems that assist users with digital drawing, for example, by providing guidance on how to attain certain effects or drawing elements [19,21,33,40]. Other work has focused on assisting children in applying tutorials, by helping them locate relevant elements in the target software [30]. In our work, we focus on tools to support children in documenting their drawings and processes. As such, we see our work as being complementary to but distinct from this prior work on helping users (including children) achieve greater drawing success.
36
+
37
+ Although we are not aware of prior work examining tutorial authoring tools for children, there are online platforms for sharing digital art and tutorials with a degree of child focus. For example, DragoArt [71] and DrawingNow [72] list some drawing tutorials targeted at children, however, the vast majority are authored by adults or staff illustrators. Our work focuses on involving children in the design process and on eliciting their reactions to creating tutorials for other children.
38
+
39
+ § 2.2 CHILDREN TUTORING THEIR PEERS
40
+
41
+ Our work builds on previous research showing that children can author academic digital tutorials for their peers [47,48] and other teaching-oriented resources, such as educational games to teach other children [34]. Art tutorials differ from those investigated above in that they have the potential to focus more on creativity and inspiration generation than on teaching specific topics. For example, a child-generated math tutorial is created to help peers understand and review a mathematical concept [48], whereas a drawing tutorial might serve to inspire artistic creativity in others. Recent research showed the potential of a music learning app, where children recorded their piano pieces and tutorials on different practice strategies and shared those in an online space to encourage and help their peers learn to play piano [11]. This research shares motivations similar to ours: inspiring creativity and supporting peer-based learning while enabling children to showcase their artistic competency.
42
+
43
+ Authoring content for other children can help children develop a variety of skills. For example, researchers have investigated the design of game-authoring tools for children [24,68] since game creation has the potential to develop narrative skills, improve critical thinking, computer and media literacy, and boost self-esteem [3,24,34]. Collaborative storytelling authoring tools [57,64] improve children's communication skills and writing abilities [57]. Motivated by these benefits, we explore children's attitudes towards a tutorial authoring system that allows them to create digital art tutorials for other children.
44
+
45
+ § 2.3 CHILDREN'S CREATION OF DIFFERENT CREATIVE DIGITAL CONTENT
46
+
47
+ Our work is also inspired by prior work showing that children are interested in and capable of generating creative digital content with the purpose of sharing this content with others. For example, online programming environments like Scratch [54] provide children with the opportunity to create their own interactive digital content, share ideas, collaborate, and communicate with like-minded peers [12,16,56]. Interactive digital storytelling platforms also allow children to practice creativity by generating imaginative stories and collaborating with others [5,8,26,31,41]. In another vein, online user-generated video-sharing communities like YouTube are becoming increasingly popular among children as a stage to exhibit their skills [67] and engage actively with their audience [43]. Digital art creation is but another way for children to express their creativity. Hence, to support children's creation of digital art, researchers have focused on children's cooperative drawing approach [58] and proposed tools to promote collaboration among peers [7,20]. Findings from these studies suggest that appropriately designed tools to create digital content can provide children with the opportunity to express themselves [6,43,67], showcase their innovativeness [5,12,16,54], and also inspire others to participate and collaborate [8,26,31,56]. These findings motivated us to investigate how children would approach a tutorial authoring system where they can create digital art tutorials as guidelines for their peers while showcasing their digital art skills.
48
+
49
+ § 3 AUTHORING AND SHARING WORKFLOWS: GENERAL APPROACH
50
+
51
+ To generate insight into how children respond to the idea of documenting and sharing their digital art workflows, we used prototyping as a means of inquiry. Based on previous research showing the value of low-fidelity (lo-fi) prototyping in designing child-oriented applications [23,45,55,60,62], we started with a paper prototype, which we used to elicit initial reactions in a formative study. We then used insights from this formative study to develop a higher-fidelity prototype that we used to conduct a more detailed evaluation.
52
+
53
+ After exploring prior work on tutorial authoring [14,19,27,32,33,35,42,46], we used sketching to explore features that could facilitate a child's tutorial creation process. For example, we considered automatically capturing screenshots or videos of drawing steps (i.e., after each tool use or drawing modification), enabling children to capture their own steps while drawing, and allowing children to create tutorial steps later from a recorded video of their drawing process. In comparing alternatives, our goal was to keep the tutorial creation process simple, to provide some autonomy, and to avoid detracting too much from the fun of drawing.
54
+
55
+ After our sketching process and review of prior work, we settled on an initial design direction that involves allowing children to capture information on their workflows while they are drawing. Based on prior work showing that most tutorials follow a step-based nature [27,42], our tutorial authoring approach assists the child in recording and documenting individual steps of their drawing. In our approach, the child decides when they are ready to save a step, with the prototype capturing the image and information regarding the tools used during that step. To communicate information about their drawing to others, we let children provide comments or tips associated with their steps since prior work suggests that instructions including a combination of images and text are more useful than those that rely on either images or text in isolation [27]. Finally, we wanted to include a review component, where the child could potentially modify their tutorial before saving it and/or sharing it. Our current design approach does not include video demonstrations. We made this design decision based on previous research indicating that navigating video or animations can be complex and time-consuming [27,46]; however, adding video elements could be explored in future research.
56
+
57
+ Our initial target audience for this approach was children who are 6-11 years old. We targeted this range to cover children who can think logically and make independent decisions (ages 6-10) [18] and who can reason inductively and think from others' perspectives (ages 7-11) [51]. We refined our target age to 7-11, based on observations from our formative study.
58
+
59
+ § 3.1 LOW-FIDELITY PROTOTYPE
60
+
61
+ To explore the general authoring approach described above with children, we created a low-fidelity prototype. The lo-fi prototype is a paper-based template for a tutorial authoring and display system that has slots for each step in the tutorial (Figure 1). The steps are determined by the child while they are drawing: when they feel that they have reached a step in their workflow, an image of that step is added in the next available slot. Each step also includes sticky notes for the tools used and any comments the child provides. Figure 1 illustrates an example of a complete workflow created by a child with our prototype.
62
+
63
64
+
65
+ Figure 1. Low-fidelity Prototype. The workflow was generated by a 9-yr-old girl participant in our formative study; The prototype depicts a series of steps to the drawing as defined by the participant. (A) Each box contains a picture of the image that the participant generated using the drawing program for that step. (B) The icon at the bottom left corner of each box indicates which tool was used during that step. (C) The participant also provided tips or comments on pieces of paper and attached them with her captured steps.
66
+
67
+ A challenge that we faced while paper prototyping was simulating the tools (e.g., colour effects, undo/redo, copy and paste) of a digital drawing application on paper in a way that would be engaging for children. So, instead of drawing on paper, we decided to let children draw using Microsoft Paint, which meant that we needed a way to transfer different states of their drawing to the paper prototype. We used a camera and a Polaroid printer for this purpose to capture the image on the screen and quickly print a photo to attach to the paper prototype. This enabled a child to work with the compelling drawing tools, while still retaining the advantages of paper prototyping for eliciting design feedback.
68
+
69
+ § 4 FORMATIVE STUDY
70
+
71
+ We used our lo-fi prototype in a formative study to elicit initial reactions from children on the idea of sharing digital art along with how they made it. To design appropriate child-oriented technology, prior research has recommended involving children in the design process by adopting and extending participatory design methods [10,11,17,29,41,59]. Inspired by this body of research, we also conducted participatory design sessions with the children to refine our authoring system concept. Throughout these participatory design sessions, we encouraged the children to share their ideas while interacting with our low-fidelity prototype.
72
+
73
+ § 4.1 PARTICIPANTS
74
+
75
+ We recruited 8 participants (5 girls, 3 boys) who were 6-11 years old with previous experience in digital drawing through snowball sampling [25] and by placing advertisements throughout our university campus. We also asked for a parent to participate in the study so that we could interview them regarding any concerns they might have. After receiving written consent from the parents and verbal assent from the children, we initiated the study sessions. We informed the child that they could withdraw from the study at any time. In appreciation of their time and participation, the children received a small toy of their choice and the parents received $15 in cash. The study was approved by our research ethics board.
76
+
77
+ Of the eight participants, the two six-year-olds had difficulty grasping the idea of capturing the workflows of their digital art. The remaining six older children (i.e., 7-11) seemed to understand the concept and were therefore in a better position to provide concrete feedback. We report on findings from these six children in the 7-11 age range. We also used these observations to adjust the target age range for our second study.
78
+
79
+ § 4.2 STUDY TASKS, PROCEDURE, AND DATA COLLECTION
80
+
81
+ The study was conducted in a research laboratory (pre-COVID) with one participant at a time. As per our institutionally approved protocol, the child's parent was also present during the entire session. The main tasks in these sessions involved the child creating a digital drawing while a researcher helped them capture their drawing steps. Using the lo-fi prototype, the child then worked with the researcher to craft a tutorial.
82
+
83
+ To help the children understand the context of the use of our prototype, we started the study session by demonstrating a storyboard prototype (see the Supplementary Material). The storyboard introduced the idea of creating a tutorial by depicting a scenario, where a child creates a tutorial and shares it with her friends to illustrate her drawing workflow. Next, we asked the child a few interview questions on their thoughts on sharing their drawings and workflows, seeing others' drawings, and following others' workflows. We then showed the child a PowerPoint prototype to demonstrate what capturing steps of their drawing might look like, before asking the child to draw using MS Paint.
84
+
85
+ After we had introduced the tutorial concept, we asked the child to create a drawing to include in a tutorial. We encouraged the child to tell us when they were ready to create a step, at which point we took a picture of their screen with our camera.
86
+
87
+ When the child was done with the drawing, we printed the captured photos. Then, the child and the researcher started pasting the photos on the prototype. We had a set of small sticky notes with icons of different drawing tools that the child could attach under each step. They could also write tips and comments on pieces of paper and attach those to the steps. We asked them about what they liked and did not like about the prototype, what they would want to change, and what other information they thought might be useful for another child wanting to follow their tutorial. During this process, we encouraged the participant to draw and sketch on the paper prototype to demonstrate their design ideas.
88
+
89
+ We concluded our study session by interviewing the child's parent about any concerns they might have regarding children's sharing of digital art and workflows. Each session lasted approximately one hour.
90
+
91
+ We collected data from our participatory design sessions, the semi-structured interviews with the children, and the short semi-structured interviews with their parents. We video-recorded the participatory design sessions and audio-recorded the interview sessions, which we transcribed and analyzed using open coding to identify participants' views (both positive and negative) towards our tutorial authoring approach and specific design ideas. While qualitative analysis should not necessarily yield counts, we felt that we could see clear enough boundaries in participant views to include counts in our reporting. We do so to give a sense of how prevalent certain sentiments were in our data.
92
+
93
+ § 4.3 FINDINGS
94
+
95
+ § 4.3.1 FEEDBACK FROM THE CHILDREN
96
+
97
+ When asked whether they would like to share their drawings with others, most of the participants (5/6) expressed enthusiasm for the idea of sharing drawings and workflows to showcase their drawing skills and also to help others attempt to recreate their drawings.
98
+
99
+ Then someone can do that too and then they'll be happy too. - P3 (7-yr girl)
100
+
101
+ Only one participant was hesitant to share his drawing because he felt that it was not good enough, suggesting a lack of confidence.
102
+
103
+ All six participants were interested in seeing other children's drawings. They found this concept entertaining and thought it would help them generate ideas. All participants also expressed interest in seeing the workflows behind these drawings. They felt it would help them recreate a particular drawing they liked.
104
+
105
+ Once my friend Danny, she drew a really cool thing like a girl, and I was like how did you do that?! I would like to try that. - P6 (9-yr girl)
106
+
107
+ From our participatory design sessions, we observed that all six children who were 7-11 years old understood what steps are in a workflow. All six liked the sequential way of displaying the steps as shown in Figure 1. They also found the icons of the tools associated with each step helpful. They believed the display of the workflows was simple and intuitive for other children to understand the drawing process.
+
+ I like this because if you are reading a book, you'll go like this. - P3 (7-yr girl)
112
+
113
+ All participants created multiple steps to illustrate their drawing process. They did not hesitate to let us know when to capture a photo of the drawing to make it a step. However, sometimes, when concentrating intently on their drawing, a few children (3/6) forgot to capture some of their steps. To tackle this, one participant suggested showing reminders. Participants did not, however, want the system to capture steps without their permission - they wanted to remain in control.
114
+
115
+ In terms of annotating their workflows, while most children were reluctant to write comments at the beginning, everyone attached at least one comment. Examples of their comments included: "Don't try to use pencil for this one", "Careful, this might be the hardest part!", "Now you're done. Great job!". One participant mentioned that having the option to write comments while saving the steps would be more beneficial as they might think of a comment while drawing a particular step and forget about it later.
116
+
117
+ § 4.3.2 FEEDBACK FROM PARENTS
118
+
119
+ In general, the parents were not concerned about their children sharing their drawings online (6/6). Their main concern was that appropriate parental controls be in place to control what children are sharing and with whom (3/6). A concern more specific to this sharing domain was that children sharing art tutorials might negatively affect their creativity if they always try to follow others' instructions (2/6). On balance, the parents tended to feel that the opportunity to learn to draw from other children would have positive effects (4/6):
120
+
121
+ Sometimes learning to do something somebody else's way can kind of encourage you and give you ideas for how to do something your way. I don't think it'll stifle her creativity as long as she has time and space to do her own things too.
122
+
123
+ To summarize, in response to our first research question of how children would feel about the idea of creating tutorials to share with other children, our formative study provides preliminary insight that our participants were generally positive about the idea. We did see some hesitance that might be attributed to lower confidence, however, warranting further study with a larger sample. In terms of our specific design approach, which borrows elements from adult-oriented tutorial authoring systems (e.g., sequentially displayed steps, commenting, etc.), our participants were generally comfortable with the main interaction style and provided feedback on how to further improve it to meet their needs (e.g., step capture reminders and more flexible commenting). The parents responded positively to the idea of their child sharing their drawings with others, provided proper parental controls were in place.
124
+
125
+ § 5 DEVELOPING A HIGHER-FIDELITY PROTOTYPE
126
+
127
+ In the next phase of our research, we converted our lo-fi prototype into a higher-fidelity one by incorporating children's feedback from the formative study. In creating the higher-fidelity prototype, our goal was to use it as a means of further inquiry [63]. Specifically, we wanted to use this higher-fidelity prototype to gain more detailed insights into how children might respond to our tutorial authoring approach. To facilitate our prototype development, we used a mix of automated capture and Wizard-of-Oz techniques.
128
+
129
+ Our higher-fidelity prototype (Figure 2) allows a child to generate a tutorial while drawing digital art by enabling them to self-capture, annotate, and edit their drawing steps. Our prototype currently works with JS Paint [49] (Figure 2A), an open-source drawing program. While using this program to draw something, when the child chooses to capture a step by clicking the "Save Step" feature (Figure 2A), the prototype automatically records the current state of the drawing as well as the tools used as part of that step. The prototype also allows the child to add a comment when saving a step (Figure 2B). This design decision was based on the feedback from our formative study that some children preferred to write comments while working on the drawing to avoid forgetting them. During our formative study, we also observed that when concentrating on their drawing, participants sometimes forgot to save steps, which they later regretted. Our prototype, therefore, prompts the child to save a step at regular intervals. These prompts are currently controlled via a wizarding interface, which allows a facilitator to issue a reminder if the child appears to be forgetting to save their steps.
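+
+ To make this capture mechanism concrete, the sketch below shows one plausible way a "Save Step" recorder could be wired into a browser-based paint tool. It is a minimal TypeScript illustration under our own assumptions, not the prototype's actual code: the names (StepRecord, TutorialRecorder), the reminder interval, and the callback wiring are all hypothetical.
+
+ ```ts
+ // A minimal sketch of a "Save Step" recorder for a web-based paint tool.
+ // Hypothetical names and structure; not the prototype's actual implementation.
+
+ interface StepRecord {
+   image: string;      // PNG data URL of the canvas when the step was saved
+   tools: string[];    // tools used since the previous step (e.g., "brush", "fill")
+   comment?: string;   // optional tip the child attaches while saving
+   timestamp: number;
+ }
+
+ class TutorialRecorder {
+   private steps: StepRecord[] = [];
+   private toolsSinceLastStep = new Set<string>();
+
+   constructor(private canvas: HTMLCanvasElement,
+               private remindAfterMs = 120_000,  // in our study, reminders were wizard-issued
+               private onReminder?: () => void) {
+     setInterval(() => this.maybeRemind(), this.remindAfterMs);
+   }
+
+   // Call whenever the child picks or uses a drawing tool.
+   noteToolUse(tool: string): void {
+     this.toolsSinceLastStep.add(tool);
+   }
+
+   // Invoked by the "Save Step" button: snapshot the drawing and the tool set.
+   saveStep(comment?: string): StepRecord {
+     const step: StepRecord = {
+       image: this.canvas.toDataURL("image/png"),
+       tools: [...this.toolsSinceLastStep],
+       comment,
+       timestamp: Date.now(),
+     };
+     this.steps.push(step);
+     this.toolsSinceLastStep.clear();
+     return step;
+   }
+
+   getSteps(): readonly StepRecord[] {
+     return this.steps;
+   }
+
+   private maybeRemind(): void {
+     const last = this.steps[this.steps.length - 1];
+     // Prompt only if no step has been saved recently.
+     if (this.onReminder && (!last || Date.now() - last.timestamp > this.remindAfterMs)) {
+       this.onReminder();
+     }
+   }
+ }
+ ```
+
+ In such a design, the child remains in control of when a step is created (saveStep is only called from the button), which matches the preference our formative-study participants expressed.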
130
+
131
132
+
133
+ Figure 2: Higher-Fidelity Prototype: (A) A "Save Step" feature has been added to JS Paint. When clicked, the prototype captures the progress of the drawing along with tools used for that step; (B) The child can optionally choose to provide a comment with the step; (C) Upon completion of the drawing, the prototype displays the captured steps sequentially along with the associated comments and used tools. Children can edit both the comments and the tools.
134
+
135
+ After the child completes their drawing, the prototype displays an automatically generated step-based tutorial in an HTML page that the child can open, as shown in Figure 2(C). This tutorial displays the sequence of steps captured by the child and includes information on the tools used as well as any comments that the child provided while drawing (Figure 2C). Children can further modify the tutorial by editing comments, deleting unnecessary tool information, and deleting entire steps. After they finish editing the tutorial, the prototype displays the final version of the tutorial so that the child could potentially share the tutorial with other children (e.g., friends). We leave investigating tools for sharing these captured tutorials for future research. In the Supplementary Materials, we include a short video walkthrough of the prototype.
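+
+ As a rough illustration of this generation step, the sketch below renders recorded steps into a simple sequential HTML page and shows one of the editing operations (deleting a step). It is a hedged sketch that reuses the hypothetical StepRecord shape from the capture sketch above; the function names and markup are our own assumptions, not the prototype's actual output format.
+
+ ```ts
+ // Sketch: render recorded steps as a sequential HTML tutorial page.
+ // Assumes the hypothetical StepRecord interface from the capture sketch.
+
+ function escapeHtml(text: string): string {
+   return text.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
+ }
+
+ function renderTutorial(title: string, steps: StepRecord[]): string {
+   const sections = steps.map((step, i) => `
+     <section class="step">
+       <h2>Step ${i + 1}</h2>
+       <img src="${step.image}" alt="Drawing after step ${i + 1}">
+       <p class="tools">Tools used: ${step.tools.map(escapeHtml).join(", ") || "none recorded"}</p>
+       ${step.comment ? `<p class="comment">${escapeHtml(step.comment)}</p>` : ""}
+     </section>`).join("\n");
+
+   return `<!DOCTYPE html>
+ <html><head><meta charset="utf-8"><title>${escapeHtml(title)}</title></head>
+ <body><h1>${escapeHtml(title)}</h1>${sections}</body></html>`;
+ }
+
+ // One of the review-screen edits: remove a step, then re-render the page.
+ function deleteStep(steps: StepRecord[], index: number): StepRecord[] {
+   return steps.filter((_, i) => i !== index);
+ }
+ ```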
136
+
137
+ § 6 FURTHER CONCEPT EXPLORATION & PROTOTYPE EVALUATION
138
+
139
+ Our formative study provided initial indications that children seemed open to the idea of generating tutorials for other children. In this second study, we use our higher-fidelity prototype to investigate incentives children might have to generate a tutorial for others. We were also interested in observing how they use the prototype, including how they might balance tutorial generation while focusing on their own art, how they would decompose their drawings into steps, and what type of comments they would leave for other children.
140
+
141
+ Due to the COVID-19 pandemic, we transitioned to an online study, where we interacted with participants using video conferencing software.
142
+
143
+ § 6.1 PARTICIPANTS
144
+
145
+ We recruited 16 participants for our study (8 girls, 8 boys), all of whom were 7-11 years old (mean age: 9.5), through word-of-mouth and snowball sampling [25]. Given that the study was conducted online, we were able to recruit internationally, including 9 participants from Canada, 6 from the US, and 1 from Bangladesh. We again recruited participants with previous experience in using digital tools to draw. Participation was voluntary, and the children were informed that they could withdraw from the study at any time. After receiving written consent from the parents and verbal assent from the children, we initiated the study sessions. The parents' presence on video was optional, based on the comfort level and preference of the child. In appreciation for their participation, the family was provided with $20 in cash or as a gift card. The study was approved by our institutional research ethics board.
146
+
147
+ § 6.2 STUDY TASKS, PROCEDURE, AND DATA COLLECTION
148
+
149
+ To conduct the study remotely, we used video conferencing software with the parent's supervision. To enable the facilitator to act as the prototype "wizard", we used TeamViewer, which allowed the participants to access the facilitator's computer screen directly. This also meant that participants did not have to install any other software to run our prototype. Each study session was approximately 60 minutes long.
150
+
151
+ Like our initial formative study, we began by showing the child a storyboard (see the Supplementary Material) to introduce them to the idea of tutorial authoring and asked them some preliminary questions regarding their attitudes towards sharing their drawings and/or workflows. After that, the facilitator demonstrated the prototype by creating a short tutorial.
152
+
153
+ Next, we asked participants to perform the following two tasks: 1) We asked the participant to draw something of their choice. We asked them to capture their steps while drawing and told them that they could provide comments with each step if they wanted to. 2) After the child completed their drawing, we asked them to use the prototype to review the generated tutorial and make any desired modifications. We also showed the participant another tutorial of a simple drawing to get preliminary feedback on how they might feel about using others' tutorials.
154
+
155
+ After completing each task, we asked the child open-ended questions about their experience of using the prototype. We intermixed the interviews and tasks to create a more conversational atmosphere with the child as well as to provide a break from using the prototype. In piloting, we found these breaks to be particularly important with the study being online. We also asked them survey questions by adapting the Fun Toolkit survey technique [53], which has been used in previous studies with children to evaluate interface usability. Using the toolkit, we asked 10 closed, fixed-response questions, covering: i) how they felt about using the prototype's features; ii) which task they liked most, and iii) whether they would like to do each task again. The questionnaire items can be found in the Supplementary Materials. Participants completed the surveys on the facilitator's computer (using TeamViewer).
156
+
157
+ Our main source of data was obtained from the semi-structured interviews conducted throughout the study. We recorded the entire study sessions using a screen recorder to capture the interactions with the prototype. Finally, we used the surveys to elicit structured data on children's experiences with the prototype.
158
+
159
+ § 6.3 FINDINGS: TUTORIAL CREATION
160
+
161
+ Most of our participants (12/16) were familiar with the concept of a tutorial prior to the study and all successfully generated a step-based tutorial using the prototype. Participants' drawings included flowers, unicorns, nature scenery, ships, and favorite Lego characters. Figure 3 and Figure 4 show two example tutorials created by participants in the study (one from a 7-year-old girl and one from an 11-year-old boy).
162
+
163
+ The median number of steps generated by the participants was 7 per tutorial (min: 4; max: 10; IQR: 2). For most participants, each new element added to the drawing constituted a step. As the formation of a step was conceptual and related to elements of a child's drawing, this indicates that implementing automated step capture would be challenging. For example, simply creating a step for each tool used would have resulted in tutorials with much lower granularity than those created by our participants.
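+
+ To see why naive automation would miss this element-based segmentation, consider a heuristic that starts a new step on every tool change. The TypeScript sketch below (with a hypothetical event shape of our own, not drawn from the prototype) illustrates the mismatch: a child drawing a single element often switches tools several times, so tool-based cuts produce a segmentation driven by tool switches rather than by drawing elements.
+
+ ```ts
+ // Sketch: a naive auto-segmentation heuristic that cuts a new step on every
+ // tool change. The DrawEvent shape is hypothetical, for illustration only.
+
+ interface DrawEvent {
+   tool: string;   // e.g., "pencil", "fill", "eraser"
+   // stroke geometry omitted for brevity
+ }
+
+ function segmentByToolChange(events: DrawEvent[]): DrawEvent[][] {
+   const steps: DrawEvent[][] = [];
+   let current: DrawEvent[] = [];
+   for (const ev of events) {
+     if (current.length > 0 && current[current.length - 1].tool !== ev.tool) {
+       steps.push(current);  // tool changed: close out the current "step"
+       current = [];
+     }
+     current.push(ev);
+   }
+   if (current.length > 0) steps.push(current);
+   return steps;
+ }
+
+ // Drawing one tree as pencil -> fill -> pencil yields three machine-generated
+ // steps where a child would likely have saved a single "tree" step.
+ ```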
164
+
165
+ Almost all participants (15/16) provided comments with their steps. The median number of comments per tutorial was 6.5 (IQR: 3) and 14 participants provided a comment with each step. Comments often described the drawing element in the step, e.g., "the ocean", "Lego arms", "moon". Some participants provided more detailed or specific instructions with their comments, e.g., "You first make a cube-like structure", "Make a hill and colour on top", "Add texture to the grass", "Add any of your imaginary details you like". Notably, most of the comments did not focus on how to use certain features of the drawing application. Overall, we did not observe any age differences manifest themselves in the commenting style or informativeness.
166
+
167
+ Our survey findings indicated that all 16 participants felt positive about creating steps and viewing others' tutorials. 14/16 participants also felt positive about writing comments. Additionally, we found that 10/16 participants wanted to create a tutorial again; the remaining 6/16 indicated that they might be interested in doing so. However, 12/16 participants did not like to edit their tutorials once they had completed the drawing. This potentially supports our idea of capturing and generating steps while the child is creating the drawing.
168
+
169
+ During the study, we also looked for indications of how children approached the task of authoring tutorials while drawing. We observed that participants did not take time to plan out how they want to design a tutorial before starting to work on the drawing. Instead, they seemed to go with the flow, adding elements as they saw fit in the moment while drawing. Participants worked intently on their drawings and tutorials, suggesting that they cared about the final product. Six participants were so engrossed in working on their tutorials that we had to cut them off due to time limitations.
170
+
171
172
+
173
+ Figure 3: Tutorial authored by a 7-year-old girl (P16)
174
+
175
176
+
177
+ Figure 4: Tutorial authored by an 11-year-old boy (P6)
178
+
179
+ § 6.4 FINDINGS: INTERVIEWS
180
+
181
+ Since we had a larger sample size and a larger volume of data, we applied a more rigorous qualitative analysis procedure than we did with our formative study, where the goal was to gather preliminary design insights. We transcribed the interview data and then analyzed it by using a bottom-up inductive approach and creating affinity diagrams to identify themes in the data [9]. While creating the affinity diagrams, one researcher initially applied open coding [15] to the quotes and then used affinity diagramming to refine the initial set of codes. This same researcher then clustered related quotes and performed axial coding [61] to identify themes. Two researchers collaboratively iterated on the raw data, clusters, and codes until clear themes emerged.
182
+
183
+ From our analysis, themes emerged related to incentives for creating and sharing, and attitudes towards child-authored tutorials, which we present below. The data collected from this study also contained insight regarding how different features of our prototype might support tutorial creation. To contextualize the quotes, we provide each participant's age and gender. As with our formative study, we report counts to give a sense of prevalence of the sentiments within our data, however, we once again acknowledge that doing so is a contentious issue within qualitative HCI research.
184
+
185
+ § 6.4.1 PERCEIVED INCENTIVES AND DETERRENTS TO CREATE AND SHARE TUTORIALS
186
+
187
+ Table 1 summarizes the reasons children provided for and against the idea of creating and sharing tutorials, which addresses our second research question. All 16 children provided at least one reason in favor, with some providing multiple reasons. Five children expressed mixed views. We elaborate on their reasons below.
188
+
189
+ Altruism: The main incentive to create and share tutorials for most of the participants (14/16) was altruism. There were some nuances, however, in how children expressed their desire to create tutorials as a way of helping others. For example, some of the children (4/16) wanted to help people in general by sharing their tutorials, whereas others specifically wanted to help their friends (3/16). In terms of why they wanted to share, participants were motivated to give other kids new ideas for drawing, and felt that their tutorials could help other kids create the drawings easily:
190
+
191
+ Table 1: Reasons for and against creating and sharing tutorials, along with the number of participants who felt this way
+
+ Reasons for Creating and Sharing Tutorials:
+ - Altruism (14 participants)
+ - Assessing own tutorial authoring skills and seeking validation (12 participants)
+ - Showcase drawing skills (6 participants)
+ - To keep a record of their own drawing (5 participants)
+
+ Reasons for Hesitating to Share Tutorials:
+ - Lack of confidence (5 participants)
217
+ I'd like to show my friends so that they can get an idea of what to do next when they draw again and also, I can show them a few steps about how they can make it. ... I'd like it because it'd feel good. Like I'm helping people without even seeing them. - P2 (9-yr boy)
218
+
219
+ Other children (4/16) liked the idea of showing kids how they might draw something differently. For these participants, it seemed to be less about showcasing the final product and more about illustrating their process.
220
+
221
+ It's fun and lets other people learn how to draw something in another way. - P15 (10-yr girl)
222
+
223
+ A few participants (3/16) wanted to share their tutorial only if their friends specifically asked for it. They were not confident in their drawing skills and were shy to share their drawings with others unless someone needed them.
224
+
225
+ Assessing Own Tutorial Authoring Skills and Seeking Validation: Some participants (7/16) wanted to share workflows with others to assess their tutorial authoring skills. If others could reproduce or make a better version of their drawings by following their tutorials, they felt that it implied that their tutorial was understandable and useful.
226
+
227
+ I'd just wanna see how good the steps were that I made, and if they ended up making it look more realistic. - P12 (11-yr boy)
228
+
229
+ Others (9/16) thought they would feel validated even just by having another child try their tutorial, since this would mean they produced something interesting. Knowing that others were going to view and use their tutorials to create a drawing gave them the satisfaction that their art is appreciated by others and their effort is valued.
230
+
231
+ I'd like it because some kids like to draw, and I'd like it if they do this thing. I'd be happy too, to see that they used my tutorial. - P15 (10-yr girl)
234
+
235
+ Showcase Drawing Skills: Some of the children who seemed particularly confident in their art and drawing skills wanted to create and share their tutorials to showcase their skills (6/16). For these children, it seemed less about receiving validation and more about having an outlet to share their creativity with others.
236
+
237
+ If I'm proud of the artwork then I'd wanna show it to other people. So that they have an opportunity to try doing art and learn. - P14 (11-yr girl)
240
+
241
+ To Keep a Record of Their Own Drawing: Finally, some participants (5/16) wanted to create tutorials to keep a record for themselves so that they could review it later to recreate the drawing. This indicates that even if a child is not comfortable sharing their tutorials with others, they can still create tutorials for themselves.
242
+
243
+ If I ever went back and reviewed it, it kinda leaves like a bookmark... Next time you can follow the steps again. - P6 (11-yr boy)
244
+
245
+ Lack of Confidence a Deterrent to Sharing: Some participants (5/16) were hesitant to create and share tutorials because they believed that their drawing skills were not adequate to create tutorials, even though we did not find their drawings to be noticeably worse than the other participants'. They were not confident that others would like their tutorials.
246
+
247
+ Some of them are better at drawing and I'm scared that they're gonna judge me. - P14 (11-yr girl)
248
+
249
+ P14 mentioned earlier in the interview that she wanted to showcase her drawing skills by sharing the artwork she is proud of. However, at the same time, she had some reservations about sharing due to her lack of confidence. This indicates that some children might be in conflict about whether to share their tutorials.
250
+
251
+ § 6.4.2 FEEDBACK ON THE DESIGN APPROACH
252
+
253
+ During our interviews, children provided feedback on our semi-automated tutorial authoring approach as well as on individual design elements.
254
+
255
+ Capturing Steps Was Intuitive but Can Divert Attention: Participants generally found saving steps while creating the drawing to be simple and intuitive (10/16). One participant mentioned that she got so accustomed to saving steps that she did it without even thinking about it.
256
+
257
+ At one point I kinda forgot that to save step (indicating that she was using the feature of saving steps subconsciously). I kinda got used to saving the steps. - P14 (11-yr girl)
262
+
263
+ On the other hand, some participants (6/16) felt that saving steps distracted them from their drawing. They worried that pausing to save steps might ruin their flow and they might forget what they wanted to do.
264
+
265
+ I was kinda in a mood. I like focusing on what I'm doing instead of stopping and doing something else. - P16 (7-yr girl)
266
+
267
+ While an automated step capture feature could avoid this hassle, the challenge would be developing an algorithm that can detect the conceptual, element-based segmentation that children seem to want to employ when manually capturing steps.
268
+
269
+ Mixed Reaction Towards Writing Comments: Though all but one participant provided comments with their steps, only half of those participants (7/16) explicitly discussed the value that they saw in providing comments. They believed that comments could assist others to go through the steps and also help them remember what the steps meant if they wanted to review their own tutorials.
270
+
271
+ Writing comments is a good way to explain it because sometimes just looking at pictures doesn't make sense. - P14 (11-yr girl)
272
+
273
+ Some of the participants who were not as enthusiastic about commenting (4/16) found it difficult to come up with appropriate comments. They indicated that it was sometimes hard to explain the steps the way they wanted.
274
+
275
+ Sometimes you have another way to say it in your head and it's complicated to put it in comments. - P15 (10-yr girl)
276
+
277
+ Thus, overall, we observed mixed reactions towards commenting: some were enthusiastic about writing comments; for others, it seemed to be a source of pressure. At a minimum, this supports our decision to make commenting optional. Future versions could explore ways to assist the children who want to provide comments but struggle to verbalize their thoughts.
278
+
279
+ Tools Are Not Always Sufficient: The tool information provided with each of the steps was seen as useful by most participants, who felt it gave a clear idea of which tools were needed to achieve a certain effect. However, some participants wanted to provide more information regarding the tools that they used (6/16). For example, in addition to the tool name and icon, some tools could have more details, such as brush size, the colour of the paint, etc. Future versions could explore designs that include additional information for certain tools.
280
+
281
+ § 6.4.3 ATTITUDES TOWARDS FOLLOWING OTHER CHILDREN'S TUTORIALS
282
+
283
+ In addition to getting insights into children's incentives to generate tutorials, we hoped to gain initial insight into how the children felt about being consumers of kid-generated tutorials. To keep sessions at a reasonable length, we showed participants a sample tutorial to elicit their opinions, but they did not have to follow a tutorial.
284
+
285
+ All children in the study responded positively to the idea of viewing another child's tutorial. The main reason for wanting to see others' tutorials was to gain new ideas and inspiration (11/16). Participants mentioned that they were sometimes unsure about what to draw or how to start, and were interested in seeing other ways to draw something. Participants (7/16) also mentioned that they could learn from others who are better at drawing by viewing their tutorials and by comparing their drawings to find potential ways to improve. One child mentioned that she wanted to make the authors feel happy that someone had tried out their tutorial. Although they were willing to view others' tutorials, three participants were not enthusiastic about the idea of following others' tutorials. They indicated that they did not like following instructions or wanted to draw something in their own way, with their own creativity.
286
+
287
+ § 7 DISCUSSION, LIMITATIONS, AND FUTURE WORK
288
+
289
+ Findings from our second study suggest that most of the children were interested in and capable of authoring drawing tutorials. The study findings also shed light on children's perceived incentives to author and ultimately share their tutorials, which included helping their peers and other social incentives (e.g., seeking validation and showcasing skills). Some also wanted to maintain a record for their own purposes. We were surprised by the extent to which their motivations mirrored those found in prior work on adult populations. For example, altruism is an intrinsic motivator for adults who share their knowledge online [65]. Similar to the incentive of 'showcase drawing skills', adults also author tutorials to showcase the workflows they find interesting [50]. Self-efficacy is another important consideration [66]. In our studies, we noticed that children's level of confidence in their drawing abilities seemed to affect their attitudes towards sharing.
290
+
291
+ Our findings indicate that a semi-automated tutorial authoring system can potentially enable children to generate step-based tutorials. In terms of important design considerations, most children in our study responded positively to the idea of creating a tutorial while they were drawing. Further, they found the post-hoc modifications to be the least fun activity of the study session. This suggests that interleaving tutorial generation with the principal activity is a promising design direction. We saw that children wanted to control the granularity of their steps, but sometimes became so engrossed in the drawing activity that they forgot to do so. Adaptive prompts or automated step-capture features could potentially address this but would need to consider the characteristics and tendencies of the child artist. Our findings also suggest that children appreciated the ability to annotate their steps; however, some found it difficult to craft good comments. Future work could therefore consider ways to scaffold this process, for example, through sample comments or comment templates.
292
+
293
+ Children seemed interested in and open to the idea of using another child's tutorial; however, further study is needed to understand the relative advantages and disadvantages of child- vs. adult-authored tutorials for this type of creative activity. To get an initial sense of how an adult-authored tutorial might compare to what we saw in our study, we selected a small sample of 16 adult-authored tutorials dedicated to children from DrawingNow [72] and DragoArt [71] that fell into similar drawing categories. Based on this small, curated sample, we observed both similarities and differences. For example, both groups of tutorials had approximately the same number of steps and comments. On the other hand, the adult-authored tutorials tended to follow a structured way of drawing, starting with a workable frame to make the drawing process easier, whereas our participants took a less structured approach, allowing their drawings to move in creative directions. The children in our study seemed to focus more on their drawings and generated the tutorial as a by-product of that activity to share the process with others; however, this might be an artifact of our study design, which did not involve a dedicated tutorial planning phase. The child-created tutorials also involved more straightforward drawings and simpler comments than the adult-created ones, which might be easier for younger children to follow. Future research should investigate these differences in a more structured and systematic way, as well as how children experience the tutorials. For example, it is possible that adult-authored tutorials are better at teaching drawing skills and specific techniques, whereas children's tutorials might be more relatable and inspire creativity.
294
+
295
+ We conducted our second study online due to the COVID-19 restrictions, which introduced some limitations. For example, participants were sometimes distracted by siblings, some experienced internet issues, and some parents had difficulties setting up the study. A recent study investigating online synchronous co-design with children during the pandemic also identified these factors as impacting children's interaction during study sessions [39]. While designing the online study, we had to be particularly mindful of study session length due to video conferencing fatigue. For example, we had originally intended to have children try a previously created tutorial to elicit grounded data on their perceptions, a task that we eliminated after pilot tests. Despite the difficulties, we found that participants in our studies were as engaged in the interviews as, or even more engaged than, they were in our initial lab-based study. We suspect that being in the familiar environment of their home helped make the children comfortable in expressing their thoughts.
296
+
297
+ Due to the study being online and the numerous COVID restrictions that were in place, we recruited via snowball sampling beginning with the authors' personal contacts [25]. Given that one of the paper authors is at an institution outside of their home country, this sampling technique resulted in participants from three different countries, which introduced diversity into our participant pool. On the other hand, diverse cultural backgrounds can impact interview responses [22] and could potentially have made our investigation less focused than it would have been with a more locally recruited population. While we did not see any noteworthy differences in how participants from the three different countries approached tutorial authoring and sharing, future work should investigate the generalizability of our findings to a larger sample of children both within and across cultures.
298
+
299
+ While building a child-centric sharing platform is beyond the scope of this work, the overlap between our participants' motivations for sharing their tutorials and prior results on adult tutorial sharing suggests opportunities to learn from prior adult-centric research on how to motivate sharing online. For example, positive voting and textual comments have been shown to encourage adults to contribute [13]. Future work can explore the extent to which these prior approaches could also encourage a range of children to share their digital art workflows online, or conversely whether new child-centric approaches are needed. In the future, a longitudinal study could enable us to investigate how the act of sharing one's art tutorials impacts a child's sense of self-accomplishment.
300
+
301
+ Future work could also explore alternative uses of this type of drawing-capture approach. For example, one child in our study proposed the idea of using the system to create an illustrated story with her friends. In addition to acting as a creative outlet, prior work in the domain of programming found that storytelling helped children learn the concepts [36]. Finally, it would be interesting to explore the generalizability of our approach to other creative activities that involve complex software, such as 3D modelling for child-oriented makerspaces.
302
+
303
+ § 8 CONCLUSION
304
+
305
+ In this paper, we present the participatory design and evaluation of a children's tutorial authoring system for digital art. Findings from our studies illustrate the potential for children to be engaged and motivated by this form of peer-based help and knowledge sharing, with potential applications to other domains (e.g., helping children create programming tutorials). Our approach is but one way to provide children with tools to share aspects of their creative process with others. Future work should explore new ways for children to communicate their digital art ideas and skills to their peers and connect with other children in positive online communities, and should also study the role of such communities in fostering important social skills.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/SDyj8aZBPrs/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,361 @@
1
+ # An Empirical Study on the Effect of Quick and Careful Labeling Styles in Image Annotation
2
+
3
+ Chia-Ming Chang*
4
+
5
+ The University of Tokyo
6
+
7
+ Xi Yang†
8
+
9
+ Jilin University
10
+
11
+ Takeo Igarashi‡
12
+
13
+ The University of Tokyo
14
+
15
+ ## Abstract
16
+
17
+ Assigning a label to difficult data requires a long time, particularly when non-expert annotators attempt to select the best possible label. However, there have been no detailed studies exploring label selection styles during annotation, even though the style used may affect the efficiency and quality of annotation. In this study, we explored the effects of labeling style on data annotation and machine learning. We conducted an empirical study comparing "quick labeling" and "careful labeling" styles in image-labeling tasks at three levels of difficulty. Additionally, we performed a machine learning experiment using labeled images from the two labeling styles. The results indicated that the quick and careful labeling styles each have advantages and disadvantages in terms of annotation efficiency, label quality, and machine learning performance. Specifically, careful labeling improves label accuracy when the task is moderately difficult, whereas it is time-consuming when the task is easy or extremely difficult.
18
+
19
+ Keywords: Cognitive Psychology, Labeling Style, Non-Expert Data Annotation, Data Collection, Machine Learning.
20
+
21
+ Index Terms: Computing methodologies~Artificial intelligence~Philosophical/theoretical foundations of artificial intelligence~Cognitive science
22
+
23
+ ## 1 INTRODUCTION
24
+
25
+ A large, high-quality dataset is necessary to obtain better machine learning results. However, it is expensive to recruit a large number of expert annotators (who have sufficient domain knowledge) to work on it. Recruiting non-expert annotators (typically crowd workers) is cheaper and easier; therefore, it is often the only viable option in practice [20, 21]. However, label quality is critical in non-expert data annotation (i.e., crowdsourcing tasks) [22, 23, 35]. Various annotation methods and tools have been introduced to address this issue [13, 14, 29, 32, 40]. However, there are no detailed studies examining this issue from a human perspective (i.e., cognitive psychology), such as investigating the effect of a label selection style during annotation (i.e., how a user makes a label decision). This is important because the labeling style an annotator uses could affect both annotation efficiency and label quality.
26
+
27
+ This study presents an empirical study comparing two labeling styles (quick labeling and careful labeling) for a manual image labeling task with three datasets under different levels of data difficulty: easy (MNIST), moderately difficult (Fashion-MNIST), and extremely difficult (Kuzushiji-MNIST). Thereafter, we conducted a machine learning experiment using the labeled images with quick labeling and careful labeling styles and compared the machine learning results (classification accuracy), as shown in Figure 1.
28
+
29
+ ![01963e66-e142-7255-9402-efb093b041aa_0_928_870_715_279_0.jpg](images/01963e66-e142-7255-9402-efb093b041aa_0_928_870_715_279_0.jpg)
30
+
31
+ Figure 1: Research Overview.
32
+
33
+ These results indicated that the labeling style affects annotation efficiency (task completion time) and label accuracy. The careful labeling style exhibited significantly higher accuracy than the quick labeling style when the dataset was moderately difficult. Moreover, the results of the machine learning experiment indicated that labeled images (training data) collected via the careful labeling style could achieve better machine learning performance (higher classification accuracy) than those collected via the quick labeling style. However, the careful labeling style did not bring benefits when the images were easy (i.e., label accuracy was already high with the quick labeling style) or extremely difficult (i.e., the careful labeling style could not significantly improve the label accuracy). We discuss the effects of the two labeling styles at the three levels of data difficulty (easy, moderately difficult, and extremely difficult), along with three factors that need to be carefully considered when selecting an appropriate labeling style for an annotation task. This study makes the following contributions:
34
+
35
+ - Identifying labeling styles as a variable in non-expert image annotation and machine learning.
36
+
37
+ - An empirical study comparing the quick and careful labeling styles, which demonstrates the benefits of the careful labeling style in an image annotation task.
38
+
39
+ - A machine learning experiment using images labeled under the two styles, which demonstrates the effects of labeling style on image classification accuracy.
40
+
41
+ ---
42
+
43
+ *e-mail: chiaming@ui.is.s.u-tokyo.ac.jp
44
+
45
+ †e-mail: yangxi21@jlu.edu.cn
46
+
47
+ ‡e-mail: takeo@acm.com
48
+
49
+ ---
50
+
51
+ ## 2 RELATED WORK
52
+
53
+ ### 2.1 Manual Data Annotation and Challenges
54
+
55
+ Manual data annotation is a basic practice in machine learning, and a large dataset is often necessary to improve machine learning results. Popular datasets such as ImageNet [28], AudioSet [30], and YouTube-8M [31] were manually labeled by human annotators. Because the manual data annotation process is extremely tedious and time-consuming, many studies have proposed tools for assisting manual data annotation. For instance, LabelMe [10] is a web-based image annotation tool that allows multiple annotators to label an image and share their labeling results instantly. ESP [11] is an image annotation tool combined with a computer game that provides an enjoyable labeling process for annotators, and TagATune [27] is an audio annotation tool that shares the same idea. VIA [12] is an annotation tool that allows annotators to define and describe spatial regions in images, audio segments, and video frames. iVAT [32] is a video annotation tool that supports manual, semiautomatic, and automatic video annotation.
56
+
57
+ Most of these tools provide supportive, efficient, and enjoyable systems for improving the tedious processes of manual data annotation. These tools generally assume that annotators have sufficient domain knowledge for labeling tasks. However, data annotation tasks often rely on non-expert annotators who lack sufficient domain knowledge because access to a sufficient number of expert annotators is limited and expensive [18, 19]. Labeling tasks can therefore be especially difficult for non-expert annotators, and the labeled data may contain numerous errors [23] [33] [36]. These annotation tools may not be able to address this issue when annotators are non-experts.
58
+
59
+ ### 2.2 Annotation Workflows for Improving the Label Quality
60
+
61
+ Many data annotation workflows have been proposed to improve the label quality, particularly for annotation tasks conducted through crowdsourcing. Revolt [13] is a collaborative crowdsourcing labeling workflow that applies concepts from expert annotation workflows (label-check-modification). This specific workflow can produce higher label quality than a conventional labeling workflow. Pairwise HITS [14] is a labeling workflow for quality estimation that allows annotators to compare a pair of labeled data and select the better one. Fang et al. [32] introduced a two-round workflow to improve the quality of crowdsourced image labeling. During the first round, the annotators select a label for the target images (several labels are assigned to each image). During the second round, the annotators are required to select the best label for each image (referring to the results from other annotators). Baba [29] introduced two types of labeling workflows (parallel and interactive) that allow multiple annotators to be involved in an annotation task in different ways to improve the label quality. In addition, various studies have used the concept of hierarchical classification in data annotation to increase the labeling efficiency and label quality [15] [16].
62
+
63
+ The main concept of these annotation workflows is the gathering of knowledge from multiple individuals. This is a typical workflow for improving the label quality by involving a group of annotators (non-experts or experts) who collaborate on a labeling task. In this study, we aim to explore "labeling styles" rather than "labeling workflows" during a data annotation task.
+
+ In addition to machine learning, data annotation has been used in various research areas. For instance, social scientists annotate data to discover interesting phenomena and establish theories [7] [9], and data annotation, as in a thematic analysis approach, is often used to analyze qualitative data [8]. Data annotation is not only a labeling process but also a cognitive process by which annotators view data, organize concepts, and make labeling decisions. Concept organization plays a crucial role in data annotation. Kulesza et al. [17] indicated that annotators often organize their conceptual similarity by observing more items during data annotation. Chang et al. [40] shared the same concept and proposed a spatial layout labeling interface for concept organization during the annotation process. We believe that the cognitive processes (i.e., how a labeling decision is made) in a manual data annotation task are important and could affect the label quality and cost.
64
+
65
+ ### 2.3 Intuitive and Systematic Decision-Making
66
+
67
+ Cognitive style (or thinking style) is a term used in cognitive psychology to describe ways in which individuals organize and process information, and finally make decisions [1] [2] [3]. Intuitive and systematic decision-making are two types of cognitive style. Intuitive decision-making is a type of associative thinking that relies on intuition, and systematic decision-making is a type of rule-based thinking that relies on logical evaluation [4]. Both cognitive styles have advantages and disadvantages. For instance, intuitive decision-making requires less time than systematic decision-making. However, systematic decision-making involves a deeper consideration process than intuitive decision-making.
68
+
69
+ These two cognitive styles have been used and discussed in several fields. Sagiv et al. [3] analyzed different intuitive and systematic cognitive styles used by art, accounting, and mathematics students. They established that different students prefer different cognitive styles in their class, and the cognitive style is consistent with an individual's personal attributes. Ma-Kellams and Lerner [5] compared intuitive and systematic cognitive styles to understand the feelings of other people, and they established that a systematic cognitive style can produce better empathic accuracy than an intuitive cognitive style. Hwang and Lee [6] explored the impact of intuitive and systematic cognitive styles of consumers on their visual attention patterns in online shopping environments. The results indicated that consumers pay different visual attention to webpages when they use different cognitive styles to make purchasing decisions. These studies have shown the effects of the cognitive styles used in various activities. In this study, we share a similar concept to explore the effects of cognitive styles (labeling styles) in manual data annotation, where annotators complete a labeling task (i.e., select an appropriate label for an image) through intuitive (quick labeling) and systematic decision-making (careful labeling).
70
+
71
+ ## 3 LABELING STYLE AND USER INTERFACE
72
+
73
+ We defined two labeling styles of manual image annotation based on the theory of cognitive psychology: quick labeling and careful labeling.
74
+
75
+ Quick labeling. Quick labeling refers to intuitive decision-making. Here, annotators select a label for an image as quickly as possible even when they lack confidence in the target image and label. They are strongly encouraged to select labels within a short time.
76
+
77
+ Careful labeling. Careful labeling refers to the concept of systematic decision-making. Here, annotators select an image label as carefully as possible, particularly when they are not confident of the target image and label. They are strongly encouraged to spend sufficient time before making a label decision.
78
+
79
+ A labeling system was developed to evaluate the quick labeling and careful labeling styles in a manual image annotation task. Figure 2 shows a screenshot of the image-labeling interface. The left side of the interface lists the labels (10 labels/categories in the Fashion-MNIST dataset), and the right side shows the target image. During the labeling task, the annotators were asked to select an appropriate label from the label list and apply it to the target image. After selecting a label for an image (by clicking on a label), the system automatically moves to the next image. The annotators were not allowed to return to previous images after selecting a label.
80
+
81
+ ![01963e66-e142-7255-9402-efb093b041aa_2_176_337_667_446_0.jpg](images/01963e66-e142-7255-9402-efb093b041aa_2_176_337_667_446_0.jpg)
82
+
83
+ Figure 2: Screenshot of the labeling interface.
84
+
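+ The paper does not include the system's source code; the sketch below is our minimal Python illustration (not the authors' implementation) of the interaction rules just described: one final label per image, automatic advancement, and no revisiting of earlier images. The optional time limit anticipates the 5 s reminder used later in the quick labeling condition (Section 4.4); all names are illustrative.
+
+ ```python
+ import time
+
+ # Fashion-MNIST category names, as listed on the left side of the interface.
+ LABELS = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
+           "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
+
+ def run_labeling_task(image_ids, time_limit_s=None):
+     """Collect one label per image: the system advances automatically
+     after each choice, and earlier images cannot be revisited."""
+     records = []
+     for image_id in image_ids:
+         start = time.monotonic()
+         choice = None
+         while choice not in range(len(LABELS)):
+             raw = input(f"Image {image_id}: label index 0-9? ")
+             choice = int(raw) if raw.isdigit() else None
+         elapsed = time.monotonic() - start
+         if time_limit_s is not None and elapsed > time_limit_s:
+             # The real interface shows a "Time's up!" alert while labeling;
+             # a console sketch can only note the overrun afterwards.
+             print("Time's up! (reminder only; the label is still accepted)")
+         records.append({"image": image_id, "label": LABELS[choice],
+                         "seconds": round(elapsed, 2)})
+     return records
+
+ # Quick labeling:   run_labeling_task(range(100), time_limit_s=5)
+ # Careful labeling: run_labeling_task(range(100))
+ ```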
85
+ ## 4 USER STUDY
86
+
87
+ A user study was conducted to compare the quick labeling and careful labeling styles applied to an image-labeling task. We aimed to observe the effects of the two labeling styles in image labeling tasks, specifically at the three different levels of data difficulty (easy, moderately difficult, and extremely difficult). We compared quick labeling and careful labeling styles in terms of label accuracy and labeling time for the given image labeling tasks.
88
+
89
+ ### 4.1 Apparatus
90
+
91
+ To control user study quality, specifically the use of quick labeling and careful labeling styles during image-labeling tasks, we outsourced the execution of the user study to a professional company, which asked their employees to participate in the user evaluation process as part of their job. The total cost was approximately $1,600: $640 for the quick labeling task ($53 per participant) and $960 for the careful labeling task ($80 per participant). During the user study, the participants were asked to sit in front of a desktop and complete the given image-labeling tasks (Figure 3).
92
+
93
+ ![01963e66-e142-7255-9402-efb093b041aa_2_208_1516_609_342_0.jpg](images/01963e66-e142-7255-9402-efb093b041aa_2_208_1516_609_342_0.jpg)
94
+
95
+ Figure 3: Photograph of the user study.
96
+
97
+ ### 4.2 Participants
98
+
99
+ Twenty-four participants (12 men and 12 women, 18-49 years) were invited by the company to participate in the user study. All the participants were Japanese (i.e., understood Japanese Hiragana letters). Most of the participants (n = 19) had no prior experience with data annotation, four had less than half a year of experience, and one had between half a year and a full year of experience.
100
+
101
+ ### 4.3 Dataset
102
+
103
+ Three datasets, MNIST [24], Fashion-MNIST [25], and Kuzushiji-MNIST [26], were used for labeling tasks in the user study. Each dataset contained 60,000 training images and 10,000 testing images in ten categories (labels). Figure 4 shows the ten categories for each dataset.
104
+
105
+ ![01963e66-e142-7255-9402-efb093b041aa_2_936_450_695_225_0.jpg](images/01963e66-e142-7255-9402-efb093b041aa_2_936_450_695_225_0.jpg)
106
+
107
+ Figure 4: Ten categories in the MNIST, F-MNIST, and K-MNIST datasets.
108
+
109
+ The datasets had varying levels of difficulty. MNIST is an "easy" dataset: the handwritten digits are not difficult for human users to recognize, even in the more ambiguous cases shown in Figure 5(a). Fashion-MNIST is a "moderately difficult" dataset because it contains some difficult (confusing) items (e.g., "Pullover" and "T-shirt/Top"). Figure 5(b) shows examples of easy and difficult (confusing) cases. Kuzushiji-MNIST is an "extremely difficult" dataset: the handwritten Japanese Hiragana letters are very difficult to recognize (even for Japanese users), particularly in difficult cases. Figure 5(c) shows examples of easy and difficult cases.
110
+
111
+ ![01963e66-e142-7255-9402-efb093b041aa_2_926_1133_719_182_0.jpg](images/01963e66-e142-7255-9402-efb093b041aa_2_926_1133_719_182_0.jpg)
112
+
113
+ Figure 5: Examples of easy (upper part) and difficult (lower part) cases from the MNIST, F-MNIST, and K-MNIST datasets.
114
+
115
+ We randomly selected 1,200 images (120 images per label) from each dataset and split them into 12 non-overlapping 100-image subsets (e.g., Datasets 1-12), each containing 10 images per label. The 12 subsets were used for the 12 participants in the quick labeling task and the 12 participants in the careful labeling task (the same subsets were used for both labeling styles).
116
+
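+ As a concrete illustration of this sampling scheme (our sketch, not the authors' code), the snippet below partitions a dataset into 12 non-overlapping 100-image subsets with 10 images of each label; `labels` is assumed to be an integer array of per-image class ids.
+
+ ```python
+ import numpy as np
+
+ def make_subsets(labels, n_subsets=12, per_label=10, seed=0):
+     """Return n_subsets disjoint index lists, each holding per_label
+     randomly chosen images of every label."""
+     rng = np.random.default_rng(seed)
+     subsets = [[] for _ in range(n_subsets)]
+     for label in np.unique(labels):
+         idx = np.flatnonzero(labels == label)
+         # Draw all images of this label for all subsets at once, without
+         # replacement, so no image appears in two subsets.
+         picked = rng.choice(idx, size=n_subsets * per_label, replace=False)
+         for s, chunk in enumerate(np.split(picked, n_subsets)):
+             subsets[s].extend(chunk.tolist())
+     return subsets
+
+ # Example with stand-in labels: 60,000 images across 10 classes, as in MNIST.
+ labels = np.repeat(np.arange(10), 6000)
+ subsets = make_subsets(labels)
+ assert len(subsets) == 12 and all(len(s) == 100 for s in subsets)
+ assert len(set().union(*subsets)) == 1200  # subsets do not overlap
+ ```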
117
+ ### 4.4 Task and Condition
118
+
119
+ The image labeling tasks involved labeling 300 images (100 images from each dataset) per participant. During the image labeling task, the participants were requested to select an appropriate label from a 10-category list (10 labels) for each image. A between-subjects design was used, in which 12 of the participants completed the labeling task using the quick labeling style, and the other 12 completed it using the careful labeling style.
120
+
121
+ Quick Labeling Conditions. The participants of the quick labeling task were provided with the following instructions for the labeling task:
122
+
123
+ "Please select a label for an image as quickly as possible. Here, if you are unsure about the target images and labels, please simply select the most appropriate one (i.e., make a guess based on your intuition). Please do not spend too much time considering this before selecting the label. We STRONGLY ENCOURAGE you to select a label for an image within $5\mathrm{\;s}$ . A $5\mathrm{\;s}$ -timer is provided on the labeling interface. After selecting a label for an image, the system will automatically show the next image. This means that you are not allowed to change your selected label."
124
+
125
+ Our labeling system for the quick labeling task includes a 5 s timer for each image. An alert message, "Time's up! Please select a label now", is displayed when the timer ends, as shown in Figure 6.
126
+
127
+ ![01963e66-e142-7255-9402-efb093b041aa_3_183_536_667_453_0.jpg](images/01963e66-e142-7255-9402-efb093b041aa_3_183_536_667_453_0.jpg)
128
+
129
+ Figure 6: Timer and alert message in the labeling system.
130
+
131
+ The system did not automatically move to the next image until the participants selected a label for the target image, even when the timer ended. The timer was designed as a reminder to the participants during annotation. In addition, participants were not allowed to return to the previous image after selecting a label for the image.
132
+
133
+ Careful Labeling Conditions. The participants in the careful labeling task were provided with the following instructions:
134
+
135
+ "Please select a label for an image as carefully as possible. There is no time limitation for labeling each image. You have sufficient time to think carefully before making a label decision, particularly when the target images are difficult or when you are not confident about the images and labels. We STRONGLY ENCOURAGE you to spend sufficient time (there is no time limitation) before making a label decision. After selecting a label for an image, the system will automatically show the next image. This means that you are not allowed to change your selected label."
136
+
137
+ The labeling system for the careful labeling task was the same as that used for the quick labeling task, except that no timer was shown on the labeling interface (Figure 2). The participants were allowed to spend as much time as necessary to select a label. In addition, the participants were not allowed to return to the previous image after selecting a label for an image. The participants were informed that they would receive 1.5 times the reward of a normal labeling task (i.e., the quick labeling task) for the careful labeling task.
138
+
139
+ ### 4.5 Procedure
140
+
141
+ The instructor provided an oral overview and detailed written instructions to the participants. The evaluation itself was composed of three parts (in order): instruction and trial (5-10 min), labeling tasks (20-45 min), and questionnaire (3-5 min). The entire evaluation process was completed within 40-60 min (depending on the labeling style). After providing instructions on the labeling interfaces and the given tasks, the participants were allowed to practice on a small labeling task (to label three images for each dataset that differed from the images used in the formal tasks) before starting the given image labeling tasks.
142
+
143
+ ### 4.6 Measurement
144
+
145
+ Our labeling system automatically recorded and measured the time and accuracy of the image-labeling tasks completed by the participants. The timer started when the participants clicked on "START" and stopped when they clicked on "FINISH." In addition, the system recorded the time spent by the participants for each image-labeling process. After the image labeling tasks, the participants were asked to answer a questionnaire regarding the labeling process. The questionnaire contained three Likert-scale questions for each labeling style (Section 6.4).
146
+
147
+ ## 5 MACHINE LEARNING EXPERIMENT
148
+
149
+ In the user study, 7200 labeled images were collected (2400 for each dataset; 1200 per labeling style per dataset). The training dataset contained errors made by the participants (i.e., the accuracy rate of the training data was not 100%). We used these labels as the training data in a machine learning experiment to evaluate the effects of labeling styles (data collected via the quick labeling and careful labeling styles) on machine learning accuracy (image classification). Three common machine learning algorithms (logistic regression, K-nearest neighbors, and support vector machine) were selected for the case study in the machine learning experiment. We did not use more advanced techniques (e.g., deep learning) because the training dataset was too small and our goal was not to pursue high machine learning accuracy but to compare the two labeling styles. The testing data used in the machine learning experiment comprised 30,000 images (10,000 for each dataset), which were different from the training dataset (7200 labeled images).
150
+
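+ The paper names the three classifiers but not their implementation. The sketch below shows one plausible scikit-learn setup matching the experiment's shape, training each model on the human-provided (possibly erroneous) labels and scoring it against ground-truth test labels; the random arrays are placeholders for the study's images, not the study data.
+
+ ```python
+ import numpy as np
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.neighbors import KNeighborsClassifier
+ from sklearn.svm import SVC
+
+ def evaluate_labeling_style(X_train, y_human, X_test, y_test):
+     """Train on noisy human labels; report accuracy on clean test labels."""
+     models = {
+         "logistic regression": LogisticRegression(max_iter=1000),
+         "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
+         "support vector machine": SVC(),
+     }
+     return {name: model.fit(X_train, y_human).score(X_test, y_test)
+             for name, model in models.items()}
+
+ # Placeholder data with the study's shapes: 1,200 noisy-labeled training
+ # images and 10,000 clean test images, each flattened to 28x28 = 784 pixels.
+ rng = np.random.default_rng(0)
+ X_train, y_human = rng.random((1200, 784)), rng.integers(0, 10, 1200)
+ X_test, y_test = rng.random((10000, 784)), rng.integers(0, 10, 10000)
+ print(evaluate_labeling_style(X_train, y_human, X_test, y_test))
+ ```
+
+ One such run per labeling style (the 1,200 quick-labeled images vs. the 1,200 carefully labeled images, for each dataset) yields the per-style accuracies compared in Section 6.5.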
151
+ ## 6 RESULTS
152
+
153
+ ### 6.1 Task Completion Time
154
+
155
+ Figure 7 shows the task completion times for the different labeling styles and datasets. The results from the quick labeling task indicated that the participants spent an average of 4 min 23 s, 5 min 6 s, and 5 min 37 s to label the 100 images in the MNIST, Fashion-MNIST, and Kuzushiji-MNIST datasets, respectively. In the careful labeling task, the participants spent an average of 4 min 42 s, 6 min 58 s, and 10 min 13 s to label the 100 images. The results of an unpaired t-test on the task completion time indicated that the difference between the quick and careful labeling styles was insignificant (p > 0.05) for the MNIST dataset, whereas there were significant differences (p < 0.01) for the Fashion-MNIST and Kuzushiji-MNIST datasets. This indicates that the careful labeling style requires a longer time than the quick labeling style to complete labeling tasks when the images are moderately or extremely difficult. However, the task completion time was comparable between the quick and careful labeling styles when the images were easy.
156
+
157
+ ![01963e66-e142-7255-9402-efb093b041aa_4_171_145_714_423_0.jpg](images/01963e66-e142-7255-9402-efb093b041aa_4_171_145_714_423_0.jpg)
158
+
159
+ Figure 7: Task completion time. MNIST. LQ: mean = 4.23; SD = 0.73; LC: mean = 4.42; SD = 0.61. F-MNIST. LQ: mean = 5.06; SD = 0.76; LC: mean = 6.58; SD = 1.13. K-MNIST. LQ: mean = 5.37; SD = 0.90; LC: mean = 10.13; SD = 3.55.
160
+
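+ For readers who want to reproduce the statistics: the between-style comparisons in Sections 6.1-6.2 use an unpaired (two-sample) t-test across the 12 participants per condition, whereas the first-half vs. second-half comparisons in Section 6.3 are paired. A minimal scipy sketch with made-up numbers (not the study data):
+
+ ```python
+ from scipy.stats import ttest_ind, ttest_rel
+
+ # Unpaired test: per-participant completion times (minutes) under the two
+ # styles, 12 participants per condition (illustrative values only).
+ quick   = [5.1, 5.6, 4.9, 5.3, 5.8, 5.2, 5.5, 5.0, 5.7, 5.4, 5.9, 5.2]
+ careful = [9.8, 10.5, 9.1, 11.2, 10.0, 9.7, 10.9, 10.4, 9.5, 10.8, 10.1, 10.6]
+ t, p = ttest_ind(quick, careful)
+ print(f"between styles: t = {t:.2f}, p = {p:.4f}")
+
+ # Paired test: the same participants' first-half vs. second-half times.
+ first_half  = [2.3, 2.5, 2.2, 2.6, 2.4, 2.3, 2.7, 2.5, 2.2, 2.6, 2.4, 2.5]
+ second_half = [3.0, 3.2, 2.9, 3.3, 3.1, 2.8, 3.4, 3.2, 2.9, 3.3, 3.0, 3.1]
+ t, p = ttest_rel(first_half, second_half)
+ print(f"within participants: t = {t:.2f}, p = {p:.4f}")
+ ```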
161
+ ### 6.2 Annotation Accuracy
162
+
163
+ Figure 8 shows the accuracy of the labels given by the participants in the quick labeling and careful labeling styles. The quick labeling task yielded accuracies of 97.58%, 72.08%, and 58.08% for the three datasets, and the careful labeling task yielded accuracies of 97.58%, 76.83%, and 60.08%. An unpaired t-test on accuracy showed that the difference was insignificant (p > 0.05) for the MNIST and Kuzushiji-MNIST datasets, whereas there was a significant difference (p < 0.05) between the two labeling styles for the Fashion-MNIST dataset. This indicates that the careful labeling style can help non-expert annotators select labels more accurately when the images are moderately difficult (Fashion-MNIST), whereas no clear benefit was observed when the images were easy (MNIST) or extremely difficult (Kuzushiji-MNIST). In the latter cases, the extra time spent on careful labeling does not pay off.
164
+
165
+ ![01963e66-e142-7255-9402-efb093b041aa_4_172_1236_692_328_0.jpg](images/01963e66-e142-7255-9402-efb093b041aa_4_172_1236_692_328_0.jpg)
166
+
167
+ Figure 8: Accuracy of labeling tasks. MNIST. LQ: mean = 97.58; SD = 1.44; LC: mean = 97.58; SD = 1.56. F-MNIST. LQ: mean = 72.08; SD = 4.17; LC: mean = 76.83; SD = 6.64. K-MNIST. LQ: mean = 58.08; SD = 6.53; LC: mean = 60.08; SD = 7.82.
168
+
169
+ ### 6.3 Temporal Effect
170
+
171
+ #### Task Completion Time
172
+
173
+ Figure 9 shows the average time for the labeling process for the first half (1-50 images) and second half (51-100 images) of the three datasets using the quick labeling style. The results indicated that the participants spent an average of 2 min 6 s and 2 min 17 s, 2 min 36 s and 2 min 30 s, and 2 min 29 s and 3 min 8 s to complete the first and second halves of the MNIST, Fashion-MNIST, and Kuzushiji-MNIST datasets, respectively. The results of the paired t-test indicated that the difference between the first and second halves of the labeling process was not significant (p > 0.05) for the MNIST and Fashion-MNIST datasets, but was significant (p < 0.05) for the Kuzushiji-MNIST dataset. This indicates that, with the quick labeling style, a temporal effect appears only at the highest level of data difficulty. Interestingly, in that case the participants spent a longer time completing the second half of the image-labeling task.
174
+
175
+ ![01963e66-e142-7255-9402-efb093b041aa_4_946_293_612_308_0.jpg](images/01963e66-e142-7255-9402-efb093b041aa_4_946_293_612_308_0.jpg)
176
+
177
+ Figure 9: Average time of the first and second 50 images in the quick labeling style. MNIST (1-50 images): mean = 2.06; SD = 0.38; MNIST (51-100 images): mean = 2.17; SD = 0.35. F-MNIST (1-50 images): mean = 2.36; SD = 0.28; F-MNIST (51-100 images): mean = 2.30; SD = 0.48. K-MNIST (1-50 images): mean = 2.29; SD = 0.21; K-MNIST (51-100 images): mean = 3.08; SD = 0.27.
178
+
179
+ Figure 10 shows the average time for the labeling process for the first half (1-50 images) and second half (51-100 images) using the careful labeling style in the different datasets. The results indicated that the participants spent an average of 2 min 24 s and 2 min 18 s, 4 min 5 s and 2 min 53 s, and 4 min 21 s and 5 min 52 s to complete the first and second halves of the MNIST, Fashion-MNIST, and Kuzushiji-MNIST datasets, respectively. The results of the paired t-test indicated that the difference between the first and second halves of the labeling process was not significant (p > 0.05) for the MNIST dataset, but was significant (p < 0.05) for the Fashion-MNIST and Kuzushiji-MNIST datasets. This indicates that there is no temporal effect with the careful labeling style when the images are easy. However, a temporal effect was observed when the images were moderately difficult: the participants significantly increased their labeling speed in the second half of the image-labeling task. In addition, the participants spent more time on the second half when the images were extremely difficult, as with the quick labeling style.
180
+
181
+ ![01963e66-e142-7255-9402-efb093b041aa_4_947_1418_621_294_0.jpg](images/01963e66-e142-7255-9402-efb093b041aa_4_947_1418_621_294_0.jpg)
182
+
183
+ Figure 10: Average time of the first and second 50 images in the careful labeling style. MNIST (1-50 images): mean = 2.24; SD = 0.33; MNIST (51-100 images): mean = 2.18; SD = 0.29. F-MNIST (1-50 images): mean = 4.05; SD = 0.57; F-MNIST (51-100 images): mean = 2.53; SD = 0.38. K-MNIST (1-50 images): mean = 4.21; SD = 1.02; K-MNIST (51-100 images): mean = 5.52; SD = 0.73.
184
+
185
+ #### Annotation Accuracy
186
+
187
+ Figure 11 shows the accuracy of the labeling process in the first half (1-50 images) and the second half (51-100 images) with the quick labeling style. The results indicated accuracy rates of 97.5% and 97.67% in the first and second halves of the MNIST dataset, 73.17% and 71% in the Fashion-MNIST dataset, and 56.67% and 59.5% in the Kuzushiji-MNIST dataset. The results of the paired t-test indicated that the difference between the first and second halves of the labeling process was not significant (p > 0.05) for any dataset. This indicates that, in terms of accuracy, there is no temporal effect at any level of data difficulty when using the quick labeling style: the label accuracy did not significantly change over the course of the task.
188
+
189
+ ![01963e66-e142-7255-9402-efb093b041aa_5_173_418_597_307_0.jpg](images/01963e66-e142-7255-9402-efb093b041aa_5_173_418_597_307_0.jpg)
190
+
191
+ Figure 11: Accuracy of the first and second 50 images in the quick labeling style. MNIST (1-50 images): mean = 97.50; SD = 1.27; MNIST (51-100 images): mean = 97.67; SD = 1.53. F-MNIST (1-50 images): mean = 73.17; SD = 3.93; F-MNIST (51-100 images): mean = 71; SD = 4.31. K-MNIST (1-50 images): mean = 56.67; SD = 6.95; K-MNIST (51-100 images): mean = 59.50; SD = 5.47.
192
+
193
+ Figure 12 shows the accuracy of the labeling process in the first half (1-50 images) and the second half (51-100 images) with the careful labeling style. The results indicated accuracy rates of 97.17% and 98% in the first and second halves of the MNIST dataset, 73.67% and 80% in the Fashion-MNIST dataset, and 62.83% and 57.33% in the Kuzushiji-MNIST dataset. The results of the paired t-test indicated that the difference between the first and second halves of the labeling process was not significant (p > 0.05) for the MNIST and Kuzushiji-MNIST datasets. However, the difference was significant (p < 0.05) for the Fashion-MNIST dataset. This indicates that, with the careful labeling style, there is no temporal effect when the images are easy or extremely difficult, but there is one when the images are moderately difficult: the participants significantly improved their label accuracy in the second half of the image-labeling task.
194
+
195
+ ![01963e66-e142-7255-9402-efb093b041aa_5_173_1454_597_311_0.jpg](images/01963e66-e142-7255-9402-efb093b041aa_5_173_1454_597_311_0.jpg)
196
+
197
+ Figure 12: Accuracy of the first and second 50 images in the careful labeling style. MNIST (1-50 images): mean = 97.17; SD = 1.83; MNIST (51-100 images): mean = 98; SD = 1.35. F-MNIST (1-50 images): mean = 73.67; SD = 4.95; F-MNIST (51-100 images): mean = 80; SD = 6.58. K-MNIST (1-50 images): mean = 62.83; SD = 8.13; K-MNIST (51-100 images): mean = 57.33; SD = 7.22.
198
+
199
+ In summary, the careful labeling style has a temporal effect during the labeling process in the task completion time and accuracy rate (i.e., reduced time and increased accuracy) only when the images are moderately difficult. When the images are too easy, there is no temporal effect. Interestingly, if the images are too difficult, the task completion time is longer in the second half.
200
+
201
+ ### 6.4 Questionnaire
202
+
203
+ Figure 13 shows how confident the participants felt in the given image labeling tasks. For the MNIST dataset, most of the participants felt extremely confident or confident when selecting a label for an image with either the quick labeling (n = 10) or careful labeling (n = 9) style, whereas none of the participants felt apprehensive. For the Fashion-MNIST dataset, only one participant felt extremely confident when selecting a label with the quick labeling style, and only two participants felt confident when selecting a label with the careful labeling style. For the Kuzushiji-MNIST dataset, no participants felt confident or extremely confident with either labeling style. More participants using the careful labeling style felt apprehensive (n = 4) or extremely apprehensive (n = 7) in comparison to the participants using the quick labeling style (apprehensive, n = 3; extremely apprehensive, n = 6). This indicates that the labeling styles do not affect the participants' subjective confidence during annotation. However, the ambiguity of the data does affect the participants' confidence in selecting a label for an image during annotation.
204
+
205
+ ![01963e66-e142-7255-9402-efb093b041aa_5_950_979_723_433_0.jpg](images/01963e66-e142-7255-9402-efb093b041aa_5_950_979_723_433_0.jpg)
206
+
207
+ Figure 13: Confidence of the participants when selecting a label from the MNIST, F-MNIST, and K-MNIST datasets.
208
+
209
+ ### 6.5 Results of Machine Learning Experiment
210
+
211
+ Figure 14 presents the machine learning results (i.e., image classification accuracy) for the three datasets with the quick and careful labeling styles.
212
+
213
+ ![01963e66-e142-7255-9402-efb093b041aa_5_949_1650_724_331_0.jpg](images/01963e66-e142-7255-9402-efb093b041aa_5_949_1650_724_331_0.jpg)
214
+
215
+ Figure 14: Accuracy of machine learning models in the MNIST, F-MNIST, and K-MNIST datasets.
216
+
217
+ In the data analysis, we did not compute the accuracy for each participant because the dataset was too small. We combined all annotations, trained the model, and measured the performance of the model.
218
+
219
+ MNIST Dataset. The accuracy of the training data was 97.58% for both the quick labeling and careful labeling styles (Figure 8). Consistent with this, the machine learning performance (accuracy) showed almost no difference between the two labeling styles (logistic regression: LQ = 88.63%, LC = 88.57%; K-nearest neighbors: LQ = 87.81%, LC = 87.71%; support vector machine: LQ = 92.44%, LC = 92.53%). These results were not surprising because the label accuracy of the training data was the same.
220
+
221
+ Fashion-MNIST Dataset. The accuracy of the training data was 72.08% for the quick labeling style and 76.83% for the careful labeling style (Figure 8), a significant difference between the two labeling styles. Correspondingly, the machine learning performance (accuracy) differed between the two labeling styles (logistic regression: LQ = 66.51%, LC = 68.71%; K-nearest neighbors: LQ = 65.99%, LC = 70.11%; support vector machine: LQ = 67.91%, LC = 72.66%). The differences were between 2.2% and 4.12%. Machine learning algorithms often work well even if the labels given to difficult data contain errors; nevertheless, our results indicate that improving the label accuracy via the careful labeling style can also improve the accuracy of machine learning.
222
+
223
+ Kuzushiji-MNIST Dataset. The accuracy of the training data was 58.08% for the quick labeling style and 60.08% for the careful labeling style (Figure 8). This small difference of 2% between the two labeling styles was not significant according to the t-test analysis. Correspondingly, the machine learning performance (accuracy) showed negligible differences between the two labeling styles (logistic regression: LQ = 37.58%, LC = 37.42%; K-nearest neighbors: LQ = 44.17%, LC = 44.23%; support vector machine: LQ = 43.36%, LC = 43.47%). This indicates that a small difference in the label accuracy of the training data does not affect the machine learning performance.
224
+
225
+ ## 7 DISCUSSION
226
+
227
+ ### 7.1 Effects of Labeling Styles in Annotation Efficiency and Label Quality
228
+
229
+ In psychology, decision-making is a cognitive process in which an individual's cognitive style affects the decision-making process as well as the decision outcomes and quality [37] [38] [39]. In manual data annotation, a labeling style can be considered a decision-making process (i.e., selecting an appropriate label for an image) that may affect the outcomes and quality of the labels. In general, the quick labeling style requires less time to complete an annotation task, whereas the careful labeling style requires more time but can result in higher quality data. However, this depends on the data difficulty of the annotation task. Our results indicate that there was no significant difference between the quick and careful labeling styles in task completion time or label quality when the data were easy (i.e., MNIST). However, differences emerge between the two labeling styles when the data become difficult. For instance, the careful labeling style requires more time than the quick labeling style to complete a labeling task that contains moderately difficult images, but it significantly improves the label quality. However, if a labeling task contains extremely difficult images, the careful labeling style cannot improve the label quality and requires a longer time to complete the labeling task. These results indicate that the labeling style affects annotation efficiency (task completion time) and label quality (accuracy rate) in non-expert data annotation, with the effects depending on the data difficulty of the annotation task. In addition, the questionnaire results indicated that the annotators' subjective confidence during annotation was not affected by the labeling style in any of the labeling tasks. However, the annotators' confidence was affected by data ambiguity (higher confidence for less difficult data and lower confidence for more difficult data).
230
+
231
+ ### 7.2 Temporal Effects in the Quick and Careful Labeling Tasks
232
+
233
+ The temporal effect has been used to analyze task performance during image-labeling tasks [16]. It describes the ways in which people change their behavior over time, which is a method for analyzing the efficiency of an activity or study [42, 43, 44]. Our results indicated a significant temporal effect (p < 0.05) during the labeling process with the careful labeling style when the images were moderately difficult (Fashion-MNIST): the participants reduced their task completion time in the second half of the image labeling task. This indicates that the careful labeling style not only improves the label quality but also produces a temporal effect during annotation when a labeling task contains moderately difficult images. Furthermore, there was a significant temporal effect (p < 0.05) during annotation with the careful labeling style when the images were extremely difficult (Kuzushiji-MNIST); however, in this case the participants spent a longer time completing the second half of the image-labeling task than the first half. In addition, there was no significant temporal effect (p > 0.05) during annotation with either the quick or the careful labeling style when the images were easy (MNIST). The reason for the temporal effect has not been clearly demonstrated.
234
+
235
+ ### 7.3 Effects of Labeling Styles on Machine Learning Performance
236
+
237
+ Data quality plays a critical role in machine learning. Our user study demonstrated that the careful labeling style can significantly improve the label quality of an image labeling task containing moderately difficult images, and slightly improves the label quality when the task contains extremely difficult images. The machine learning experiment showed similar results. Labeled data collected via the careful labeling style can result in better machine learning performance (higher accuracy) than data collected via the quick labeling style when the images are moderately difficult (Fashion-MNIST). However, the machine learning performance showed almost no difference between the two labeling styles when the images were easy (MNIST), and only small differences when the data were extremely difficult (Kuzushiji-MNIST). Machine learning algorithms often work well even if the labels given to difficult data contain errors, so different label qualities may yield no difference in machine learning accuracy. In such cases, the labeling style may not be a variable in machine learning (only in manual data annotation). However, our results indicated that the improvement in label quality via the careful labeling style could increase machine learning accuracy when the data are moderately ambiguous. This finding indicates that the careful labeling style can benefit both data annotation and machine learning, depending on the data's ambiguity. Our machine learning experiment used basic algorithms, which was sufficient to show that labeling style is a variable in machine learning performance; an experiment with advanced techniques (e.g., deep learning with a large-scale dataset) remains future work.
238
+
239
+ ### 7.4 Three Factors for Selecting an Appropriate Labeling Style for an Annotation Task
240
+
241
+ Our study and machine learning experiment have shown that the two labeling styles have their own advantages and disadvantages for different annotation tasks. For instance, a careful labeling task is costlier (requires a longer time to complete) than a quick labeling task, yet it cannot guarantee an improvement in label quality at all levels of data difficulty. Therefore, it is important to choose a reasonable labeling style for an annotation task; otherwise, time and money may be wasted without a clear improvement. Here, we discuss three factors that should be carefully considered when selecting a labeling style for an annotation task.
242
+
243
+ #### (1) Data Difficulty
244
+
245
+ Our results indicated that data difficulty is a crucial factor affecting annotation results under different labeling styles. The careful labeling style only shows a benefit (i.e., improves the label quality) when the data is moderately difficult, whereas the quick labeling style shows its benefit (i.e., requires less time) when the data is easy. Based on the results, we suggest that the quick labeling style is a reasonable choice when conducting an easy annotation task. However, when an annotation task contains difficult data, a careful labeling style can be worthwhile.
246
+
247
+ #### (2) Annotator Type and Task Conditions
248
+
249
+ Although this study only focused on non-expert annotators, we believe that an annotator's experience (i.e., domain knowledge in the given task) is an important factor that may significantly affect the annotation results under different labeling styles. For instance, the data difficulty depends on individual experience and subjective impressions. We therefore suggest that qualification is important and necessary when recruiting annotators for annotation tasks with different labeling styles. In addition, the task condition (e.g., crowd tasks versus in-person tasks) should be considered when deciding on the labeling style for an annotation task. Crowdsourcing, such as via Amazon Mechanical Turk [41], is a popular approach for conducting annotation tasks. However, the quality of crowd tasks is a critical issue that has been discussed for many years [22, 35], and we believe this issue also arises when different labeling styles are used in a crowd task. Therefore, we recruited participants (annotators) for the user study via a professional company, which helped us explore the effects of labeling styles more precisely (i.e., to prove the research concept). Nevertheless, crowdsourcing remains an indispensable approach for conducting annotation tasks, and we suggest that an online workflow be carefully designed to control annotation quality even under different labeling styles.
250
+
251
+ #### (3) Instruction for Implementing the Labeling Style
252
+
253
+ After deciding on the labeling style, it is important to ensure that annotators can follow and implement it precisely. In this study, an instructor provided textual and oral instructions for each labeling style before the formal task (including a trial), but this is only practical for a user study. We believe that instructions alone are not sufficient in a realistic annotation task, because some annotators may be inherently careful in following an assigned labeling style whereas others may be inherently sloppy. To avoid this kind of bias, we suggest designing a specific labeling workflow (or labeling interface) that affords different labeling styles, or that "forces" annotators to follow specific steps. For instance, when an annotation task is conducted via the careful labeling style, the workflow could require annotators to double-check, or to spend a certain amount of time, before making a label decision; a minimal sketch of such a workflow follows.
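+ The sketch below shows one way such a constraint could look: a console-based labeling loop that rejects a decision made faster than a minimum dwell time and asks for an explicit confirmation. It is a hypothetical stand-in for a real labeling interface; the function name, the 5-second threshold, and the console interaction are all our own assumptions.
+
+ ```python
+ # Enforcing a "careful" workflow: minimum dwell time plus a double-check step.
+ # Console stand-in for a GUI labeling tool; all names here are hypothetical.
+ import time
+
+ MIN_DWELL_SECONDS = 5.0  # assumed threshold; tune per task difficulty
+
+ def careful_label(image_id: str, labels: list) -> str:
+     while True:
+         start = time.monotonic()
+         choice = input(f"[{image_id}] choose one of {labels}: ").strip()
+         elapsed = time.monotonic() - start
+         if choice not in labels:
+             print("unknown label, try again")
+             continue
+         if elapsed < MIN_DWELL_SECONDS:
+             print(f"answered in {elapsed:.1f} s; please look again before deciding")
+             continue
+         if input(f"confirm '{choice}'? (y/n): ").strip().lower() == "y":
+             return choice  # accepted only after dwell time and confirmation
+ ```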
254
+
255
+ ## 8 LIMITATIONS AND FUTURE WORK
256
+
257
+ One limitation of this study is that the training data in the machine learning experiment was small (1200 labeled images collected via each labeling style for each dataset). This is the main reason why the machine learning accuracy in our study is considerably lower than the published benchmarks [24, 25, 26]. Another limitation is that we only used basic machine learning algorithms for training and testing the collected data. However, the main purpose of this study was not to pursue high machine learning accuracy but to focus on the effects of the labeling styles. Our results indicate that the careful labeling style can improve label accuracy in manual data annotation as well as increase machine learning accuracy. We believe that labeling styles might have an even greater effect on large-scale labeling tasks and advanced machine learning techniques (e.g., deep learning). In the future, we plan to conduct a large-scale user study via crowdsourcing and to test more machine learning algorithms.
258
+
259
+ Another limitation concerns the careful labeling style used in this study. Under the current instruction (design), the participants were asked to select a label for an image as carefully as possible without a time limitation, which may be insufficient for a precise careful-labeling task; a more specific condition or workflow (e.g., allowing modification or forcing a double-check) may be needed for further investigation. In addition to the labeling style, the level of data difficulty should be carefully defined, for instance, how to define "too easy" and "too difficult" data for each annotator, because different annotators may perceive the same data differently. In the future, we will explore more details of the careful labeling style; for instance, the cause of the temporal effect and the effect of compensation were not clearly demonstrated in this study. Another interesting possibility is the dynamic control of labeling styles during annotation: if a system can judge the difficulty of each data item before annotation, it might be possible to ask an annotator to use an appropriate labeling style (e.g., using the careful labeling style only for moderately difficult data), as sketched below.
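+ One conceivable way to judge per-item difficulty is the predictive uncertainty of a model trained on already-labeled seed data. The sketch below routes items with high predictive entropy to the careful labeling style; the seed model, threshold, and entropy criterion are illustrative assumptions, not a component of this study.
+
+ ```python
+ # Dynamic style routing by model uncertainty (illustrative sketch).
+ import numpy as np
+ from sklearn.datasets import load_digits
+ from sklearn.linear_model import LogisticRegression
+
+ X, y = load_digits(return_X_y=True)
+ seed_model = LogisticRegression(max_iter=2000).fit(X[:500], y[:500])
+
+ probs = seed_model.predict_proba(X[500:520])          # unlabeled candidates
+ entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
+
+ THRESHOLD = 0.5  # assumed cut-off between "easy" and "difficult" items
+ for i, h in enumerate(entropy):
+     style = "careful" if h > THRESHOLD else "quick"
+     print(f"item {i}: entropy = {h:.2f} -> {style} labeling")
+ ```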
260
+
261
+ ## 9 CONCLUSION
262
+
263
+ In this study, we investigated the effects of labeling style on non-expert data annotation and machine learning. We conducted a user study comparing the quick and careful labeling styles in a manual image annotation task, and we used the labeled data (as training data) in a machine learning experiment. Our results indicated that the labeling style is a variable in both the data annotation process and machine learning performance. The careful labeling style improves label accuracy only when the task is moderately difficult, whereas it only increases the cost without improving accuracy when the task is easy or extremely difficult. These findings provide insights for selecting an appropriate labeling style for an annotation task and could offer an alternative solution for improving non-expert annotations.
264
+
265
+ ## ACKNOWLEDGMENTS
266
+
267
+ This work was supported by JST CREST Grant Number JPMJCR17A1 and JST ACT-X Grant Number JPMJAX21AG, Japan.
268
+
269
+ ## REFERENCES
270
+
271
+ [1] Maria Kozhevnikov. 2007. Cognitive styles in the context of modern psychology: toward an integrated framework of cognitive style. Psychological Bulletin 133, no. 3: 464. DOI: http://dx.doi.org/10.1037/0033-2909.133.3.464
274
+
275
+ [2] Lynna J. Ausburn, and Floyd B. Ausburn. 1978. Cognitive styles: Some information and implications for instructional design. ECTJ 26, no. 4: 337-354. https://www.jstor.org/stable/30219783
276
+
277
+ [3] Lilach Sagiv, Adi Amit, Danit Ein-Gar, and Sharon Arieli. 2014. Not all great minds think alike: Systematic and intuitive cognitive styles. Journal of Personality 82, no. 5: 402-417. DOI: http://dx.doi.org/10.1111/jopy.12071
278
+
279
+ [4] Eliot R. Smith, and Jamie DeCoster. 2000. Dual-process models in social and cognitive psychology: Conceptual integration and links to underlying memory systems. Personality and Social Psychology Review 4, no. 2: 108-131. DOI: http://dx.doi.org/10.1207/S15327957PSPR0402_01
280
+
281
+ [5] Christine Ma-Kellams, and Jennifer Lerner. 2016. Trust your gut or think carefully? Examining whether an intuitive, versus a systematic, mode of thought produces greater empathic accuracy. Journal of Personality and Social Psychology 111, no. 5: 674. DOI: http://dx.doi.org/10.1037/pspi0000063
282
+
283
+ [6] Yoon Min Hwang, and Kun Chang Lee. 2015. Exploring the impacts of consumers' systematic and intuitive cognitive styles on their visual attention patterns in online shopping environments: emphasis on eye-tracking method. International Journal of Multimedia and Ubiquitous Engineering 10: 175-182. DOI: http://dx.doi.org/10.14257/ijmue.2015.10.12.18
284
+
285
+ [7] Juliet Corbin, and Anselm Strauss. 2014. Basics of qualitative research: Techniques and procedures for developing grounded theory. Sage publications.
286
+
287
+ [8] Virginia Braun, and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, no. 2: 77-101. DOI: https://doi.org/10.1191/1478088706qp063oa
288
+
289
+ [9] Kathleen M. MacQueen, Eleanor McLellan, Kelly Kay, and Bobby Milstein. 1998. Codebook development for team-based qualitative analysis. CAM Journal 10, no. 2: 31-36. DOI: https://doi.org/10.1177/1525822X980100020301
290
+
291
+ [10] Bryan C. Russell, Antonio Torralba, Kevin P. Murphy, and William T. Freeman. 2008. LabelMe: A Database and Web-Based Tool for Image Annotation. International journal of computer vision 77, no. 1-3: 157-173. DOI: http://dx.doi.org/10.1007/s11263-007-0090-8
292
+
293
+ [11] Luis Von Ahn, and Laura Dabbish. 2004. Labeling Images with a Computer Game. In Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 319-326. DOI: http://dx.doi.org/10.1145/985692.985733
294
+
295
+ [12] Abhishek Dutta, and Andrew Zisserman. 2019. The VIA Annotation Software for Images, Audio and Video. In Proceedings of the 27th ACM International Conference on Multimedia, pp. 2276-2279. DOI: http://dx.doi.org/10.1145/3343031.3350535
296
+
297
+ [13] Joseph Chee Chang, Saleema Amershi, and Ece Kamar. 2017. Revolt: Collaborative Crowdsourcing for Labeling Machine Learning Datasets. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 2334-2346. DOI: http://dx.doi.org/10.1145/3025453.302604
298
+
299
+ [14] Takeru Sunahase, Yukino Baba, and Hisashi Kashima. 2017. Pairwise HITS: Quality Estimation from Pairwise Comparisons in Creator-Evaluator Crowdsourcing Process. In Thirty-First AAAI Conference on Artificial Intelligence. https://dl.acm.org/doi/abs/10.5555/3298239.3298383
300
+
301
+ [15] Naoki Otani, Yukino Baba, and Hisashi Kashima. 2015. Quality Control for Crowdsourced Hierarchical Classification. In 2015 IEEE International Conference on Data Mining, pp. 937-942. DOI: http://dx.doi.org/10.1109/ICDM.2015.83
304
+
305
+ [16] Chia-Ming Chang, Siddharth Deepak Mishra, and Takeo Igarashi. 2019. A Hierarchical Task Assignment for Manual Image Labeling. In 2019 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), pp. 139-143. DOI: http://dx.doi.org/10.1109/VLHCC.2019.8818828
306
+
307
+ [17] Todd Kulesza, Saleema Amershi, Rich Caruana, Danyel Fisher, and Denis Charles. 2014. Structured Labeling for Facilitating Concept Evolution in Machine Learning. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 3075-3084. DOI: http://dx.doi.org/10.1145/2556288.2557238
308
+
309
+ [18] Stefanie Nowak and Stefan Rüger. 2010. How Reliable are Annotations via Crowdsourcing: A Study about Inter-Annotator Agreement for Multi-Label Image Annotation. In Proceedings of the international conference on Multimedia information retrieval, pp. 557-566. DOI: http://dx.doi.org/10.1145/1743384.174347
310
+
311
+ [19] Roland Kwitt, Sebastian Hegenbart, Nikhil Rasiwasia, Andreas Vécsei, and Andreas Uhl. 2014. Do We Need Annotation Experts? A Case Study in Celiac Disease Classification. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 454-461. Springer, Cham. DOI: http://dx.doi.org/10.1007/978-3-319-10470-6_57
312
+
313
+ [20] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A Large-Scale Hierarchical Image Database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. DOI: http://dx.doi.org/10.1109/CVPR.2009.5206848
314
+
315
+ [21] Cyrus Rashtchian, Peter Young, Micah Hodosh, and Julia Hockenmaier. 2010. Collecting Image Annotations Using Amazon's Mechanical Turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, pp. 139-147. https://dl.acm.org/doi/10.5555/1866696.1866717
316
+
317
+ [22] Jiyin He, Jacco van Ossenbruggen, and Arjen P. de Vries. 2013. Do You Need Experts in the Crowd? A Case Study in Image Annotation for Marine Biology. In Proceedings of the 10th Conference on Open Research Areas in Information Retrieval, pp. 57-60. https://dl.acm.org/doi/10.5555/2491748.2491763
318
+
319
+ [23] Donghui Feng, Sveva Besana, and Remi Zajac. 2009. Acquiring High Quality Non-Expert Knowledge from On-demand Workforce. In Proceedings of the 2009 Workshop on The People's Web Meets NLP: Collaboratively Constructed Semantic Resources (People's Web), pp. 51-56. https://dl.acm.org/doi/10.5555/1699765.1699773
320
+
321
+ [24] Yann LeCun. 1998. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/
322
+
323
+ [25] Han Xiao, Kashif Rasul, and Roland Vollgraf. 2017. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747 https://arxiv.org/abs/1708.07747
324
+
325
+ [26] Tarin Clanuwat, Mikel Bober-Irizar, Asanobu Kitamoto, Alex Lamb, Kazuaki Yamamoto, and David Ha. 2018. Deep learning for classical Japanese literature. arXiv preprint arXiv:1812.01718. DOI: https://dx.doi.org/10.20676/00000341
326
+
327
+ [27] Edith L. M. Law, Luis Von Ahn, Roger B. Dannenberg, and Mike Crawford. 2007. TagATune: A Game for Music and Sound Annotation. In ISMIR, vol. 3, p. 2. DOI: https://dx.doi.org/10.5281/zenodo.1415568
328
+
329
+ [28] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A Large-Scale Hierarchical Image Database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. DOI: http://dx.doi.org/10.1109/CVPR.2009.5206848
330
+
331
+ [29] Yukino Baba. 2018. Statistical Quality Control for Human Computation and Crowdsourcing. In IJCAI, pp. 5667-5671. DOI: https://doi.org/10.24963/ijcai.2018/806
332
+
333
+ [30] Jort F. Gemmeke, Daniel PW Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R. Channing Moore, Manoj Plakal, and Marvin Ritter. 2017. Audio set: An ontology and human-labeled dataset for audio events. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 776-780. IEEE. DOI: http://dx.doi.org/10.1109/ICASSP.2017.7952261
334
+
335
+ [31] Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, and Sudheendra Vijayanarasimhan. 2016. Youtube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675. https://arxiv.org/abs/1609.08675
336
+
337
+ [32] Simone Bianco, Gianluigi Ciocca, Paolo Napoletano, and Raimondo Schettini. 2015. An interactive tool for manual, semi-automatic and automatic video annotation. Computer Vision and Image Understanding 131: 88-99. https://doi.org/10.1016/j.cviu.2014.06.015
338
+
339
+ [33] Pei-Yun Hsueh, Prem Melville, and Vikas Sindhwani. 2009. Data quality from crowdsourcing: a study of annotation selection criteria. In Proceedings of the NAACL HLT 2009 workshop on active learning for natural language processing, pp. 27-35. https://www.aclweb.org/anthology/W09-1904/
340
+
341
+ [34] Yi-Li Fang, Hai-Long Sun, Peng-Peng Chen, and Ting Deng. 2017. Improving the Quality of Crowdsourced Image Labeling via Label Similarity. Journal of Computer Science and Technology 32, no. 5: 877-889. DOI: http://dx.doi.org/10.1007/s11390-017-1770-7
342
+
343
+ [35] Wei Wang, and Zhi-Hua Zhou. 2015. Crowdsourcing label quality: a theoretical analysis. Science China Information Sciences 58, no. 11: 1-12. DOI: https://doi.org/10.1007/s11432-015-5391-x
344
+
345
+ [36] Ofer Dekel, and Ohad Shamir. 2009. Vox Populi: Collecting High-Quality Labels from a Crowd. In COLT. https://www.cs.mcgill.ca/~colt2009/papers/037.pdf
346
+
347
+ [37] Raymond G. Hunt, Frank J. Krzystofiak, James R. Meindl, and Abdalla M. Yousry. 1989. Cognitive style and decision making. Organizational behavior and human decision processes 44, no. 3: 436-453. DOI: https://doi.org/10.1016/0749-5978(89)90018-6
348
+
349
+ [38] Matteo Cristofaro. 2016. Cognitive styles in dynamic decision making: a laboratory experiment. International Journal of Management and Decision Making 15, no. 1: 53-82. DOI: https://doi.org/10.1504/IJMDM.2016.076840
350
+
351
+ [39] Jill R. Hough, and D. T. Ogilvie. 2005. An empirical test of cognitive style and strategic decision outcomes. Journal of Management Studies 42, no. 2: 417-448. DOI: https://doi.org/10.1111/j.1467-6486.2005.00502.x
352
+
353
+ [40] Chia-Ming Chang, Chia-Hsien Lee, and Takeo Igarashi. 2021. Spatial Labeling: Leveraging Spatial Layout for Improving Label Quality in Non-Expert Image Annotation. In CHI Conference on Human Factors in Computing Systems (CHI '21), May 8-13, 2021, Yokohama, Japan. ACM, New York, NY, USA, 20 pages. https://doi.org/10.1145/3411764.3445165
354
+
355
+ [41] Michael Buhrmester, Tracy Kwang, and Samuel D. Gosling. 2011. Amazon's mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science 6, 1, 3-5. DOI: http://dx.doi.org/10.1177/1745691610393980
356
+
357
+ [42] Gur Mosheiov. 2001. Parallel machine scheduling with a learning effect. Journal of the Operational Research Society 52, no. 10: 1165-1169. DOI: https://doi.org/10.1057/palgrave.jors.2601215
358
+
359
+ [43] Koun-tem Sun, Yuan-cheng Lin, and Chia-jui Yu. 2008. A study on learning effect among different learning styles in a Web-based lab of science for elementary school students. Computers & Education 50, no. 4: 1411-1422. DOI: https://doi.org/10.1016/j.compedu.2007.01.003
360
+
361
+ [44] Yvonne Kammerer, Rowan Nairn, Peter Pirolli, and Ed H. Chi. 2009. Signpost from the masses: learning effects in an exploratory social tag search browser. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 625-634. DOI: https://doi.org/10.1145/1518701.1518797
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/SDyj8aZBPrs/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,263 @@
1
+ § AN EMPIRICAL STUDY ON THE EFFECT OF QUICK AND CAREFUL LABELING STYLES IN IMAGE ANNOTATION
2
+
3
+ Chia-Ming Chang*
4
+
5
+ The University of Tokyo
6
+
7
+ Xi Yang†
8
+
9
+ Jilin University
10
+
11
+ Takeo Igarashi‡
12
+
13
+ The University of Tokyo
14
+
15
+ § ABSTRACT
16
+
17
+ Assigning a label to difficult data requires a long time, particularly when non-expert annotators attempt to select the best possible label. However, there have been no detailed studies exploring a label selection style during annotation. This is very important and may affect the efficiency and quality of annotation. In this study, we explored the effects of labeling style on data annotation and machine learning. We conducted an empirical study comparing "quick labeling" and "careful labeling" styles in image-labeling tasks with three levels of difficulty. Additionally, we performed a machine learning experiment using labeled images from the two labeling styles. The results indicated that quick and careful labeling styles have both advantages and disadvantages in terms of annotation efficiency, label quality, and machine learning performance. Specifically, careful labeling improves label accuracy when the task is moderately difficult, whereas it is time-consuming when the task is easy or extremely difficult.
18
+
19
+ Keywords: Cognitive Psychology, Labeling Style, Non-Expert Data Annotation, Data Collection, Machine Learning.
20
+
21
+ Index Terms: Computing methodologies; Artificial intelligence; Philosophical/theoretical foundations of artificial intelligence; Cognitive science
22
+
23
+ § 1 INTRODUCTION
24
+
25
+ A large, high-quality dataset is necessary to obtain better machine learning results. However, it is expensive to recruit a large number of expert annotators (who have sufficient domain knowledge) to work on it. Recruiting non-expert annotators (typically crowd workers) is cheaper and easier; therefore, it is often the only viable option in practice [20, 21]. However, label quality is critical in non-expert data annotation (i.e., crowdsourcing tasks) [22, 23, 35]. Various annotation methods and tools have been introduced to address this issue [13, 14, 29, 32, 40]. However, there are no detailed studies examining this issue from a human perspective (i.e., cognitive psychology), such as investigating the effect of a label selection style during annotation (i.e., how a user makes a label decision). This is important because the labeling styles used by annotators could affect annotation efficiency and quality.
26
+
27
+ This study presents an empirical study comparing two labeling styles (quick labeling and careful labeling) for a manual image labeling task with three datasets under different levels of data difficulty: easy (MNIST), moderately difficult (Fashion-MNIST), and extremely difficult (Kuzushiji-MNIST). Thereafter, we conducted a machine learning experiment using the labeled images with quick labeling and careful labeling styles and compared the machine learning results (classification accuracy), as shown in Figure 1.
28
+
29
+ < g r a p h i c s >
30
+
31
+ Figure 1: Research Overview.
32
+
33
+ These results indicated that the labeling style affects annotation efficiency (task completion time) and label accuracy. The careful labeling style exhibited significantly higher accuracy than the quick labeling style when the dataset was moderately difficult. Moreover, the results of the machine learning experiment indicated that the labeled images (training data) collected via the careful labeling style could achieve better machine learning performance (higher classification accuracy) than those collected via the quick labeling style. However, the careful labeling style did not bring benefits when the images were easy (i.e., label accuracy was already high with the quick labeling style) or extremely difficult (i.e., the careful labeling style could not significantly improve label accuracy). We discuss the effects of the two labeling styles at three levels of data difficulty (easy, moderately difficult, and extremely difficult), as well as three factors that need to be carefully considered when selecting an appropriate labeling style for an annotation task. This study makes the following contributions:
34
+
35
+ * Identifying labeling styles as a variable in non-expert image annotation and machine learning.
36
+
37
+ * An empirical study comparing the quick and careful labeling styles, which demonstrates the benefits of the careful labeling style in an image annotation task.
38
+
39
+ * A machine learning experiment using images labeled under the two labeling styles, which demonstrates the effects of the labeling styles on image classification accuracy.
40
+
41
+ *e-mail: chiaming@ui.is.s.u-tokyo.ac.jp
42
+
43
+ †e-mail: yangxi21@jlu.edu.cn
44
+
45
+ ‡e-mail: takeo@acm.com
46
+
47
+ § 2 RELATED WORK
48
+
49
+ § 2.1 MANUAL DATA ANNOTATION AND CHALLENGES
50
+
51
+ Manual data annotation is a basic practice in machine learning, and a large dataset is often necessary to improve machine learning results. Popular datasets, including ImageNet [28], AudioSet [30], and YouTube-8M [31], were manually labeled by human annotators. Because the manual annotation process is extremely tedious and time-consuming, many studies have proposed annotation tools to assist it. For instance, LabelMe [10] is a web-based image annotation tool that allows multiple annotators to label an image and share their labeling results instantly. ESP [11] is an image annotation tool combined with a computer game that provides an enjoyable labeling process for annotators, and TagATune [27] is an audio annotation tool that shares the same idea. VIA [12] is an annotation tool that allows annotators to define and describe spatial regions in images, audio segments, and video frames. iVAT [32] is a video annotation tool that supports manual, semi-automatic, and automatic video annotation.
52
+
53
+ Most of these tools provide supportive, efficient, and enjoyable systems for easing the tedious process of manual data annotation, and they generally assume that annotators have sufficient domain knowledge for the labeling tasks. However, data annotation tasks often rely on non-expert annotators who lack sufficient domain knowledge, because access to a sufficient number of expert annotators is limited and expensive [18, 19]. Labeling tasks can therefore be very difficult for non-expert annotators, and the labeled data may contain numerous errors [23, 33, 36]. These annotation tools may not be able to address this issue when annotators are non-experts.
54
+
55
+ § 2.2 ANNOTATION WORKFLOWS FOR IMPROVING THE LABEL QUALITY
56
+
57
+ Many data annotation workflows have been proposed to improve label quality, particularly for annotation tasks conducted through crowdsourcing. Revolt [13] is a collaborative crowdsourcing labeling workflow that applies concepts from expert annotation workflows (label-check-modification); this specific workflow can produce higher label quality than a conventional labeling workflow. Pairwise HITS [14] is a labeling workflow for quality estimation that allows annotators to compare a pair of labeled data and select the better one. Fang et al. [34] introduced a two-round workflow to improve the quality of crowdsourced image labeling: during the first round, annotators select a label for the target images (several labels are assigned to each image), and during the second round, annotators select the best label for each image (referring to the results from other annotators). Baba [29] introduced two types of labeling workflows (parallel and interactive) that allow multiple annotators to be involved in an annotation task in different ways to improve label quality. In addition, various studies have used the concept of hierarchical classification in data annotation to increase labeling efficiency and label quality [15, 16].
58
+
59
+ The main concept behind these annotation workflows is gathering knowledge from multiple individuals: a typical way to improve label quality is to involve a group of annotators (non-experts or experts) collaborating on a labeling task. In this study, we aim to explore "labeling styles" rather than "labeling workflows" during a data annotation task. Beyond machine learning, data annotation is used in various research areas. For instance, social scientists annotate data to discover interesting phenomena and establish theories [7, 9], and annotation approaches such as thematic analysis are often used to analyze qualitative data [8]. Data annotation is not only a labeling process but also a cognitive process by which annotators view data, organize concepts, and make labeling decisions. Concept organization plays a crucial role in data annotation. Kulesza et al. [17] indicated that annotators often organize their concepts of similarity by observing more items during data annotation. Chang et al. [40] shared the same concept and proposed a spatial-layout labeling interface for concept organization during the annotation process. We believe that the cognitive processes (i.e., how a labeling decision is made) in a manual data annotation task are important and could affect the label quality and cost.
60
+
61
+ § 2.3 INTUITIVE AND SYSTEMATIC DECISION-MAKING
62
+
63
+ Cognitive style (or thinking style) is a term used in cognitive psychology to describe the ways in which individuals organize and process information and finally make decisions [1, 2, 3]. Intuitive and systematic decision-making are two types of cognitive style. Intuitive decision-making is a type of associative thinking that relies on intuition, and systematic decision-making is a type of rule-based thinking that relies on logical evaluation [4]. Both cognitive styles have advantages and disadvantages. For instance, intuitive decision-making requires less time than systematic decision-making, whereas systematic decision-making involves a deeper consideration process.
64
+
65
+ These two cognitive styles have been used and discussed in several fields. Sagiv et al. [3] analyzed different intuitive and systematic cognitive styles used by art, accounting, and mathematics students. They established that different students prefer different cognitive styles in their class, and the cognitive style is consistent with an individual's personal attributes. Ma-Kellams and Lerner [5] compared intuitive and systematic cognitive styles to understand the feelings of other people, and they established that a systematic cognitive style can produce better empathic accuracy than an intuitive cognitive style. Hwang and Lee [6] explored the impact of intuitive and systematic cognitive styles of consumers on their visual attention patterns in online shopping environments. The results indicated that consumers pay different visual attention to webpages when they use different cognitive styles to make purchasing decisions. These studies have shown the effects of the cognitive styles used in various activities. In this study, we share a similar concept to explore the effects of cognitive styles (labeling styles) in manual data annotation, where annotators complete a labeling task (i.e., select an appropriate label for an image) through intuitive (quick labeling) and systematic decision-making (careful labeling).
66
+
67
+ § 3 LABELING STYLE AND USER INTERFACE
68
+
69
+ We defined two labeling styles of manual image annotation based on the theory of cognitive psychology: quick labeling and careful labeling.
70
+
71
+ Quick labeling. Quick labeling refers to intuitive decision-making. Here, annotators select a label for an image as quickly as possible even when they lack confidence in the target image and label. They are strongly encouraged to select labels within a short time.
72
+
73
+ Careful labeling. Careful labeling refers to the concept of systematic decision-making. Here, annotators select an image label as carefully as possible, particularly when they are not confident of the target image and label. They are strongly encouraged to spend sufficient time before making a label decision.
74
+
75
+ A labeling system was developed to evaluate the quick and careful labeling styles in a manual image annotation task. Figure 2 shows a screenshot of the image-labeling interface. The left side of the interface lists the labels (10 labels/categories in the Fashion-MNIST dataset), and the right side shows the target image. During the labeling task, the annotators were asked to select an appropriate label from the label list and apply it to the target image. After a label is selected for an image (by clicking on the label), the system automatically moves to the next image. The annotators were not allowed to return to previous images after selecting a label.
76
+
77
+ < g r a p h i c s >
78
+
79
+ Figure 2: Screenshot of the labeling interface.
80
+
81
+ § 4 USER STUDY
82
+
83
+ A user study was conducted to compare the quick labeling and careful labeling styles applied to an image-labeling task. We aimed to observe the effects of the two labeling styles in image labeling tasks, specifically at the three different levels of data difficulty (easy, moderately difficult, and extremely difficult). We compared quick labeling and careful labeling styles in terms of label accuracy and labeling time for the given image labeling tasks.
84
+
85
+ § 4.1 APPARATUS
86
+
87
+ To control the quality of the user study, specifically the use of the quick and careful labeling styles during the image-labeling tasks, we outsourced the execution of the user study to a professional company, which asked its employees to participate in the user evaluation as part of their job. The total cost was approximately $1,600 in total: $640 for the quick labeling task ($53 per participant) and $960 for the careful labeling task ($80 per participant). During the user study, the participants were asked to sit in front of a desktop computer and complete the given image-labeling tasks (Figure 3).
88
+
89
+ < g r a p h i c s >
90
+
91
+ Figure 3: Photograph of the user study.
92
+
93
+ § 4.2 PARTICIPANTS
94
+
95
+ Twenty-four participants (12 men and 12 women, aged 18-49 years) were invited by the company to participate in the user study. All the participants were Japanese (i.e., understood Japanese Hiragana letters). Most of the participants (n = 19) had no prior experience with data annotation, four had less than half a year of experience, and one had between half a year and a full year of experience.
96
+
97
+ § 4.3 DATASET
98
+
99
+ Three datasets, MNIST [24], Fashion-MNIST [25], and Kuzushiji-MNIST [26], were used for labeling tasks in the user study. Each dataset contained 60,000 training images and 10,000 testing images in ten categories (labels). Figure 4 shows the ten categories for each dataset.
100
+
101
+ < g r a p h i c s >
102
+
103
+ Figure 4: Ten categories in the MNIST, F-MNIST, and K-MNIST datasets.
104
+
105
+ The datasets have varying levels of difficulty. MNIST is an "easy" dataset: its handwritten digits are not difficult for human users to recognize, even in difficult cases, as shown in Figure 5(a). Fashion-MNIST is a "moderately difficult" dataset because it contains some difficult (confusing) items (e.g., "Pullover" and "T-shirt/Top"); Figure 5(b) shows examples of easy and difficult (confusing) cases. Kuzushiji-MNIST is an "extremely difficult" dataset: the handwritten Japanese Hiragana letters are very difficult to recognize (even for Japanese users), specifically when the letters are in difficult cases. Figure 5(c) shows examples of easy and difficult cases.
106
+
107
+ < g r a p h i c s >
108
+
109
+ Figure 5: Examples of easy (upper part) and difficult (lower part) cases from the MNIST, F-MNIST, and K-MNIST datasets.
110
+
111
+ We randomly selected 1200 images (120 images per label) from each dataset and split them into 12 non-overlapping 100-image subsets (Datasets 1-12), each containing 10 images per label. The 12 100-image subsets were used for the 12 participants in the quick labeling task and the 12 participants in the careful labeling task (the same subsets were used for both labeling styles).
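+ As an illustration of this subset construction, the sketch below draws 120 indices per label and deals them into 12 balanced, non-overlapping 100-image subsets. The label array is a random stand-in; the actual study used the MNIST, F-MNIST, and K-MNIST label files.
+
+ ```python
+ # Build 12 non-overlapping 100-image subsets, 10 images per label in each.
+ import numpy as np
+
+ rng = np.random.default_rng(42)
+ labels = rng.integers(0, 10, 60000)  # stand-in for a dataset's label array
+
+ subsets = [[] for _ in range(12)]
+ for lab in range(10):
+     idx = rng.permutation(np.where(labels == lab)[0])[:120]  # 120 per label
+     for k in range(12):
+         subsets[k].extend(idx[k * 10:(k + 1) * 10])          # deal 10 per subset
+
+ assert all(len(s) == 100 for s in subsets)
+ assert len(set().union(*map(set, subsets))) == 1200          # no overlap
+ ```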
112
+
113
+ § 4.4 TASK AND CONDITION
114
+
115
+ The image labeling tasks involved labeling 300 images (100 images for each dataset) for each participant. During the image labeling task, the participants were requested to select an appropriate label from a 10-category list (10 labels) for each image. A between-subject method was used, in which 12 of the participants were asked to complete the labeling task using the quick labeling style, and the other 12 participants were asked to complete the labeling task using the careful labeling style.
116
+
117
+ Quick Labeling Conditions. The participants of the quick labeling task were provided with the following instructions for the labeling task:
118
+
119
+ "Please select a label for an image as quickly as possible. Here, if you are unsure about the target images and labels, please simply select the most appropriate one (i.e., make a guess based on your intuition). Please do not spend too much time considering this before selecting the label. We STRONGLY ENCOURAGE you to select a label for an image within 5 s. A 5-s timer is provided on the labeling interface. After selecting a label for an image, the system will automatically show the next image. This means that you are not allowed to change your selected label."
120
+
121
+ Our labeling system for the quick labeling task contains a 5-s timer for labeling each image. An alert message "Time's up! Please select a label now" is displayed when the timer ends, as shown in Figure 6.
122
+
123
+ < g r a p h i c s >
124
+
125
+ Figure 6: Timer and alert message in the labeling system.
126
+
127
+ The system did not automatically move to the next image until the participants selected a label for the target image, even when the timer ended. The timer was designed as a reminder to the participants during annotation. In addition, participants were not allowed to return to the previous image after selecting a label for the image.
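+ The reminder behavior described above can be sketched as follows: a background timer fires the alert after 5 s, but the loop still waits for the annotator's choice rather than advancing automatically. This is a console-based stand-in for the actual GUI, and the function name and prompt format are hypothetical.
+
+ ```python
+ # 5-second reminder without auto-advance (console sketch of the quick-labeling UI).
+ import threading
+
+ def label_with_reminder(image_id: str, labels: list, limit: float = 5.0) -> str:
+     timer = threading.Timer(limit, lambda: print("\nTime's up! Please select a label now"))
+     timer.start()
+     try:
+         while True:
+             choice = input(f"[{image_id}] choose one of {labels}: ").strip()
+             if choice in labels:
+                 return choice  # valid label selected; task moves to the next image
+             print("unknown label, try again")
+     finally:
+         timer.cancel()  # stop the reminder once a label has been chosen
+ ```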
128
+
129
+ Careful Labeling Conditions. The participants in the careful labeling task were provided with the following instructions:
130
+
131
+ "Please select a label for an image as carefully as possible. There is no time limitation for labeling each image. You have sufficient time to think carefully before making a label decision, particularly when the target images are difficult or when you are not confident about the images and labels. We STRONGLY ENCOURAGE you to spend sufficient time (there is no time limitation) before making a label decision. After selecting a label for an image, the system will automatically show the next image. This means that you are not allowed to change your selected label."
132
+
133
+ The labeling system for the careful labeling task is the same as that used for the quick labeling task, except that no timer is shown on the labeling interface (Figure 2). The participants were allowed to spend as much time as necessary to select a label, and they were not allowed to return to the previous image after selecting a label. The participants were informed that they would receive 1.5 times the reward of a normal labeling task (i.e., the quick labeling task) for the careful labeling task.
134
+
135
+ § 4.5 PROCEDURE
136
+
137
+ The instructor provided an oral overview and detailed written instructions to the participants. The evaluation itself was composed of three parts (in order): instruction and trial (5-10 min), labeling tasks (20-45 min), and questionnaire (3-5 min). The entire evaluation process was completed within 40-60 min (depending on the labeling style). After receiving instructions on the labeling interfaces and the given tasks, the participants were allowed to practice on a small labeling task (labeling three images from each dataset, different from the images used in the formal tasks) before starting the given image labeling tasks.
138
+
139
+ § 4.6 MEASUREMENT
140
+
141
+ Our labeling system automatically recorded and measured the time and accuracy of the image-labeling tasks completed by the participants. The timer started when the participants clicked on "START" and stopped when they clicked on "FINISH." In addition, the system recorded the time spent by the participants for each image-labeling process. After the image labeling tasks, the participants were asked to answer a questionnaire regarding the labeling process. The questionnaire contained three Likert-scale questions for each labeling style (Section 6.4).
142
+
143
+ § 5 MACHINE LEARNING EXPERIMENT
144
+
145
+ In the user study, 7200 labeled images were collected (2400 for each dataset; 1200 per labeling style in each dataset). The training dataset contained errors made by the participants (i.e., the accuracy rate of the training data was not 100%). We used these labels as the training data in a machine learning experiment to evaluate the effects of labeling styles (data collected via the quick and careful labeling styles) on machine learning accuracy (image classification). Three common machine learning algorithms (logistic regression, K-nearest neighbors, and support vector machine) were selected for the case study. We did not use more advanced techniques (e.g., deep learning) because the training dataset was too small and our goal was not to pursue high machine learning accuracy but to compare the difference between the two labeling styles. The testing data comprised 30,000 images (10,000 for each dataset), disjoint from the 7200 labeled training images.
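+ A minimal sketch of this setup follows, assuming feature matrices and label vectors have already been loaded: it fits the three classifiers named above on human-provided (possibly erroneous) labels and scores them on held-out ground truth. The helper name is ours, and the demonstration block substitutes scikit-learn's small digits dataset for the actual MNIST-family data.
+
+ ```python
+ # Train the three classifiers on human-labeled data and evaluate on a test set.
+ from sklearn.linear_model import LogisticRegression
+ from sklearn.neighbors import KNeighborsClassifier
+ from sklearn.svm import SVC
+
+ def evaluate_labeling_style(X_train, y_human, X_test, y_test):
+     """y_human: labels collected via one labeling style (quick or careful)."""
+     models = {
+         "logistic regression": LogisticRegression(max_iter=2000),
+         "k-nearest neighbors": KNeighborsClassifier(),
+         "support vector machine": SVC(),
+     }
+     return {name: m.fit(X_train, y_human).score(X_test, y_test)
+             for name, m in models.items()}
+
+ if __name__ == "__main__":
+     # Stand-in data: 1200 training images, as in the study, on sklearn digits.
+     from sklearn.datasets import load_digits
+     from sklearn.model_selection import train_test_split
+     X, y = load_digits(return_X_y=True)
+     X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=1200, random_state=0)
+     print(evaluate_labeling_style(X_tr, y_tr, X_te, y_te))
+ ```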
146
+
147
+ § 6 RESULTS
148
+
149
+ § 6.1 TASK COMPLETION TIME
150
+
151
+ Figure 7 shows the task completion times for the different labeling styles and datasets. In the quick labeling task, the participants spent an average of 4 min 23 s, 5 min 6 s, and 5 min 37 s to label the 100 images in the MNIST, Fashion-MNIST, and Kuzushiji-MNIST datasets, respectively. In the careful labeling task, the participants spent an average of 4 min 42 s, 6 min 58 s, and 10 min 13 s to label the 100 images. An unpaired t-test on the task completion time indicated that the difference between the quick and careful labeling styles was insignificant (p > 0.05) in the MNIST dataset, whereas there were significant differences (p < 0.01) in the Fashion-MNIST and Kuzushiji-MNIST datasets (a minimal sketch of this test follows Figure 7). This indicates that the careful labeling style requires a longer time than the quick labeling style to complete labeling tasks when the images are moderately or extremely difficult, whereas the task completion time is comparable between the two styles when the images are easy.
152
+
153
+ < g r a p h i c s >
154
+
155
+ Figure 7: Task completion time. MNIST. LQ: mean = 4.23, SD = 0.73; LC: mean = 4.42, SD = 0.61. F-MNIST. LQ: mean = 5.06, SD = 0.76; LC: mean = 6.58, SD = 1.13. K-MNIST. LQ: mean = 5.37, SD = 0.90; LC: mean = 10.13, SD = 3.55.
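+ For completeness, the sketch below shows the shape of this between-group comparison: an unpaired two-sample t-test on completion times from the two independent participant groups. The arrays are illustrative stand-ins, not the study's measurements.
+
+ ```python
+ # Unpaired t-test comparing completion times of two independent groups.
+ from scipy import stats
+
+ quick_times = [5.0, 5.3, 4.8, 5.1, 5.5, 4.9, 5.2, 5.0, 5.4, 4.7, 5.1, 5.6]    # minutes
+ careful_times = [6.4, 7.1, 6.8, 7.5, 6.2, 7.0, 6.9, 7.3, 6.6, 7.2, 6.5, 7.4]  # minutes
+
+ t_stat, p_value = stats.ttest_ind(quick_times, careful_times)
+ print(f"unpaired t = {t_stat:.2f}, p = {p_value:.4f}")
+ ```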
156
+
157
+ § 6.2 ANNOTATION ACCURACY
158
+
159
+ Figure 8 shows the accuracy of the labels given by the participants under the quick and careful labeling styles. The quick labeling task yielded accuracies of 97.58%, 72.08%, and 58.08% for the three datasets, and the careful labeling task yielded accuracies of 97.58%, 76.83%, and 60.08%. An unpaired t-test on the accuracy showed that the difference was insignificant (p > 0.05) in the MNIST and Kuzushiji-MNIST datasets, whereas there was a significant difference (p < 0.05) between the two labeling styles in the Fashion-MNIST dataset. This indicates that the careful labeling style can help non-expert annotators select labels more correctly when the images are moderately difficult (Fashion-MNIST), whereas no clear benefit was observed when the images were easy (MNIST) or extremely difficult (Kuzushiji-MNIST). In the latter cases, conducting a careful labeling task adds cost without a corresponding quality gain.
160
+
161
+ < g r a p h i c s >
162
+
163
+ Figure 8: Accuracy of labeling tasks. MNIST. LQ: mean = 97.58, SD = 1.44; LC: mean = 97.58, SD = 1.56. F-MNIST. LQ: mean = 72.08, SD = 4.17; LC: mean = 76.83, SD = 6.64. K-MNIST. LQ: mean = 58.08, SD = 6.53; LC: mean = 60.08, SD = 7.82.
164
+
165
+ § 6.3 TEMPORAL EFFECT
166
+
167
+ § TASK COMPLETION TIME
168
+
169
+ Figure 9 shows the average time for the labeling process for the first half (images 1-50) and second half (images 51-100) of the three datasets under the quick labeling style. The participants spent an average of 2 min 6 s and 2 min 17 s, 2 min 36 s and 2 min 30 s, and 2 min 29 s and 3 min 8 s to complete the first and second halves of the MNIST, Fashion-MNIST, and Kuzushiji-MNIST datasets, respectively. A paired t-test indicated that the difference between the first and second halves was not significant (p > 0.05) in the MNIST and Fashion-MNIST datasets, but was significant (p < 0.05) in the Kuzushiji-MNIST dataset. This indicates that, under the quick labeling style, a temporal effect appears only at the extreme level of data difficulty. Interestingly, the participants spent a longer time completing the second half of the image-labeling task.
170
+
171
+ < g r a p h i c s >
172
+
173
+ Figure 9: Average time of the first and second 50 images in the quick labeling style. MNIST (images 1-50): mean = 2.06, SD = 0.38; MNIST (images 51-100): mean = 2.17, SD = 0.35. F-MNIST (images 1-50): mean = 2.36, SD = 0.28; F-MNIST (images 51-100): mean = 2.30, SD = 0.48. K-MNIST (images 1-50): mean = 2.29, SD = 0.21; K-MNIST (images 51-100): mean = 3.08, SD = 0.27.
174
+
175
+ Figure 10 shows the average time for the labeling process for the first half (images 1-50) and second half (images 51-100) under the careful labeling style in the different datasets. The participants spent an average of 2 min 24 s and 2 min 18 s, 4 min 5 s and 2 min 53 s, and 4 min 21 s and 5 min 52 s to complete the first and second halves of the MNIST, Fashion-MNIST, and Kuzushiji-MNIST datasets, respectively. A paired t-test indicated that the difference between the first and second halves was not significant (p > 0.05) in the MNIST dataset, but was significant (p < 0.05) in the Fashion-MNIST and Kuzushiji-MNIST datasets. This indicates that there is no temporal effect under the careful labeling style when the images are easy. However, a temporal effect was observed when the images were moderately difficult: the participants significantly increased their labeling speed in the second half of the image-labeling task. In addition, the participants spent more time on the second half when the images were extremely difficult, as with the quick labeling style.
176
+
177
+ < g r a p h i c s >
178
+
179
+ Figure 10: Average time of the first and second 50 images in the careful labeling style. MNIST (images 1-50): mean = 2.24, SD = 0.33; MNIST (images 51-100): mean = 2.18, SD = 0.29. F-MNIST (images 1-50): mean = 4.05, SD = 0.57; F-MNIST (images 51-100): mean = 2.53, SD = 0.38. K-MNIST (images 1-50): mean = 4.21, SD = 1.02; K-MNIST (images 51-100): mean = 5.52, SD = 0.73.
180
+
181
+ § ANNOTATION ACCURACY
182
+
183
+ Figure 11 shows the accuracy of the labeling process in the first half (images 1-50) and the second half (images 51-100) under the quick labeling style. The accuracy rates were 97.5% and 97.67% in the first and second halves of the MNIST dataset, 73.17% and 71% in the Fashion-MNIST dataset, and 56.67% and 59.5% in the Kuzushiji-MNIST dataset. A paired t-test indicated that the difference between the first and second halves was not significant (p > 0.05) for any dataset. This indicates that, in terms of accuracy, there is no temporal effect at any level of data difficulty under the quick labeling style: the label accuracy did not significantly change over the course of the task.
184
+
185
+ < g r a p h i c s >
186
+
187
+ Figure 11: Accuracy of the first and second 50 images in the quick labeling style. MNIST (images 1-50): mean = 97.50, SD = 1.27; MNIST (images 51-100): mean = 97.67, SD = 1.53. F-MNIST (images 1-50): mean = 73.17, SD = 3.93; F-MNIST (images 51-100): mean = 71, SD = 4.31. K-MNIST (images 1-50): mean = 56.67, SD = 6.95; K-MNIST (images 51-100): mean = 59.50, SD = 5.47.
188
+
189
+ Figure 12 shows the accuracy of the labeling process in the first half (images 1-50) and the second half (images 51-100) under the careful labeling style. The accuracy rates were 97.17% and 98% in the first and second halves of the MNIST dataset, 73.67% and 80% in the Fashion-MNIST dataset, and 62.83% and 57.33% in the Kuzushiji-MNIST dataset. A paired t-test indicated that the difference between the first and second halves was not significant (p > 0.05) for the MNIST and Kuzushiji-MNIST datasets, but was significant (p < 0.05) for the Fashion-MNIST dataset. This indicates that, under the careful labeling style, there is no temporal effect when the images are easy or extremely difficult, whereas a temporal effect was observed when the images were moderately difficult: the participants significantly improved their label accuracy in the second half of the image-labeling task.
190
+
191
+ < g r a p h i c s >
192
+
193
+ Figure 12: Accuracy of the first and second 50 images in the careful labeling style. MNIST (images 1-50): mean = 97.17, SD = 1.83; MNIST (images 51-100): mean = 98, SD = 1.35. F-MNIST (images 1-50): mean = 73.67, SD = 4.95; F-MNIST (images 51-100): mean = 80, SD = 6.58. K-MNIST (images 1-50): mean = 62.83, SD = 8.13; K-MNIST (images 51-100): mean = 57.33, SD = 7.22.
194
+
195
+ In summary, the careful labeling style produces a temporal effect in both task completion time and accuracy rate (i.e., reduced time and increased accuracy) only when the images are moderately difficult. When the images are too easy, there is no temporal effect. Interestingly, when the images are too difficult, the task completion time is longer in the second half.
196
+
197
+ § 6.4 QUESTIONNAIRE
198
+
199
+ Figure 13 shows how confident the participants felt in the given image labeling tasks. In the MNIST dataset, most of the participants felt extremely confident or confident when selecting a label for an image using either the quick labeling (n = 10) or careful labeling (n = 9) style, whereas none of the participants felt apprehensive. For the Fashion-MNIST dataset, only one participant felt extremely confident when selecting a label via the quick labeling style, and only two participants felt confident when selecting a label via the careful labeling style. In the Kuzushiji-MNIST dataset, no participants felt confident or extremely confident under either labeling style; more participants using the careful labeling style felt apprehensive (n = 4) or extremely apprehensive (n = 7) than participants using the quick labeling style (apprehensive, n = 3; extremely apprehensive, n = 6). This indicates that the labeling styles do not affect the participants' subjective confidence during annotation, whereas the ambiguities in the data do affect their confidence when selecting a label for an image.
200
+
201
+ < g r a p h i c s >
202
+
203
+ Figure 13: Confidence of the participants when selecting a label from the MNIST, F-MNIST, and K-MNIST datasets.
204
+
205
+ § 6.5 RESULTS OF MACHINE LEARNING EXPERIMENT
206
+
207
+ Figure 14 presents the machine learning results (i.e., image classification accuracy) for the three datasets with the quick and careful labeling styles.
208
+
209
+ < g r a p h i c s >
210
+
211
+ Figure 14: Accuracy of machine learning models in the MNIST, F-MNIST, and K-MNIST datasets.
212
+
213
+ In the data analysis, we did not compute the accuracy for each participant because the dataset was too small. We combined all annotations, trained the model, and measured the performance of the model.
214
+
215
+ MNIST Dataset. The accuracy of the training data was 97.58% for both the quick and careful labeling styles (Figure 8). Accordingly, the machine learning performance (accuracy) showed almost no difference between the two labeling styles (logistic regression, LQ = 88.63%, LC = 88.57%; K-nearest neighbors, LQ = 87.81%, LC = 87.71%; support vector machine, LQ = 92.44%, LC = 92.53%). These results were not surprising because the label accuracy of the training data was the same.
216
+
217
+ Fashion-MNIST Dataset. The accuracy of the training data was 72.08% for the quick labeling style and 76.83% for the careful labeling style (Figure 8), a significant difference between the two labeling styles. Accordingly, the machine learning performance (accuracy) differed between the two labeling styles (logistic regression, LQ = 66.51%, LC = 68.71%; K-nearest neighbors, LQ = 65.99%, LC = 70.11%; support vector machine, LQ = 67.91%, LC = 72.66%). The differences ranged from 2.2% to 4.12%. Machine learning algorithms often work well even if the labels given to difficult data contain errors; nevertheless, our results indicate that improving the label accuracy via the careful labeling style can also improve the accuracy of machine learning.
218
+
219
+ Kuzushiji-MNIST Dataset. The accuracy of the training data was 58.08% for the quick labeling style and 60.08% for the careful labeling style (Figure 8). This 2% difference between the two labeling styles was not significant according to the unpaired t-test analysis. Accordingly, the machine learning performance (accuracy) showed only very small differences between the two labeling styles (logistic regression, LQ = 37.58%, LC = 37.42%; K-nearest neighbors, LQ = 44.17%, LC = 44.23%; support vector machine, LQ = 43.36%, LC = 43.47%). This indicates that a small difference in the label accuracy of the training data does not noticeably affect the machine learning performance.
220
+
221
+ § 7 DISCUSSION
222
+
223
+ § 7.1 EFFECTS OF LABELING STYLES IN ANNOTATION EFFICIENCY AND LABEL QUALITY
224
+
225
+ In psychology, decision-making is a cognitive process in which an individual's cognitive style affects both the decision-making process and the decision outcomes and quality [37, 38, 39]. In manual data annotation, a labeling style can be considered a decision-making process (i.e., selecting an appropriate label for an image) that may affect the outcomes and quality of the label. In general, the quick labeling style requires less time to complete an annotation task, whereas the careful labeling style requires more time but can result in higher-quality data; however, this depends on the data difficulty and the annotation task. Our results indicate that there was no significant difference between the quick and careful labeling styles in task completion time and label quality when the data were easy (i.e., MNIST). Differences appear when the data become difficult: the careful labeling style requires more time than the quick labeling style to complete a labeling task containing moderately difficult images, but it significantly improves the label quality. However, when a labeling task contains extremely difficult images, the careful labeling style cannot improve the label quality and still requires a longer time to complete the task. These results indicate that the labeling style affects the annotation efficiency (task completion time) and label quality (accuracy rate) in non-expert data annotation, although these effects depend on the data difficulty of the annotation task. In addition, the questionnaire results indicated that the annotators' subjective confidence during annotation was not affected by the labeling styles in any of the labeling tasks, whereas it was affected by the data ambiguities (higher confidence on less difficult data and lower confidence on more difficult data).
226
+
227
+ § 7.2 TEMPORAL EFFECTS IN THE QUICK AND CAREFUL LABELING TASKS
228
+
229
+ The temporal effect has been used to analyze task performance during image-labeling tasks [16]. It describes ways in which people change their behavior over time, which is a method for analyzing the efficiency of an activity or study [42, 43, 44]. Our results indicated that there was a significant temporal effect (p < 0.05) during the labeling process using the careful labeling style when the images were moderately difficult (Fashion-MNIST). The participants reduced the task completion time in the second half of the image-labeling task when using the careful labeling style. This indicates that the careful labeling style not only improves the label quality but also produces a temporal effect during annotation in a labeling task containing moderately difficult images. Furthermore, there were significant temporal effects (p < 0.05) during annotation using the careful labeling style when the images were extremely difficult (Kuzushiji-MNIST). However, in that case the participants spent more time completing the second half of the image-labeling task than the first half. In addition, there was no significant temporal effect (p > 0.05) during annotation using either the quick or careful labeling style when the images were easy (MNIST). However, the cause of the temporal effect has not been clearly demonstrated.
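+
+ A simple way to test for such a temporal effect is to compare each participant's mean completion time in the first and second halves of the task; the sketch below uses synthetic timing data in place of the recorded times.
+
+ ```python
+ import numpy as np
+ from scipy.stats import ttest_rel
+
+ rng = np.random.default_rng(1)
+ # Synthetic per-image labeling times (seconds): 6 participants x 60 images.
+ times = rng.gamma(shape=4.0, scale=1.5, size=(6, 60))
+
+ first_half = times[:, :30].mean(axis=1)   # mean time per participant, images 1-30
+ second_half = times[:, 30:].mean(axis=1)  # mean time per participant, images 31-60
+
+ t, p = ttest_rel(first_half, second_half)
+ print(f"first = {first_half.mean():.2f}s, second = {second_half.mean():.2f}s, p = {p:.3f}")
+ # A significant change (p < 0.05) between halves indicates a temporal effect.
+ ```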
230
+
231
+ § 7.3 EFFECTS OF LABELING STYLES ON MACHINE LEARNING PERFORMANCE
232
+
233
+ Data quality plays a critical role in machine learning. Our user study demonstrated that the careful labeling style can significantly improve the label quality of an image-labeling task containing moderately difficult images, and it slightly improves the label quality when the task contains extremely difficult images. The machine learning experiment showed similar results. Labeled data collected via the careful labeling style can result in better machine learning performance (higher accuracy) than data collected via the quick labeling style when the images are moderately difficult (Fashion-MNIST). However, the machine learning performance showed almost no difference between the labeled data collected via the two labeling styles when the images were easy (MNIST), and only small differences when the data were extremely difficult (Kuzushiji-MNIST). Machine learning algorithms often work well even if the labels given to difficult data contain errors. This indicates that different label qualities may result in no difference in machine learning accuracy; in such cases, the labeling style may be a variable only in manual data annotation, not in machine learning. However, our results indicated that the improvement in label quality via the careful labeling style could increase machine learning accuracy when the data are moderately ambiguous. This finding indicates that the careful labeling style can benefit both data annotation and machine learning, although this depends on the data ambiguity. Our machine learning experiment used only basic algorithms, which showed that labeling style is a variable in machine learning performance. A machine learning experiment with advanced techniques (e.g., deep learning with a large-scale dataset) therefore still needs to be conducted in the future.
234
+
235
+ § 7.4 THREE FACTORS FOR SELECTING AN APPROPRIATE LABELING STYLE FOR AN ANNOTATION TASK
236
+
237
+ Our study and machine learning experiment have shown that different labeling styles have advantages and disadvantages for different annotation tasks. For instance, a careful labeling task is costlier (requires more time to complete) than a quick labeling task, yet it does not guarantee improved label quality at all levels of data difficulty. Therefore, it is important to choose a suitable labeling style for an annotation task; otherwise, time and money may be wasted when no clear improvement results. Here, we discuss three factors that should be carefully considered when selecting a labeling style for an annotation task.
238
+
239
+ § (1) DATA DIFFICULTY
240
+
241
+ Our results indicated that data difficulty is a crucial factor affecting annotation results when using different labeling styles. For instance, the careful labeling style only shows a benefit (i.e., improves the label quality) when the data are moderately difficult (neither easy nor extremely difficult), whereas the quick labeling style only shows a benefit (i.e., requires less time) when the data are easy. Based on these results, we suggest that the quick labeling style is a reasonable choice when conducting an easy annotation task. However, when an annotation task contains moderately difficult data, a careful labeling style can be worthwhile.
242
+
243
+ § (2) ANNOTATOR TYPE AND TASK CONDITIONS
244
+
245
+ Although this study only focused on non-expert annotators, we believe that the annotator's experience (i.e., domain knowledge in the given task) is an important factor that may significantly affect the annotation results when using different labeling styles. For instance, the perceived data difficulty depends on individual experiences and subjective impressions. We suggest that qualification is important and necessary when recruiting annotators for annotation tasks with different labeling styles. In addition, the task condition (e.g., crowd tasks versus in-person tasks) should be considered when deciding on the labeling style for an annotation task. Crowdsourcing, for example via Amazon Mechanical Turk [41], is a popular approach for conducting annotation tasks. However, the quality of crowd tasks is a critical issue that has been discussed for many years [22, 35]. We believe that this issue also arises when different labeling styles are used in a crowd task. Therefore, we recruited participants (annotators) for the user study via a professional company. This helped us explore the effects of labeling styles more precisely (i.e., to prove the research concept). However, crowdsourcing remains an indispensable approach for conducting annotation tasks. We suggest that an online workflow be carefully designed to control the annotation quality, even with different labeling styles.
246
+
247
+ § (3) INSTRUCTION FOR IMPLEMENTING THE LABELING STYLE
248
+
249
+ After deciding on the labeling style, it is important to ensure that annotators can follow and implement the labeling style precisely. In this study, an instructor provided textual and oral instructions for each labeling style before the formal task started (including a trial). This approach, however, is practical only in a user study; we believe that instructions alone are not sufficient in a realistic annotation task, because some annotators may be inherently careful in following an assigned labeling style whereas others may be inherently sloppy. To avoid this kind of bias, we suggest designing a specific labeling workflow (or labeling interface) that affords different labeling styles, or that "forces" annotators to follow specific steps. For instance, when an annotation task is conducted via the careful labeling style, the workflow could require annotators to double-check, or to spend a certain amount of time, before making a label decision.
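+
+ As a rough illustration of such a workflow rule, the sketch below enforces a minimum dwell time and an explicit confirmation before a label is accepted; the threshold, function name, and rejection messages are hypothetical design choices, not part of the study's interface.
+
+ ```python
+ import time
+
+ MIN_DWELL_SECONDS = 3.0  # assumed threshold; would need tuning in practice
+
+ def submit_label(shown_at: float, label: str, confirm: bool) -> bool:
+     """Accept a label only if the annotator viewed the image long enough
+     and explicitly confirmed the choice (careful-style workflow)."""
+     dwell = time.monotonic() - shown_at
+     if dwell < MIN_DWELL_SECONDS:
+         print("Please take another look before deciding.")
+         return False
+     if not confirm:
+         print("Please double-check and confirm your label.")
+         return False
+     print(f"Label '{label}' accepted after {dwell:.1f}s.")
+     return True
+
+ shown_at = time.monotonic()
+ # ... annotator inspects the image in the UI ...
+ accepted = submit_label(shown_at, "sandal", confirm=True)  # rejected if submitted too fast
+ ```
+
+ A production interface would more likely disable the submit control until the dwell time elapses rather than rejecting the input afterwards.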
250
+
251
+ § 8 LIMITATION AND FUTURE WORK
252
+
253
+ One limitation of this study is that the size of the training data (1200 labeled images collected via each labeling style for each dataset) was small in the machine learning experiment. This is the main reason why the machine learning accuracy in our study was significantly lower than the published benchmarks [24, 25, 26]. Another limitation is that we only used basic machine learning algorithms for training and testing our collected data. However, the main purpose of this study was not to pursue high machine learning accuracy but to focus on the effects of the labeling styles. Our results indicate that the careful labeling style can improve the label accuracy in manual data annotation as well as increase machine learning accuracy. We believe that the labeling styles might have an even greater effect on large-scale labeling tasks and advanced machine learning techniques (e.g., deep learning). In the future, we plan to conduct a large-scale user study via crowdsourcing and to test more machine learning algorithms.
254
+
255
+ Another limitation is the careful labeling style used in this study. Under the current instruction (design) of the careful labeling style, the participants were asked to select a label for an image as carefully as possible without time limitations. This condition may be insufficient for a precise careful-labeling task. A more specific condition or workflow (e.g., allowing label modification or forcing a double-check) for a careful labeling task may be needed for further investigation. In addition to the labeling style, the level of data difficulty should be carefully defined; for instance, how to define "too easy" and "too difficult" data for each annotator, because different annotators may perceive the same data differently. In the future, we will explore more details of the careful labeling style; for instance, the cause of the temporal effect and the effect of compensation were not clearly demonstrated in this study. Another interesting possibility is the dynamic control of labeling styles during annotation. If a system could judge the difficulty of each data item before annotation, it could ask an annotator to use an appropriate labeling style (e.g., using the careful labeling style only for moderately difficult data).
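+
+ One way such dynamic control could work is to route items by an estimated difficulty score, e.g., the predictive entropy of a pre-trained model; the thresholds below are illustrative assumptions rather than validated values.
+
+ ```python
+ import numpy as np
+
+ def predictive_entropy(probs: np.ndarray) -> float:
+     """Entropy of a model's class-probability vector; higher means harder."""
+     p = np.clip(probs, 1e-12, 1.0)
+     return float(-(p * np.log(p)).sum())
+
+ def assign_style(probs: np.ndarray, low: float = 0.5, high: float = 1.3) -> str:
+     """Route only moderately difficult items to the careful style, mirroring
+     the finding that careful labeling pays off mainly at moderate difficulty."""
+     h = predictive_entropy(probs)
+     return "careful" if low <= h <= high else "quick"
+
+ print(assign_style(np.array([0.90, 0.05, 0.03, 0.02])))  # easy item -> quick
+ print(assign_style(np.array([0.50, 0.30, 0.15, 0.05])))  # moderate -> careful
+ print(assign_style(np.array([0.25, 0.25, 0.25, 0.25])))  # extremely hard -> quick
+ ```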
256
+
257
+ § 9 CONCLUSION
258
+
259
+ In this study, we investigated the effects of labeling style on non-expert data annotation and machine learning. We conducted a user study to compare the quick and careful labeling styles in a manual image annotation task, and we used the labeled data (as training data) in a machine learning experiment. Our results indicated that labeling style affects both the data annotation process and machine learning performance. The careful labeling style improves the label accuracy only when the task is moderately difficult, whereas it only increases the cost without improving accuracy when the task is easy or extremely difficult. These findings provide guidance for selecting an appropriate labeling style for an annotation task and offer an alternative approach to improving non-expert annotations.
260
+
261
+ § ACKNOWLEDGMENTS
262
+
263
+ This work was supported by JST CREST Grant Number JPMJCR17A1 and JST ACT-X Grant Number JPMJAX21AG, Japan.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/SHQU_yejZFv/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,505 @@
1
+ # I'm Not Sure: Designing for Ambiguity in Visual Analytics
2
+
3
+ Stan Nowak*
4
+
5
+ School of Interactive Arts
6
+
7
+ and Technology
8
+
9
+ Simon Fraser University
10
+
11
+ Lyn Bartram†
12
+
13
+ School of Interactive Arts
14
+
15
+ and Technology
16
+
17
+ Simon Fraser University
18
+
19
+ ## Abstract
20
+
21
+ Ambiguity, the state in which alternative interpretations are plausible or even desirable, is an inexorable part of complex sensemaking. Its challenges are compounded when analysis involves risk, is constrained, and needs to be shared with others. We report on several studies with avalanche forecasters that illuminated these challenges and identified how visualization designs can better support ambiguity. Like many complex analysis domains, avalanche forecasting relies on highly heterogeneous and incomplete data where the relevance and meaning of such data is context-sensitive, dependent on the knowledge and experiences of the observer, and mediated by the complexities of communication and collaboration. In this paper, we characterize challenges of ambiguous interpretation emerging from data, analytic processes, and collaboration and communication and describe several management strategies for ambiguity. Our findings suggest several visual analytics design approaches that explicitly address ambiguity in complex sensemaking around risk.
22
+
23
+ Index Terms: Human-centered computing-Visualization-Visualization theory, concepts and paradigms
24
+
25
+ ## 1 INTRODUCTION
26
+
27
+ Our work addresses the challenges of complex and collaborative sensemaking in risk management: in particular, the domain of avalanche forecasting responsible for analysis and prediction of snow avalanches endangering human life and infrastructure. As is the case with explaining and predicting other hazards (such as weather or natural disasters), avalanche forecasting involves the consideration and evaluation of alternative potential explanations that account for data [29] and the communication of these predictions to audiences widely varying in expertise. In this way, sensemaking is deeply about managing ambiguity, the state of multiple alternative meanings, and beyond simply accounting for missing information. Existing forecasting tools and procedures do not capture all the cognitive work forecasters do [51], motivating the design of better tools to support them. Because ambiguity is an essential component of their work environment, avalanche forecasters are an ideal study group for visual analytics interventions that explicitly target these challenges.
28
+
29
+ Visual analytics, "the science of analytical reasoning facilitated by interactive visual interfaces" [16], is well-suited to address the ambiguous sensemaking needs of avalanche forecasters. While visualization research has largely been devoted to quantified uncertainty or data uncertainty [8, 11, 17, 20, 33, 34, 47, 50, 63, 70, 73], researchers are now considering broader issues of uncertainty related to reasoning [85], such as the interpretation of implicit errors [55, 65], the importance of "hunches" in data interpretation [45] or the role of alternatives in visual analysis [46]. We add to this growing body of work in our exploration of the challenges of sensemaking under ambiguity in risk analyses and the consequent implications for visualization designs.
30
+
31
+ In this paper we report work focused on two complementary threads of accommodating ambiguity in risk analysis and prediction. First, we seek a formative understanding of ambiguity in complex and critical sensemaking. Through a set of studies with Avalanche Canada, a public avalanche forecasting organization, we discovered the critical role ambiguity plays in sensemaking and its constant challenges for individual and collaborative analysis and communication. From these findings we characterized different sources of ambiguity and interpretative strategies, grouped into issues related to data, analytic process, and collaboration and communication.
32
+
33
+ Second, we describe how these findings informed initial visual analytics designs that explore better support for the challenges of ambiguous interpretation involving heterogeneous data-generating processes. We developed these tools in close and constant collaboration (participatory design) with forecasters. We then deployed them as design probes before redesigns were subsequently incorporated into daily practice, where we continue to observe their use. This ecological approach continues to surface challenges and affordances of supporting ambiguity in reasoning about risk in the collaborative and critical environment of forecasting. Our design findings highlight both the effective potential of visualizations and the caveats. Key issues are the importance of multiple levels of data granularity, appropriate context, the need for analytic provenance, and enrichment [3]: the ability to capture both data and insights throughout the process.
34
+
35
+ The key takeaway of our research is that ambiguity is distinct from data uncertainty, requiring solutions that go beyond reduction or removal. It is an essential component of sensemaking, but at the same time presents specific challenges for analysis, collaboration, and communication. We argue that ambiguity can and should be designed for and not away [60], that even simple design choices can serve to support or impede sensemaking involving ambiguity, and that there is a need for more explicit ambiguity support in visual analytics tools. In this paper, we contribute:
36
+
37
+ - Insights from 3 qualitative studies with avalanche forecasters surfacing issues of ambiguity in sensemaking;
38
+
39
+ - A characterization of sources and strategies for ambiguity in risk analysis and sensemaking; and
40
+
41
+ - A preliminary exploration of visual analytics design approaches to address ambiguity.
42
+
43
+ ## 2 BACKGROUND
44
+
45
+ ### 2.1 Public Avalanche Forecasting
46
+
47
+ Public avalanche forecasters assess avalanche hazards and communicate the associated risks to the public through daily bulletins. These natural disasters endanger the safety of humans and infrastructure and require careful professional assessment to inform risk management in mountainous avalanche-prone areas. Forecasters try to predict how present or future instabilities within the snowpack may react to natural triggers, such as the weight of new snow, or human triggers, such as the weight of a skier [54].
48
+
49
+ ---
50
+
51
+ *e-mail: Snowak@sfu.ca
52
+
53
+ †e-mail: Lyn@sfu.ca
54
+
55
+ ---
56
+
57
+ Avalanche forecasting is continuous and distributed across teams [51] of forecasters who monitor avalanche conditions over an entire winter season, iteratively updating their understanding with new information [54]. While many forecasters have the benefit of working in the field and directly observing avalanche conditions, public avalanche forecasters work remotely and rely heavily on field reports produced by other organizations [60]. In Canada, such reports are shared in the Canadian Avalanche Association's Industry Information Exchange (InfoEx) [25] by avalanche safety 'operators', such as those overseeing railway or transportation corridors, ski resorts, and helicopter skiing operations among others. While these data are structured and defined using formal measurement and reporting guidelines [2], they are gathered using a targeted sampling rather than a random sampling approach [54]. Operators actively seek instabilities in the snow. Consequently, forecasters have to glean enough context about this process to understand what such data mean (e.g. who reported it, where they went, what they saw, etc.).
58
+
59
+ Another challenge stems from the sparsity of data. For example, remote weather stations used to validate meteorological forecasts [60] are very sparsely distributed when compared to the variability and heterogeneity of mountain weather [48]. Forecasters mentally simulate the interactions of mountainous terrain and weather systems and their effects on snowpack from limited data. This imaginative and speculative ability is a mark of competence and expertise in avalanche forecasting [1] as well as weather forecasting [67].
60
+
61
+ Forecasters formalize their judgements of avalanche hazards using a variety of qualitative measures such as a danger scale, likelihood scale, potential destructive size, as well as different avalanche types [76]. These assessments are then communicated to the public through daily bulletins that are supplemented with additional risk communications such as advice about how to avoid avalanche hazards. The public varies in levels of expertise and consequently varies in how they interpret even simple elements of bulletins such as danger scales [21, 75]. Public avalanche forecasters rely heavily on their knowledge, experience, and expert judgment to assess and communicate avalanche hazards. The challenges of complexity, varied interpretation, and uncertainty are similar to those involved in risk prediction and communication of other extreme weather events and natural disasters [7].
62
+
63
+ ### 2.2 Sensemaking and Risk Prediction
64
+
65
+ Risk management work faces real-world time constraints, ill-defined goals, distributed tasks and responsibilities, uncertainty, and decision-making demands. The engineering of technological solutions to deal with these issues requires close consideration of the cognitive processes involved [30, 74]. Frequently in these domains, for example in weather forecasting [29], several targeted sensemaking strategies are employed. Generally, these involve the setting of expectations to direct attention to cues that can signal threats and a concurrent sensitivity to cues that deviate from these expectations [82].
66
+
67
+ #### 2.2.1 Anticipatory Thinking
68
+
69
+ One example relevant to the forecasting of avalanches is anticipatory thinking: a functional form of mental preparation for potential risks, including those that may be highly unlikely but could result in severe consequences [43]. Attention is actively managed and directed to subtle and context-sensitive cues that may signal threats. There are several types of anticipatory thinking. One, problem detection, describes the process by which observers first become aware of an issue that may require a course of action [40, 41]. The ability to detect problems depends on the richness of the observer's repertoire of relevant patterns against which data can be compared. This "pattern matching" often involves monitoring multiple patterns or "frames" concurrently. Anticipatory thinking also involves "trajectory tracking", the extrapolation of trends into multiple alternative future scenarios as well as planning for them. The imagination, exploration, and planning for alternative scenarios is also known as mental simulation [38]. These processes are vulnerable to psychological factors or biases, such as a tendency to explain away disconfirming evidence. However, studies with expert weather forecasters show such biases are countered through the active adoption of a skeptical stance in analysis [40].
70
+
71
+ ### 2.3 Sensemaking and Ambiguity
72
+
73
+ Sensemaking - the process by which meaning is constructed based on available information and experience - is precipitated by information or events that violate expectations or are uncertain and ambiguous [52, 81]. It is characterized by complexity. Complexity involves dynamically evolving rules and interacting parts [28] where comprehensive understanding is intractable [37] due to the epistemological limitations of human observation [24]. These limitations mean that complexity is more effectively dealt with holistically rather than through mechanistic reduction to the sum of parts. Sensemaking addresses complexity and the concomitant uncertainties through the flexible construction of narratives [18] where informational cues help determine what is relevant and which narratives or explanations are coherent or acceptable to consider [12].
74
+
75
+ This "narrative mode" of thinking describes how signs, symbols, representations, and their relationships are tied together into coherent personal narratives authored by the observer [4]. A novel is merely ink until it is read by someone and the same applies to the analysis of data. Subplots and micro-narratives involving prior knowledge and personal experiences are involved in the reading and making sense of visualizations [61]. Just as a story involves competing narratives, so too, in general, does sensemaking. This is because sensemaking often starts with an existing explanation that is challenged by a viable alternative [39]. Sensemaking is thus more about resolving multiple potential meanings (ambiguity), rather than just accounting for missing or uncertain information.
76
+
77
+ ### 2.4 Ambiguity in Visualization Research
78
+
79
+ Visualization research has a longstanding tradition of characterizing uncertainties relevant to the design of visual analytics systems [9, 49, 50, 79, 85]. Most visualization research has focused on data uncertainties, but many acknowledge the importance and role of interpretation and knowledge in uncertainty [20, 35, 49, 64, 85]. MacEachren discusses ambiguity through the lens of organizational decision-making, describing it as a "lack of an appropriate 'frame of reference' through which to interpret the information" and describes equivocality as stemming from the diversity of possible interpretations [49]. Meanwhile, Boukhelifa et al. define ambiguity in terms of multiplicities in the relationship between entities and names in data as well as the differences in interpretation between collaborators [9]. Liu et al. present a framework for the exploration, interpretation, and management of alternatives in visual analytics [46]. They group alternatives into three types: cognitive (e.g. hypotheses, mental models, and interpretations), artifact (e.g. data, models, representations, or tools), and execution (e.g. methods, code, and parameters). Ambiguity is most closely related to their concept of cognitive alternatives. Researchers have discussed the challenges of ambiguity in natural language interfaces for visual analytic tools and developed dedicated mixed-initiative tools for user intent disambiguation [22, 31]. Most prominent in existing visualization research is the discussion of ambiguity in collaborative visual analytics where sharing of analysis is often incomplete, lacking context, and therefore ambiguous [27].
80
+
81
+ There is much more to analysis than what is explicit in data. Data are incomplete records of the phenomena they are intended to represent and require prior knowledge as well as speculation. This is closely related to the notion of "implicit errors", which are errors inherent to a dataset but not explicitly represented within it [55, 65]. To better support sensemaking around implicit errors associated with infectious disease statistics, McCurdy et al. used structured annotations to help expert clinicians externalize knowledge about these errors [55]. In an application for archeological analyses, Panagiotidou et al. developed visualization tools that explicitly represented implicit errors [65]. Lin et al. use the term data hunch to describe "a person's knowledge about how representative data is of a phenomenon of interest" and how issues like credibility, inclusion and exclusion criteria, or directionality and magnitude of biases are considered in the analysis of data [45]. The authors outline a design space for externalizing data hunches.
82
+
83
+ ![01963e60-2094-7243-a668-202f27e3257f_2_300_159_1192_374_0.jpg](images/01963e60-2094-7243-a668-202f27e3257f_2_300_159_1192_374_0.jpg)
84
+
85
+ Figure 1: A timeline displaying the sequence in which studies were executed. Study 1 developed a formative understanding of avalanche forecasting challenges and workflows represented in a thematic code structure. This code structure was applied to observational data in Study 2 to refine understanding. Findings from these studies were used to inform the design of visualization prototypes used in Study 3.
86
+
87
+ ## 3 APPROACH
88
+
89
+ We carried out 3 studies with forecasters at Avalanche Canada (Figure 1), a public avalanche forecasting organization. Our goal was to better understand the challenges of ambiguity in their sensemaking and to identify where visual analytics might help. We began with semi-structured interviews to understand how forecasters perceive and describe the challenges of their work (Study 1). We then conducted field observations of forecasters on site. Concurrently, we video-recorded forecasters' workstations and debriefed them about analytical reasoning involving the use of existing technologies (Study 2). This set of observations corroborated and enriched our understanding of the themes we identified in the interview study (Table 1). Subsequently, we implemented two fully functional visualization prototypes in collaboration with the avalanche forecasters and conducted retrospective interviews using these prototypes as design probes (Study 3). The purpose of this last study was to better understand how visual analytics interventions can address the challenges of ambiguity.
90
+
91
+ Studies 1 and 2 were conducted on-premises at Avalanche Canada while Study 3 was conducted remotely. In total, 12 avalanche forecasters participated in our studies (P1-P5 participated in Study 1, P2-P8 in Study 2, and P2-P6 / P9-P12 in Study 3). 10 were male and 2 were female, reflecting the gender balance of the organization and industry. The forecasters came from varied and mixed backgrounds. 8 had a background in professional mountain guiding, 3 in engineering, 2 in natural sciences, and 2 in business and communications.
92
+
93
+ We frame our findings according to issues of ambiguity dealing with data, analytic process, or collaboration and communication. Data are incomplete records of the phenomena they represent and require nuanced and varying interpretations depending on the needs and goals of analysis. Considering and evaluating alternative interpretations is an essential part of sensemaking: the analytic process of judging and adopting alternative interpretations presents potential analytic paths through data. These paths can be difficult to navigate as much of analysis is not explicitly captured. Finally, forecasters each hold unique perspectives and thus alternative interpretations that need to be resolved. They rely on communication strategies that simplify complexity to retain clarity. This can obfuscate context and introduce ambiguities that their collaborators have to reason through. This structure arose from findings from our studies; we apply it in our discussion of the design implications for potential visual analytics solutions.
94
+
95
+ ## 4 STUDY 1: FORECASTER WORK CHALLENGES
+
+ ### 4.1 Procedure
96
+
97
+ We conducted semi-structured interviews with 5 professional avalanche forecasters on Avalanche Canada premises in Revelstoke, British Columbia. We asked about common work practices and challenges in avalanche forecasting, the role of data and evidence, the role of prior and tacit knowledge, issues of collaboration, and issues of uncertainty. Participants were asked questions like: "Can you walk me through a typical forecasting day?", "What are the biggest challenges in your work?", or "What are some common uncertainties you deal with?". The interviews were audio-recorded and then transcribed.
98
+
99
+ ### 4.2 Analysis
100
+
101
+ Data were analyzed using thematic analysis [10]. Transcripts were concurrently segmented [23] and coded according to emergent themes by one coder. The codes were then refined in two passes. These themes were then grouped into thematic categories (Table 1). Inter-rater reliability was measured with one other coder, who had a background in avalanche research and limited experience in qualitative research methods, using a transcript sample representing 10 percent of all data [23]. Simple agreement for high-level themes was .89, Cohen's Kappa was .81, and Krippendorff's Alpha was .82. For the sub-themes, simple agreement was .75, Cohen's Kappa was .70, and Krippendorff's Alpha was .71.
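+
+ For reference, agreement statistics of this kind can be computed as in the sketch below; the coded labels are invented examples, and Krippendorff's Alpha would additionally require a dedicated package (e.g., the krippendorff package on PyPI).
+
+ ```python
+ # Simple agreement and Cohen's kappa for two coders' theme assignments.
+ from sklearn.metrics import cohen_kappa_score
+
+ coder_a = ["data", "data", "process", "collab", "process", "collab", "data"]
+ coder_b = ["data", "process", "process", "collab", "process", "collab", "data"]
+
+ simple_agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
+ kappa = cohen_kappa_score(coder_a, coder_b)
+ print(f"simple agreement = {simple_agreement:.2f}, Cohen's kappa = {kappa:.2f}")
+ ```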
102
+
103
+ ### 4.3 Findings
104
+
105
+ #### 4.3.1 Data Challenges and Practices
106
+
107
+ The data used in avalanche forecasting are uncertain, have ambiguous expressions or meanings, and have biases. These characteristics lead to ambiguity and a need to consider alternative interpretations beyond what is explicit in data.
108
+
109
+ Forecasters told us one of their key challenges is the uncertainty involved in data sparsity or missingness. Data are often explicitly missing as is the case when remote sensors malfunction or fail to transmit. "[Weather stations] that have good weather or wind information are even less, and then that's if they're even reporting [...]" (P4). Missingness might also be implicit having to be inferred from the given situational context. "In a large storm that closes highways and grounds helicopters, it's very common the next day to not get any avalanche observations... but the weather and your personal experience very much suggests that there was going to be an avalanche cycle..." (P1).
110
+
111
+ Forecasters rely on contextual information to understand how to appropriately interpret data following circumstantial definitions. Some of these contingencies are officially documented or ingrained within formal procedures, while others are only learned through extensive experience and knowledge. "The [...] courses do quite a good job of standardizing those kinds of threshold amounts [...but] people who have spent a lot of time on the coast [...] may think a 30 centimeter storm doesn't really do very much..." (P1).
112
+
113
+ Common to many classifications of the complex natural world, avalanche classifications overlap and are not mutually exclusive. Technically accurate hazard assessments might include several overlapping avalanche types resulting in overly complex public communications. Instead, forecasters try to choose a subset of avalanche types based on what may inform optimal risk mitigation strategies by the public. "When you're modeling the natural world, you take shortcuts and there's simplifications[...] they don't occupy fully independent places [...] we sometimes have to have discussions about whether we want to be technically accurate, or whether we want to retain clarity [...] that starts to get quite complicated. [...] we look for ways to simplify..." (P1).
114
+
115
+ The nuances of evidential reasoning and interpretation of data in avalanche forecasting also extend to the risk-based conservative bias common to forecasters. Some may be more or less conservative, and forecasters have to factor in such considerations when weighing evidence. "[A]nother forecaster would have said something like: '[...]they always call that a little more than what it actually is.'[...that] may influence me to say: Okay, well, maybe I should not necessarily discredit it, but I put less weight into it..." (P3).
116
+
117
+ #### 4.3.2 Analytic Processes and Reasoning
118
+
119
+ Forecasters employ a variety of sensemaking strategies involving speculation and imagination. They integrate their prior knowledge, experiences, and contextual clues in data to synthesize understanding and explore risk implications.
120
+
121
+ Forecasters synthesize, evaluate, and integrate information using a simulation technique they described as mental projection. It is a process of imagining oneself in the field to understand conditions and their risk implications. "...that's a technique that a lot of people use to help forecast... kind of projecting yourself mentally, whether you close your eyes or you just have some kind of image of the kind of slopes, the kind of areas where the people are moving around [...] I think that experiential part there is really relevant to the process..." (P1). This might involve mentally converting biases such as wind data from weather stations in windy locations. "[T]here can actually not be that much wind in the park and you can have 60 kilometers an hour winds at that station. [...]taking an input and then adjusting it for myself..." (P2). It might also involve simulating alternative future scenarios and their risk implications. "If things are a little bit unusual, I [...] try and strip it down and build some kind of synthetic profile either in my mind, or sometimes even do it on the whiteboard [...] And then figure out the most likely, it's usually a set of scenarios..." (P1).
122
+
123
+ Forecasters describe their work as bayesian-like because they are constantly updating their mental models with new information and deliberately omitting weak or redundant evidence. They reported having to immerse themselves in data over several days of their shift to build confidence in their sense of understanding. This often involves undirected explorations of general background information. "...a day, you know, more likely two days to become fully sort of understanding of what's going on in your region [...] even if you can read it all in a day, it takes a little time for it to sort of percolate and for you to understand what that means..." (P1). To address identified gaps in understanding forecasters actively seek contextual sources of information. "I'll [...] look for keywords like 'oh ya... skiing, like, steep terrain in the Alpine, up to 40 degrees and just exposed features. No problem.' That tells me that not much is going on. Yeah, people are confident..." (P2). As they conduct their assessments, they iteratively update knowledge artifacts like the public bulletin to match their current understanding. "I'm pretty iteratively making small changes in the forecast [...] I'll just move that right into the forecasts, put it there, save, and I go back to what I was doing..." (P2).
124
+
125
+ Unlike forecasters, operators directly observe avalanche conditions in the field and thus have a richer understanding of the complexities involved. As a result, forecasters use subtle cues in data that can reveal the subjective hunches of operators to help them appropriately frame their understanding of avalanche conditions. "'Okay, are these guys still concerned about this?' That's what really matters to me more so than like the really nuanced low-level data..." (P2).
126
+
127
+ #### 4.3.3 Collaborative Challenges and Practices
128
+
129
+ Collaboration helps individual forecasters overcome the limitations of their own knowledge by drawing on the collective knowledge and experiences of their peers. At the same time, communicating the complexity of their assessments in simple terms is a constant challenge that creates ambiguities.
130
+
131
+ Forecasters vary in knowledge and experience which likely contributes to some variations in interpretation. However, this diversity is seen as an advantage as, collectively, it addresses the gaps in understanding any single forecaster may have. "[M]y experience may be different from you know... another forecaster's experience and I can learn from that person [...] there's those kinds of exchanges that happen..." (P1). Forecasters share knowledge and solicit their peers' perspectives in daily discussions. "At two o 'clock, we have our pow-wow where we all kind of go through our hazards and our problems. [...] it's kind of like a peer review session..." (P3).
132
+
133
+ Professional exchanges with partnering operations help avalanche forecasters enrich their understanding of how data are produced in a variety of operational contexts. "[W]hether that's highways or ski hill, snowcat skiing, heli-skiing [...] there's variability between the individual operators... And the only way to really fully understand is to go and spend a bit of time with that operator. [...] We have professional exchanges go on..." (P1). Forecasters also phone operators and reach out directly for clarification or if they are uncertain about how they should be thinking about conditions. "[If I] am potentially missing something or I just don't feel comfortable [...] I'll start picking the phone up and trying to find people in the area that can provide more, more insight..." (P3).
134
+
135
+ Collaboration allows forecasters to account for each other's knowledge gaps, at the same time, it presents challenges such as communication of analysis. Forecasting relies on the continuity of analysis. Shift-changes can disrupt this continuity and forecasters struggle with communicating relevant details as part of the hand-off process. "[T]here's a lot of variability in different people and [...] what sort of information they leave [...] that's the first place I'll look [...] hoping that the [...] previous forecaster has left enough information to start that picture..." (P3). To facilitate the hand-off process, forecasters produce knowledge artifacts like dedicated hand-off notes or detailed descriptions of snowpack stratigraphy. "[Talking about hand-off notes] I am trying to take that ease and control that I have at day four or five [...] and I give that to the next person, so they don't feel like they have to do their process of discovery from ground zero essentially..." (P2). This is seen as a separate and additional task often completed at the end of the day when forecasters are fatigued. This is why documentation used in support of hand-off and collaboration is often incomplete.
136
+
137
+ | Source of Ambiguity | S1 Theme | S1 Sub-Theme | Definition | S2 Observed Evidence (O = Observation, C = CRD) |
+ | --- | --- | --- | --- | --- |
+ | Data | Missing Info | Explicit | Missing information is explicitly represented in data. | |
+ | Data | Missing Info | Implicit | Missing information must be inferred from the situational context. | O |
+ | Data | Data Representativeness | Classification Overlap | Classifications are often not independent or mutually exclusive. | O |
+ | Data | Data Representativeness | Conservative Bias | Avalanche professionals are conservative when faced with uncertainty in the field or in data. | O |
+ | Data | Data Representativeness | Circumstantial Definitions | Official definitions and unofficial practices for reporting data depend on the situational context. | O |
+ | Analytic Process | Analytic Practices | Subjective Hunches | Considering the behaviour, concerns, and hunches of others in the field to inform and guide analysis and interpretation. | C |
+ | Analytic Process | Analytic Practices | Immersion | Forecasters spend several days forming a mental model through undirected review of contextual information. | C |
+ | Analytic Process | Analytic Practices | Context-Seeking | Directed information search for supplementary contextual information. | C |
+ | Analytic Process | Analytic Practices | Mental Projection | Forecasters assimilate information by imagining and mentally visualizing the interactions of avalanche conditions, weather, terrain, and people. | |
+ | Analytic Process | Analytic Practices | Updating | Forecasters iterate over knowledge artifacts like their forecast as they conduct their analysis and update their own mental models. | C |
+ | Analytic Process | Analytic Practices | Deliberate Omission | Forecasters manage information overload by ignoring certain data. | C |
+ | Analytic Process | Analytic Challenges | Lack of Good Representations | Forecasters lament a lack of good visual representations to alleviate some cognitive effort. | C |
+ | Analytic Process | Analytic Challenges | Ratings | It is challenging for forecasters to lower danger ratings as data reveal instability rather than stability. | |
+ | Collaboration and Communication | Continuity | | Forecasting relies on the continuity of analysis and monitoring. Shift-changes disrupt this continuity. | O |
+ | Collaboration and Communication | Translating Analysis | | Forecasters struggle with communicating complex conditions with simple clarity to the public. | O |
+ | Collaboration and Communication | Collaborative Sensemaking Strategies | Data Production | Forecasters facilitate collaborative work by producing hand-off notes and other internal knowledge artifacts. | O |
+ | Collaboration and Communication | Collaborative Sensemaking Strategies | Regular Discussions | Forecasters draw on each other's diverse knowledge through daily discussions. | O |
+ | Collaboration and Communication | Collaborative Sensemaking Strategies | Reaching out Directly | Forecasters call or email field operators for further information when faced with critical information gaps. | O |
+ | Collaboration and Communication | Collaborative Sensemaking Strategies | Professional Exchange | Forecasters work with other agencies and operators to gain a deeper understanding of the nuances of how data are produced and what they mean. | |
138
+
139
+ Table 1: Thematic codes developed in Study 1 (semi-structured interviews) and applied to Study 2 (field observations and cued-recall debrief). Thematic codes are organized and color-coded according to their relevance to different sources of ambiguity.
140
+
141
+ Whether communicating to fellow forecasters or the public, capturing complexity and nuance in simple and understandable terms is a challenge. "To simplify it [...] that's when you are kind of having to use your own best judgment..." (P2). Forecasters must translate their understanding and cater it to an audience that varies in understanding and expertise. This often involves exploring alternative future scenarios, their implications, how an audience may interpret what the forecaster is saying, and subsequently choosing a simple communication strategy that comprehensively accounts for these alternatives. "So instead of trying to write my forecasts like: 'oh, if we get 10 centimeters it will probably be okay, but if we get 20, then it'll probably come unglued' [...] It's like 'just watch for conditions to change as you increase with elevation [...] if it starts to feel stiff or slabby underneath your feet [...] use that terrain feature to go around it..." (P2).
142
+
143
+ ## 5 STUDY 2: OBSERVING AVALANCHE ANALYTICS
144
+
145
+ The purpose of Study 2 was to observe forecaster workplace behaviours and their use of technology. We sought a richer understanding of the challenges faced by forecasters and how visual analytics interventions might help.
146
+
147
+ ### 5.1 Procedure
148
+
149
+ We conducted field observations on Avalanche Canada premises for a week, collecting field notes and audio recordings of daily discussions. At the same time, we gathered observations using cued-recall debrief (CRD), a situated recall method developed for use in complex decision-making contexts [62] and adapted for human-computer interaction [5]. 7 forecasters were observed in the field and 4 were debriefed using CRD. Camcorders positioned behind workstations, in view of monitors and the desk surface, captured recordings of forecasters' workdays and their use of technology as well as artifacts such as hand-written notes. At regular intervals, video recordings were reviewed to identify timestamps where forecasters exhibited behaviours relevant to our research interests. At the end of the workday, recordings were played back to forecasters at the marked timestamps, and forecasters were asked to explain their thought processes and actions. We asked questions like: "Can you explain what you were doing and thinking here?" These debrief interviews were video recorded and transcribed.
150
+
151
+ ### 5.2 Analysis
152
+
153
+ We applied the thematic coding scheme developed in Study 1 to notes and transcripts in Study 2 (Table 1). This allowed us to compare what forecasters say and what they actually do. Thematic coding was applied by one coder in two passes.
154
+
155
+ ### 5.3 Findings
156
+
157
+ #### 5.3.1 Analytic Tooling
158
+
159
+ Forecasters rely heavily on text tables and information from disparate web-based sources. They gather these resources in a map-based web portal that organizes hyperlinks to such resources spatially (Figure 2A). Data such as weather station telemetry representing meteorological conditions are investigated in a bottom-up manner. Telemetry from individual weather stations is viewed in a table format and iteratively synthesized into a holistic understanding of weather patterns. Similarly, professional field reports are generally viewed in text tables (Figure 3A). Forecasters scan down columns of tables to extract patterns and distributions from structured attributes such as avalanche sizes. At the same time, they read across rows of tables to extract details about individual reports to glean enough context to understand their significance. We observed forecasters repurposing web-browser features to accomplish simple analytic tasks. For instance, one forecaster opened several days of data in successive windows to investigate temporal patterns and make comparisons. This suggested forecasters could benefit from dedicated analytic tools to support such tasks. To our surprise, we found that the visualizations present in existing systems were seldom used. While it was clear the forecasters could benefit from dedicated analytic tools, the overwhelming use of text tables indicated this representational form held some comparative advantage in sensemaking.
160
+
161
+ #### 5.3.2 Talking About Data
162
+
163
+ Organizational knowledge relevant to the nuanced interpretation of data is in large part oral tradition exchanged through the shared practice and environment of work.
164
+
165
+ We observed several discussions that dealt with the topic of how to interpret particular reports. For instance, one discussion dealt with the interpretation of a report authored by an operator who was known to have a conservative bias and what the implications of this were for hazard assessments. In another discussion, a junior forecaster with a guiding background described how they are coming to understand the challenges of their new remote-work environment, noting the nature of what types of information may be missing. "After having worked this job [Avalanche Canada] ... I sort of realize the big holes the operators leave in their writeups [...] because they are having face to face conversations... and maybe not putting that information into their writeup... saying this layer [of snow] does not exist in our area may not be helpful to them, but it really helps us here in this office..." (P8). How classifications and circumstantial definitions are applied in hazard assessment and risk communication was also a frequent topic of conversation. "I like [X's] point yesterday, wind slabs in the alpine are kind of like cornices that you find always... it is just a winter mountain hazard... it goes on the bulletin when it is elevated to more than normal caution..." (P2).
166
+
167
+ ![01963e60-2094-7243-a668-202f27e3257f_5_151_145_1499_512_0.jpg](images/01963e60-2094-7243-a668-202f27e3257f_5_151_145_1499_512_0.jpg)
168
+
169
+ Figure 2: (A) Existing spatially oriented web portal linking to external weather station telemetry resources. Data from individual weather stations are commonly viewed in a table format and synthesized in a bottom-up manner. (B) WxObs visualization prototype showing numerical aggregates of weather station telemetry. Weather stations are viewed simultaneously using a conventional overview-first and top-down approach.
170
+
171
+ #### 5.3.3 Tacit Sensemaking and Analytic Processes
172
+
173
+ Early sensemaking processes, particularly those involving personal experiences or trust, may be difficult to articulate out of context and consequently, share with others.
174
+
175
+ When debriefing forecasters about their workday we found they relied on the subjective hunches of operators that they personally trusted and were more familiar with. This factored into how evidence was weighed and the confidence forecasters had in it. "I feel good about who was about in the operation. So, I felt that the test was valid and valid information that I should be thinking about..." (P3).
176
+
177
+ We also found forecasters exploring general contextual information to immerse themselves. They found it difficult to articulate how they were using the information, reflecting characteristics of early sensemaking processes [71]. "It was just to give me an orientation to get my mental picture for forecasting [...] just a little bit of context... I don't know what that does for me exactly..." (P4).
178
+
179
+ #### 5.3.4 Collaboration and Knowledge Artifacts
180
+
181
+ The bulletin serves as a knowledge artifact representing a forecasters' current understanding of avalanche conditions. The bulletin scaffolds analysis and guides information search, particularly during hand-off at shift changes. However, the reasons behind specific changes to the bulletin are not always explicitly captured leaving future collaborating forecasters to speculate about the reasoning that might have been involved.
182
+
183
+ Forecasters don't just iterate over their own bulletin over the course of the day; they often carry forward the previous day's bulletin even if another forecaster wrote it. We observed how forecasters update it as they formulate their own new understanding. "I import yesterday's forecast... and I tweak my forecast so it matches my now-cast..." (P6). The specific reasons behind these updates are not made explicit, leaving the forecasters coming on shift to seek contextual information to speculatively reconstruct their coworker's evidential reasoning process. "...so I reviewed a few avalanches to understand what was driving those avalanches and why [anonymized] added that persistent slab problem again..." (P6).
184
+
185
+ ## 6 CO-DESIGNING VISUAL ANALYTIC SUPPORT
186
+
187
+ These findings guided us in developing visualization prototypes to support core forecasting tasks. We deployed these visualizations as design probes to examine how visual analytics interventions may aid in addressing challenges of ambiguity. The first prototype (WxObs) aggregates weather observations from remote weather stations in order to help forecasters validate the previous day's weather forecast as well as to monitor evolving weather systems in real-time. The second prototype (AvObs) uses field-reported avalanche observations produced by avalanche safety operations sharing data in the InfoEx. Avalanche observations are treated as key indicators of avalanche hazards in avalanche forecasting. We designed and developed both prototypes through several iterations from paper sketches to computational implementation in collaboration with avalanche forecasters. Both tools were evaluated using a think-aloud protocol throughout the design process to explore how the tools support reasoning.
188
+
189
+ ### 6.1 WxObs: Classic Design
190
+
191
+ Forecasters traditionally access weather station data through a spatially-linked web portal that redirects to external resources where data from individual weather stations are generally presented in text tables (Figure 2A). Forecasters use this information to synthesize patterns and distributions of various meteorological data such as precipitation totals, wind speeds, and temperatures. However, we found that their existing approach was challenged by the visual fragmentation and tediousness of accessing these disparate resources. We used a classic visual analytics linked and interactive multi-view design approach to streamline analysis and address this problem (Figure 2B).
192
+
193
+ We designed a conventional visual analytic display following Shneiderman's "Overview first, zoom and filter, then details on demand" visualization mantra [72]. Numerical aggregations of various weather stations' telemetry across time and space were displayed in a variety of visualizations to provide forecasters with an "overview" of the data. Multiple "levels of detail" and "scales of resolution" of the data were captured across the display. All visualizations were linked together interactively, supporting "brushing", "zooming", and "filtering" interactions across all corresponding displays. Individual marks visible in the spatial view allow tooltip interactions for "details-on-demand".
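+
+ The linked-view pattern described here can be sketched declaratively, for example with Altair (version 5); the station data, field names, and layout below are invented for illustration and are not the WxObs implementation.
+
+ ```python
+ import altair as alt
+ import numpy as np
+ import pandas as pd
+
+ rng = np.random.default_rng(2)
+ df = pd.DataFrame({
+     "station": np.repeat([f"S{i}" for i in range(5)], 24),
+     "lon": np.repeat(rng.uniform(-120, -116, 5), 24),
+     "lat": np.repeat(rng.uniform(50, 52, 5), 24),
+     "hour": np.tile(np.arange(24), 5),
+     "temp_c": rng.normal(-5, 3, 120),
+ })
+
+ brush = alt.selection_interval()  # brushing in the overview drives the detail view
+
+ overview = alt.Chart(df).mark_circle(size=100).encode(
+     x="lon:Q", y="lat:Q", tooltip="station:N"
+ ).add_params(brush)
+
+ detail = alt.Chart(df).mark_line().encode(
+     x="hour:Q", y="temp_c:Q", color="station:N"
+ ).transform_filter(brush)
+
+ chart = overview | detail  # render in a notebook, or chart.save("linked.html")
+ ```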
194
+
195
+ ![01963e60-2094-7243-a668-202f27e3257f_6_151_146_1497_425_0.jpg](images/01963e60-2094-7243-a668-202f27e3257f_6_151_146_1497_425_0.jpg)
196
+
197
+ Figure 3: (A) Existing InfoEx interface displaying avalanche observation reports in a table format. Individual reports are read and analyzed in a bottom-up manner. (B) AvObs visualization prototype displaying avalanche observation reports using glyphs placed in a variety of visualization contexts. Individual reports are visible allowing critical contextual details to be discerned to inform understanding when there is a multiplicity of interpretations.
198
+
199
+ ### 6.2 AvObs: Breaking with Classics
200
+
201
+ Our second prototype, the AvObs tool (Figure 3B), uses daily field-reported avalanche observations shared by avalanche safety operators on the InfoEx platform. These reports are generally viewed in a tabular format. When we started designing this tool with the avalanche forecasters, we used classic visualization principles based on effectiveness and expressiveness [59] and common conventions such as numerical aggregations. We found that even simple numerical aggregations like counts were problematic and inappropriate.
202
+
203
+ #### 6.2.1 Disaggregated Data
204
+
205
+ We discovered several issues necessitating disaggregated views of data. First, the data have ambiguous expressions where the same data value may correspond to multiple meanings depending on context and the communicative intent of the author. Second, data are gathered using a targeted sampling approach rather than a random sampling approach. As a result, the data-generating process is not uniform across the dataset, which challenges the methodological utility of aggregate measures.
206
+
207
+ #### 6.2.2 Glyphs for Ambiguous Data
208
+
209
+ Forecasters wanted to see individual reports while at the same time being able to discern general patterns in the data. To address this design constraint, we used glyphs with circle marks representing individual reports in a packed layout within a variety of visualization contexts. Circle marks were encoded using important structured data attributes within reports: the size of circles encoded typical avalanche size, and the color encoded the number of observed avalanches. Two color maps were used to distinguish numerical and categorical values, reflecting the need to preserve raw forms of data. Brushing and linking as well as tooltip interactions reveal contextual details, allowing forecasters to discern how to interpret individual reports. This glyph-based approach operates at multiple scales of resolution, allowing forecasters to visually aggregate data to discern patterns. Glyphs are known to support several visual aggregation operations such as summarizing data, detecting outliers, detecting trends, or segmenting data into clusters [78].
210
+
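+ The dual-colormap glyph encoding can be sketched as follows. This is a simplified Altair illustration rather than the AvObs code: the file avobs_reports.csv and its columns (date, elevation_band, typical_size, num_avalanches, num_class) are invented stand-ins for InfoEx fields, and glyphs are placed on a date-by-elevation grid instead of the prototype's packed layout.
+
+ ```python
+ import altair as alt
+ import pandas as pd
+
+ # Hypothetical InfoEx-style export: one row per avalanche observation report.
+ reports = pd.read_csv("avobs_reports.csv")
+
+ # One circle glyph per report: size encodes typical avalanche size (1-5);
+ # color encodes the number observed, which may be numeric or categorical.
+ base = alt.Chart(reports).mark_circle(stroke="white", strokeWidth=1).encode(
+     x="date:T", y="elevation_band:N",
+     size=alt.Size("typical_size:Q", scale=alt.Scale(range=[30, 400])),
+     tooltip=["date:T", "elevation_band:N", "typical_size:Q",
+              "num_avalanches:Q", "num_class:N"],
+ )
+
+ # Two color maps preserve the raw form of the data instead of coercing
+ # categorical counts such as "Several" or "Numerous" into numbers.
+ numeric = base.transform_filter("isValid(datum.num_avalanches)").encode(
+     color=alt.Color("num_avalanches:Q", scale=alt.Scale(scheme="blues")))
+ categorical = base.transform_filter("!isValid(datum.num_avalanches)").encode(
+     color=alt.Color("num_class:N", scale=alt.Scale(scheme="dark2")))
+
+ alt.layer(numeric, categorical).resolve_scale(
+     color="independent").save("avobs_glyphs.html")
+ ```
+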
211
+ #### 6.2.3 Desirable Difficulty
212
+
213
+ Early versions of the AvObs visualization prototype used bar charts that forecasters found difficult to interpret. They expressed concerns about visualizations giving them a false sense of precision and disarming the level of scrutiny forecasters usually apply to these data. We deliberately chose a visual design that we thought would break this sense of precision by introducing deliberate effort in decoding visualizations. We chose size and color as opposed to position, which is commonly thought to be decoded more accurately [15] and, depending on the task, to be more perceptually salient [78]. In addition, combining visual features such as size and color is more difficult than using either alone [26]. In this way, we are explicitly violating the principle of perceptual effectiveness to provoke more deliberate consideration of the data, grounded in the concept of "desirable difficulty".
214
+
215
+ The benefits of introducing cognitive difficulties have been discussed in the context of geovisualization and risk-based decisions [13] and are well-documented in studies of human learning [84]. In visualization research, desirable difficulty has been framed as a trade-off between the cognitive efficiency derived from pre-attentive processing and improved learning through more active processing of information [32]. By reducing the fluency with which patterns in visualizations are read, more active and attentive processing of these patterns can stimulate "self-explanations" [14], where inferences about missing information are generated to fill in gaps or prior knowledge is integrated with new information to account for potential discrepancies. We conjecture that our comparatively imprecise visualization design introduces visual complexity that induces additional effort, attention, and careful consideration of how perceived patterns should be interpreted. This is particularly important when ambiguity is a relevant consideration. By relying on quicker or more efficient information processing, one may be led to treat a visual display at face value and forgo the consideration of alternative interpretations that may apply.
216
+
217
+ Beyond factors related to low-level perceptual processing, we conjecture that our chosen design serves as an effective metaphor for the messy nature of such data. Researchers have discussed how precise, easy-to-read, and minimalist designs can impart a sense of authority or objectivity [36] that may not always be warranted. The rhetorical force of visualizations to convince viewers that a clean visualization is an objective and perfectly truthful representation of the world can be detrimental when considering the messiness and complexity of many real-world data. Our deliberately messy design may serve as a reminder, much as tables do, that such data require additional scrutiny and interrogation from multiple perspectives.
218
+
219
+ ## 7 STUDY 3: EXPLORING VISUALIZATIONS
220
+
221
+ ### 7.1 Procedure
222
+
223
+ The visualization prototypes were evaluated using retrospective interviews. The avalanche observations prototype used synthetic and historical data from past seasons and was never used operationally. The weather stations prototype used real-time data and was deployed operationally in the second half of the winter forecasting season. Seven forecasters had input on the design and development of the prototypes, while one simply commented on their experience using them.
224
+
225
+ At the end of the forecasting season, we conducted semi-structured interviews asking forecasters to reflect on the prototypes, how they addressed the challenges of data, how they affected their work, and what needs remained unfulfilled. Interviews were conducted remotely using video conference tools. We used our prototypes as artifacts in the interviews to prompt the forecasters' reflections. The interviews were video-recorded and transcribed. We summarize our key findings with quotes extracted from transcripts below.
226
+
227
+ ### 7.2 Findings
228
+
229
+ #### 7.2.1 Many Possible Interpretations
230
+
231
+ The operational use of the WxObs prototype highlighted how analysis of weather station telemetry presents issues of data uncertainty that give rise to ambiguity. Stations are sparsely distributed relative to the large spatial areas they are used to represent [48], and they are subject to a variety of sensor and transmission errors caused by environmental factors. Presently, there is no comprehensive automated quality assurance procedure that accounts for all possible errors in the data [57]. Diagnosing errors and determining how individual weather stations come to represent broader weather patterns are matters handled through the forecaster's judgment and interpretation. Forecasters normally use text tables to view each weather station's telemetry individually and progressively build up an understanding of weather patterns. This bottom-up approach stands in contrast to our top-down and overview-first visualization designs. Our visualization prototypes employed visualizations of aggregate measures, multiple granularities of data, interactions including brushing and filtering, and tooltips to view the details of individual weather station telemetry (Figure 2B). Our visualization prototype introduced a new and unfamiliar analytic approach that challenged forecasters. "I've always looked at the data in a pretty disaggregated way [...] What I'm having to learn is to kind of let go of that, needing to see the disaggregated view first so that I can aggregate the data in my brain so to speak..." (P12).
232
+
233
+ Similar challenges arose in the AvObs tool (Figure 3B). The human-reported avalanche observations follow reporting standards that, while structured, require a thorough understanding of context for interpretation. "...the InfoEx system and the standards... they kind of define the box that we all work in [...] how you use them... context drives that. You might use a certain approach... data that are obviously within that general framework or box that we've created, but you might not use them exactly the same way..." (P12). The same datum may be interpreted in a variety of ways, and displays need to reveal the details that let readers discern which interpretation is appropriate.
234
+
235
+ #### 7.2.2 The Need for Raw Data
236
+
237
+ Both data sources and prototype tools highlight a need for fluid interaction with underlying raw data. In the WxObs tool, many who are used to seeing raw data in a tabular format raised issues of trust as they could not apply the same visual scanning strategies to detect errors in data. "[I]t largely stems from the trustworthiness of the data [...addressing the use of spreadsheets] I like things in their raw format just for my own sake [...] my own stamp of approval. [...] I guess it's easy for my eyes to decode differences or irregularities. You should be able to visualize the data and get the same output. I don't know why. I just use tables..." (P1). Others also used raw data tables, but did so to scaffold the learning of data processing mechanics and the affordances of the visualizations as analytic tools: "[...] having that [raw data table] side by side with the visualization helped me to interpret: Okay, what's the visualization trying to tell me here?" (P4).
238
+
239
+ Similar issues surfaced with the AvObs tool. Early design iterations employing bar charts were seen as an impediment to sensemaking. Meanwhile, the glyph-based design was thought to hold more methodological utility as it more closely resembled and supported their mental model of how to analyze these data. "I like seeing the individual events more than the aggregate... It seems like full of flaws and limitations to kind of summarize all the [avalanche] activity with one number..." (P11). Despite our prototype using individual marks to represent each individual report, some forecasters still wanted the ability to see table-based displays. We speculate that this, similar to the WxObs tool, is due to issues of trust and learning how tacit analytic procedures associated with existing table-based views are or are not supported in the AvObs tool.
240
+
241
+ #### 7.2.3 Forecaster Reflections
242
+
243
+ Forecasters who adopted the WxObs visualizations more readily in their work found the tool provided them with a richer and deeper understanding of meteorological phenomena than traditional data tables alone. Drawing a historical comparison to the role of computers in meteorology, forecasters view visualizations as a stepping stone in a transitional phase towards more data-driven modeling. "[T]here was a transitional phase there where the computer was more an aid to help the forecaster make some initial assumptions... then the forecaster would tweak the forecast and actually write the forecast manually still... and now we're to the point where that really isn't the case..." (P12).
244
+
245
+ Meanwhile, forecasters reported feeling satisfied with how the AvObs visualization prototype represented and supported their analytical processes. "[The visualization] helps to smooth the data [...] and just at a glance [...] but it's not smoothing where I can't then [...] tease out nuances [...] I feel like it's really true to the data, which is a collection of individual points, kind of disparate points from across a forecasting region..." (P2).
246
+
247
+ ## 8 DISCUSSION
248
+
249
+ Throughout our 3 studies, we found that critical issues of ambiguity arise in three contexts: the data, the process of analysis, and the challenges of communicating both data and interpretation to coworkers and the general public. We unpack the role of ambiguity, the concomitant challenges, and the strategies used to deal with ambiguity in each of these contexts. Our findings highlight the need for more effective design interventions. We discuss each in turn.
250
+
251
+ ### 8.1 Sources of Ambiguity
252
+
253
+ #### 8.1.1 Data
254
+
255
+ Ambiguity emerges from data because they are incomplete simplifications of the complex phenomena they represent. Ambiguity may arise in the expression of data or in how representative data are of the phenomena of interest. Whether reasoning about multiple types of data uncertainty in weather station telemetry or what field-reported avalanche observations mean for avalanche conditions more broadly, forecasters use their knowledge, experience, and cues within the data to explore plausible explanations that account for what they see. Here, provoking alternative interpretations serves a productive purpose in analysis.
256
+
257
+ Forecasters try to capture relevant nuances of interpretation about specific data through daily discussions. Often this serves to disambiguate meaning by providing an optimal or appropriate framing for the data; for instance, the shared understanding that readings from weather stations at windy locations need adjustment when assessing broader wind patterns. We note that the forecasters' corpus of organizational knowledge is predominantly oral tradition exchanged in application to the immediate demands of work. Such a mechanism for knowledge exchange is vulnerable to information loss.
258
+
259
+ #### 8.1.2 Analytic Process
260
+
261
+ Ambiguity both serves a productive purpose in analytic processes and presents challenges for the management and navigation of analyses. Alternative interpretations are explored as part of sensemaking, often taking the form of alternative scenarios in risk analysis and risk prediction. Either through mental visualization or explicit sketches, forecasters provoke and imagine alternative scenarios to explore potential risks or explanations of data.
262
+
263
+ The judgments and analytic choices made during analysis represent alternative potential analytic paths through data. As forecasters weigh evidence and update their understanding of avalanche conditions, they iteratively adjust knowledge artifacts to match their understanding. However, the evidential reasoning process behind their judgments is often left uncaptured and may be difficult to reconstruct. This poses challenges for managing analysis as it may be unclear what work is completed and what remains to be done.
264
+
265
+ #### 8.1.3 Collaboration and Communication
266
+
267
+ Forecasters each hold a unique perspective and interpretive lens, presenting a form of ambiguity. Forecasters use strategies like regular discussions or hand-off notes to exchange knowledge and disambiguate how to interpret each other's assessments by capturing their reasoning processes. However, given the additional effort this task requires and the difficulty of anticipating what may be relevant, such information is often not captured. This leaves forecasters having to speculate about their colleagues' reasoning processes.
268
+
269
+ Forecasters translate their own complex understanding of avalanche conditions into simple terms to ensure that members of the public, whether novice or expert, can apply appropriate risk-management strategies. In doing so, forecasters mitigate the risks of potential scenarios the public might encounter or the confusion that might result from overly technical communications. Quite often, this means reconciling alternatives. For instance, in a situation where two avalanche problem types require the same risk mitigation strategies, forecasters will use one of them and supplement any further guidance that might be necessary using plain and actionable language. The myriad ways to communicate hazards present their own form of ambiguity. Moreover, individual forecasters differ in how they judge avalanche hazards and apply assessments [44, 77].
270
+
271
+ ### 8.2 Design Implications
272
+
273
+ #### 8.2.1 When to Break the Rules
274
+
275
+ Conventional visualization design principles value precision-based visual variable effectiveness rankings as a basis for design decisions. However, as others have highlighted [6], this is an oversimplification of how visualizations are used. Visual pattern detection and visual thinking extend far beyond the precise extraction of singular values, and more importantly, displays that optimize for precision may have detrimental effects on other types of operations. When data demand close scrutiny and admit alternative interpretations, overly precise displays can give a false sense of precision and undermine the perceived need for further scrutiny.
276
+
277
+ Our research has also highlighted that while the traditional 'overview first' mantra certainly has value in this application, it leaves a need for more fluid access to and control of underlying raw data without overly onerous interactions. The properties of these data, like their ambiguous expressions or the varying data-generating processes, challenge conventional visualization approaches, which can hide critical details that cue appropriate framing for data. While our designs shifted some focus to these cues, the need for bottom-up raw-data-driven processes was still highlighted in the feedback we received.
278
+
279
+ When dealing with heterogeneous and ambiguous data, designers should consider design approaches that best support the sensemaking processes involved rather than relying on conventional visualization mantras with a one-size-fits-all approach. This reflects a broader need for improved guidance on how the affordances of visualization design can support the relevant cognitive processes needed for specific problem solving and sensemaking tasks. To do so, a characterization of what tasks can be supported by visualizations needs to move beyond what can be measured in lab experiments (e.g. low-level perceptual processes or decoding statistical properties of data). We suggest that a "macrocognitive" lens [42], one that values ecological validity and the complexities involved rather than strict control of variables, may help researchers identify such tasks.
280
+
281
+ #### 8.2.2 Desirable Difficulty
282
+
283
+ Introducing cognitive difficulties in the context of visualization is thought to improve the memorability of insights [32]. Our research suggests that enabling or encouraging sensemaking around ambiguity is another beneficial outcome. There may be other benefits of introducing difficulties in visualization that remain to be identified.
284
+
285
+ #### 8.2.3 Access to Raw Data Supports Sensemaking
286
+
287
+ Through our design study, we learned that visual displays of heterogeneous and ambiguous data should aim to reveal the relevant contextual details necessary to discern appropriate interpretations. Abstractions like numerical aggregations can occlude such details and impede sensemaking. Instead, we recommend designs such as unit visualizations that support visual aggregations or those showing the relevant granularity of data alongside numerical aggregations (in tables, for instance). This allows alternative interpretations to be provoked when trying to understand how data come to represent a phenomenon of interest. In addition, access to raw data can support the process of learning and adopting new analytic tools by revealing underlying data processing mechanics [3]. Hasty transitions to new analytic systems risk the loss of a host of implicit procedural knowledge that may not be supported by new approaches. This can cause issues of trust. Showing raw data alongside more abstracted views of the same data can aid comprehension of new tools and allow users to evaluate their affordances.
288
+
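+ As a minimal sketch of this recommendation, the snippet below pairs a unit view (one mark per report, open to visual aggregation) with the corresponding numerical aggregation so the count never fully hides the records behind it. The data file and columns, including operation, are again hypothetical.
+
+ ```python
+ import altair as alt
+ import pandas as pd
+
+ reports = pd.read_csv("avobs_reports.csv")  # hypothetical, as above
+
+ # Unit view: every report remains visible and individually inspectable.
+ units = alt.Chart(reports).mark_circle(opacity=0.6).encode(
+     x="date:T", y="typical_size:Q",
+     tooltip=["date:T", "operation:N", "typical_size:Q"],
+ )
+
+ # The numerical aggregation is shown alongside, never instead of, the units.
+ totals = alt.Chart(reports).mark_bar().encode(
+     x="date:T", y="count():Q",
+ )
+
+ (units & totals).save("units_plus_aggregate.html")
+ ```
+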
289
+ #### 8.2.4 Capture Ambiguities Explicitly
290
+
291
+ We argue that design solutions need to extend beyond the representation of existing data. Managing an analysis with many contingencies and nuances of interpretation is difficult and is vulnerable to information loss, particularly when analysis is shared. To better serve the analysis at hand and to improve collaborative analysis, we suggest that the nuances of data interpretation should be captured explicitly during analysis. This would serve to characterize ambiguities through the externalization of relevant knowledge and the enrichment of data. We must take care that these interventions remain lightweight and contextually anchored to avoid undue effort. We draw inspiration from the concept of "active reading", where knowledge generated during the process of reading is captured with external representations such as computationally-enabled markup and annotations [56]. Researchers have demonstrated that such techniques can be extended to analysis using visualizations [68, 80]. Annotations are a general-purpose technique that has been applied as a strategy to deal with ambiguity [9] as well as implicit errors [55]. This suggests annotations could be more specifically tailored and extended to address the challenges of ambiguity. Other forms of markup [3], including annotations, employed for the nuanced interpretation of data are often embedded in the ubiquitous spreadsheet, perhaps the most widespread analytic tool. Tables are flexible and allow direct interaction with data, which might explain why users often turn to them to support complex sensemaking. The affordances of tables are well-suited to deal with the challenges of ambiguity and may serve to guide the design of visual analytics systems in applications dealing with such challenges.
292
+
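+ One way to make such capture concrete is a small annotation record bound to the underlying datum. The schema below is our illustrative sketch, not a design from the prototypes, and every field name is an assumption.
+
+ ```python
+ from dataclasses import dataclass, field
+ from datetime import datetime, timezone
+ from typing import Optional
+
+ @dataclass
+ class Annotation:
+     """Lightweight, contextually anchored markup on a single datum."""
+     record_id: str              # key of the annotated report or reading
+     author: str
+     tags: list[str]             # e.g. ["key-evidence", "wind-effect?"]
+     note: Optional[str] = None  # free text is optional; a tag may suffice
+     created: datetime = field(
+         default_factory=lambda: datetime.now(timezone.utc))
+
+ # A forecaster flags an ambiguous report without fully articulating why.
+ flag = Annotation(record_id="rpt-2031", author="P6", tags=["ambiguous"])
+ ```
+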
293
+ #### 8.2.5 Externalizations Can Be Vague
294
+
295
+ Ambiguity is often the start of a sensemaking process. At such early stages, understanding may be inchoate and difficult to articulate, calling into question the utility of highly detailed capture mechanisms such as annotations. In collaborative analysis, it is difficult to anticipate the needs of others. Collaborators might only form an intuition about a problem that may be important for others to be aware of [40]. This is because the relevance of any such problem is context-sensitive [82]. Standardized protocols for sharing analysis often fail because designers of such protocols cannot adequately account for and predict all the unique information or complexity that might arise [66]. These considerations are important whether collaboration is with others or oneself at a future point in time.
296
+
297
+ There may be simpler capture mechanisms that can address the difficulties of articulating complexity. Passive capture mechanisms such as interaction logs provide one lightweight and context-sensitive solution. Interaction logs have been used to infer reasoning processes [19] and are frequently discussed as approaches for documenting analytic provenance [83]. Interaction logs, however, only show behaviours and are indirect indicators of reasoning processes. User-controlled markup may still be necessary to capture what is relevant. Researchers in clinical healthcare settings have supplemented hand-off protocols with vague metrics like gut feelings about a patient, time spent with a patient, or how medical equipment in a room has been moved around, taking advantage of practitioners' shared work environment and culture [58]. We can take inspiration from this work. To capitalize on the shared digital working environment, simple markup such as tagging of data or representations may be all that is necessary to signify ambiguity. Tags may signify important pieces of evidence, how evidence is weighed and relates to assessments, or may simply serve to raise awareness of ambiguity and prevent it from being lost and risking potential misinterpretation. Forecasters can use their shared working environment to maintain context and capture ambiguities without having to precisely articulate them. Awareness of uncertainty is critical for ensuring trust in findings [69] and we argue the same applies to awareness of ambiguity.
298
+
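+ A passive JSON Lines event log combined with one-click tags might look like the sketch below; the file name, event vocabulary, and identifiers are all invented for illustration.
+
+ ```python
+ import json
+ import time
+ from pathlib import Path
+
+ LOG = Path("session_log.jsonl")  # hypothetical shared session log
+
+ def log_event(kind: str, **details) -> None:
+     """Append one event as a line of JSON (cheap, append-only, replayable)."""
+     event = {"t": time.time(), "kind": kind, **details}
+     with LOG.open("a") as f:
+         f.write(json.dumps(event) + "\n")
+
+ # Passive capture: the tool records interactions as the forecaster works.
+ log_event("brush", view="wx_overview", stations=["WS-04", "WS-11"])
+ log_event("tooltip", view="avobs_glyphs", record_id="rpt-2031")
+
+ # Active but minimal: a one-click tag signals ambiguity without requiring
+ # the forecaster to articulate an inchoate hunch.
+ log_event("tag", record_id="rpt-2031", tag="ambiguous", author="P6")
+ ```
+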
299
+ #### 8.2.6 Data Enrichment Requires Metadata Management
300
+
301
+ The use of more explicit data enrichment and ambiguity capture raises the question of how long captured data should persist as part of the working environment. Such markup may only be relevant for one working session and one individual. It might be relevant across several working days and for multiple collaborators. Or, it might take a more permanent form in a corpus of organizational knowledge. Designers should consider ways to control or account for the persistence of captured data.
302
+
303
+ Metadata created during analysis within a visual analytics system are bound to a representation rather than to the underlying database. This raises questions about how such metadata may be queried, retrieved, or reused in contexts outside of the one they were created in and originally bound to. Designers need to consider how metadata can be reused and translated across analytic contexts.
304
+
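+ One illustrative way to make persistence an explicit property is to attach a scope to each piece of markup and bind it to both the data key and the view it was created in. The sketch below is speculative, not a deployed design.
+
+ ```python
+ from dataclasses import dataclass
+ from enum import IntEnum
+
+ class Scope(IntEnum):
+     SESSION = 1  # discarded at the end of a shift
+     SEASON = 2   # carried across hand-offs for the winter
+     ORG = 3      # promoted into lasting organizational knowledge
+
+ @dataclass
+ class Markup:
+     data_key: str  # binding to the record lets markup survive re-plotting
+     view_id: str   # remembering the view preserves the original context
+     body: str
+     scope: Scope
+
+ def visible(items: list[Markup], horizon: Scope) -> list[Markup]:
+     """Keep only markup at least as durable as the requested horizon."""
+     return [m for m in items if m.scope >= horizon]
+
+ notes = [Markup("rpt-2031", "avobs_glyphs", "wind effect?", Scope.SESSION),
+          Markup("ws-04", "wx_overview", "reads high when windy", Scope.SEASON)]
+ print(visible(notes, Scope.SEASON))  # only the season-scoped note remains
+ ```
+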
305
+ #### 8.2.7 Unstructured Metadata Require Schematization
306
+
307
+ Ad-hoc data enrichment and ambiguity capture pose practical challenges when scaling. Annotations tend to produce large amounts of unstructured data that can be difficult to reuse. Such data require a schematization mechanism to make them tractable for future reuse. Mechanisms for eliciting such data may be structured ahead of time, for example through survey-like questionnaires. Metadata gathered at the time of elicitation, such as timestamps or application states [53], might also provide some structure. Alternatively, natural language processing approaches such as ontology learning may lend themselves to schematizing such metadata. However, we stress that the use of such algorithms should maintain transparency and give supervisory control to users. As we have learned in our design study, even simple statistical abstractions can obfuscate details paramount to reasoning about ambiguity. Further, highly complex technological solutions are more vulnerable to failure [82]. Consequently, the use of automation or algorithms should be carefully designed to make data processing transparent in support of human comprehension.
308
+
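+ As a toy sketch of elicitation structured ahead of time, the snippet below constrains part of a note into reusable fields and automatically attaches machine-gathered metadata such as a timestamp and application state; the prompt fields and state keys are invented.
+
+ ```python
+ import json
+ import time
+
+ # Survey-like prompts constrain part of the note into reusable fields,
+ # while still leaving room for free text.
+ FORM_FIELDS = {"evidence_weight": ("weak", "moderate", "strong"),
+                "interpretation": None}  # None marks a free-text field
+
+ def capture(note: dict, app_state: dict) -> str:
+     """Wrap an elicited note with automatically gathered structure."""
+     assert set(note) <= set(FORM_FIELDS), "unknown field in note"
+     record = {
+         "captured_at": time.time(),
+         "app_state": app_state,  # e.g. active view, filters, zoom level
+         "note": note,
+     }
+     return json.dumps(record)
+
+ print(capture(
+     {"evidence_weight": "weak", "interpretation": "wind effect suspected"},
+     {"view": "wxobs", "filter": {"region": "North Columbia"}},
+ ))
+ ```
+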
309
+ #### 8.2.8 Baby and the Bathwater
310
+
311
+ Our experiences developing visualization prototypes for avalanche forecasters have highlighted the costs associated with introducing new analytic tools. The forecasters have developed visual reasoning strategies for interrogating data in table formats. Many of these procedures and processes are likely tacit, simply natural habits that have developed over time. When introducing new tools, even basic visualizations, there is a transitional period. A process of evaluating which capabilities are gained or supported, and which might not be supported, needs to occur in practice. Until a thorough understanding of how a new analytic tool fits within the broader sensemaking toolkit has formed, issues of trust will persist.
312
+
313
+ Computationally-enabled analytic tools are becoming ever more sophisticated and complex. While there are real benefits to such powerful tools, designers need to consider the learning and unlearning of procedures associated with the adoption of new approaches. This is a common concern in the implementation of new systems, yet one that is often forgotten and deserves more attention. It is particularly important in applications involving risk-based decision-making and time constraints, where there are severe consequences for misinformed decisions.
314
+
315
+ ## 9 LIMITATIONS
316
+
317
+ We note that while our first study had additional coders to test reliability, data from subsequent studies were analyzed by one coder only. Our comprehension of the challenges that forecasters face was incorporated into the prototypes within our design study, and the feedback forecasters provided throughout our close collaboration served as a form of validation of our understanding. This presents obvious limitations in the reliability of our findings. However, such challenges are common in long-term, qualitative, and ethnographically inspired research aimed at deep domain understanding.
318
+
319
+ ## 10 CONCLUSION
320
+
321
+ We have presented findings from a set of qualitative studies with public avalanche forecasters. Our research highlights that ambiguity presents challenges and unmet needs in critical and complex sensemaking. We propose a formative characterization of ambiguity across three levels of abstraction in analysis: data, analytic process, and collaboration and communication. The key lesson of our research is that ambiguity should be explicitly considered and designed for. While even simple visualization design choices can serve to enable or impede sensemaking around ambiguity, we argue for more targeted and explicit approaches. Our findings may inform future research and the design of tools in other complex risk-management domains such as extreme weather forecasting or the forecasting of other natural disasters. This work represents a preliminary attempt to characterize ambiguity and define a design space for visual analytics, but many questions remain unexplored. Further study is necessary to evaluate our existing and proposed design solutions to more rigorously understand their impact and how they address the challenges of ambiguity.
322
+
323
+ ## ACKNOWLEDGMENTS
324
+
325
+ Thanks to Avalanche Canada, the Vancouver Institute for Visual Analytics (VIVA), the Big Data Initiative at Simon Fraser University (SFU), the SFU Avalanche Research Program, and our reviewers for their thoughtful feedback. This work was supported by Mitacs through the Mitacs Accelerate program and the Natural Sciences and Engineering Research Council Industry Research Chair in Avalanche Risk Management (grant no. IRC/5155322016), with industry support from Canadian Pacific Railway, HeliCat Canada, Canadian Avalanche Association, and Mike Wiegele Helicopter Skiing.
326
+
327
+ ## REFERENCES
328
+
329
+ [1] L. Adams. A systems approach to human factors and expert decision-making within the Canadian avalanche phenomena. 284. Publisher: Citeseer.
330
+
331
+ [2] Canadian Avalanche Association and others. Observation guidelines and recording standards for weather, snowpack and avalanches. [Revelstoke, BC]: Canadian Avalanche Assoc.
332
+
333
+ [3] L. Bartram, M. Correll, and M. Tory. Untidy data: The unreasonable effectiveness of tables.
334
+
335
+ [4] L. R. Beach. Narrative thinking and decision making: How the stories we tell ourselves shape our decisions, and vice versa.
336
+
337
+ [5] T. Bentley, L. Johnston, and K. von Baggo. Evaluation using cued-recall debrief to elicit information about a user's affective experiences. In Proceedings of the 17th Australia Conference on Computer-Human Interaction: Citizens Online: Considerations for Today and the Future, pp. 1-10.
338
+
339
+ [6] E. Bertini, M. Correll, and S. Franconeri. Why shouldn't all charts be scatter plots? Beyond precision-driven visualizations. In 2020 IEEE Visualization Conference (VIS), pp. 206-210. IEEE. doi: 10.1109/VIS47514.2020.00048
340
+
341
+ [7] K. J. Beven, S. Almeida, W. P. Aspinall, P. D. Bates, S. Blazkova, E. Borgomeo, K. Goda, J. W. Hall, J. C. Phillips, M. Simpson, P. J. Smith, D. B. Stephenson, T. Wagener, M. Watson, and K. L. Wilkins. Epistemic uncertainties and natural hazard risk assessment. 1. a review of different natural hazard areas. doi: 10.5194/nhess-2017-250
342
+
343
+ [8] A. M. Bisantz, D. Cao, M. Jenkins, P. R. Pennathur, M. Farry, E. Roth, S. S. Potter, and J. Pfautz. Comparing uncertainty visualizations for a dynamic decision-making task. 5(3):277-293. Publisher: SAGE Publications Sage CA: Los Angeles, CA.
344
+
345
+ [9] N. Boukhelifa, M.-E. Perrin, S. Huron, and J. Eagan. How data workers cope with uncertainty: A task characterisation study. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 3645-3656.
346
+
347
+ [10] V. Braun and V. Clarke. Thematic analysis. Publisher: American Psychological Association.
348
+
349
+ [11] K. Brodlie, R. A. Osorio, and A. Lopes. A review of uncertainty in data visualization. pp. 81-109. Publisher: Springer.
350
+
351
+ [12] A. D. Brown, P. Stacey, and J. Nandhakumar. Making sense of sense-making narratives. 61(8):1035-1062. Publisher: Sage Publications Sage UK: London, England.
352
+
353
+ [13] L. Cheong, C. Kinkeldey, I. Burfurd, S. Bleisch, and M. Duckham. Evaluating the impact of visualization of risk upon emergency route-planning. 34(5):1022-1050. Publisher: Taylor & Francis.
354
+
355
+ [14] M. T. Chi. Self-explaining expository texts: The dual processes of generating inferences and repairing mental models. In Advances in instructional psychology, pp. 161-238. Routledge.
356
+
357
+ [15] W. S. Cleveland and R. McGill. Graphical perception: Theory, experimentation, and application to the development of graphical methods. 79(387):531-554. Publisher: Taylor & Francis. doi: 10.1080/01621459.1984.10478080
358
+
359
+ [16] K. A. Cook and J. J. Thomas. Illuminating the path: The research and development agenda for visual analytics.
360
+
361
+ [17] M. Correll and M. Gleicher. Error bars considered harmful: Exploring alternate encodings for mean and error. 20(12):2142-2151. Publisher: IEEE.
362
+
363
+ [18] G. Currie and A. D. Brown. A narratological approach to understanding processes of organizing in a UK hospital. 56(5):563-586. Publisher: Sage Publications.
366
+
367
+ [19] W. Dou, D. H. Jeong, F. Stukes, W. Ribarsky, H. R. Lipford, and R. Chang. Recovering reasoning processes from user interactions. 29(3):52-61. Publisher: IEEE.
368
+
369
+ [20] M. Fernandes, L. Walls, S. Munson, J. Hullman, and M. Kay. Uncertainty displays using quantile dotplots or cdfs improve transit decision-making. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-12.
370
+
371
+ [21] H. Finn. Examining risk literacy in a complex decision-making environment: A study of public avalanche bulletins.
372
+
373
+ [22] T. Gao, M. Dontcheva, E. Adar, Z. Liu, and K. G. Karahalios. DataTone: Managing ambiguity in natural language interfaces for data visualization. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology, UIST '15, pp. 489-500. Association for Computing Machinery. doi: 10.1145/2807442.2807478
374
+
375
+ [23] C. Geisler and J. Swarts. Coding streams of language: Techniques for the systematic coding of text, talk, and other verbal data. WAC Clearinghouse Ft. Collins, CO.
376
+
377
+ [24] F. Grabowski and D. Strzalka. Simple, complicated and complex systems-the brief introduction. In 2008 Conference On Human System Interactions, pp. 570-573. IEEE.
378
+
379
+ [25] P. Haegeli, J. Obad, B. Harrison, B. Murray, J. Engblom, and J. Neufeld. InfoEx™ 3.0 - advancing the data analysis capabilities of Canada's diverse avalanche community. In Proceedings of the International Snow Science Workshop, vol. 29, pp. 910-917.
380
+
381
+ [26] C. Healey and J. Enns. Attention and visual memory in visualization and computer graphics. 18(7):1170-1188. Conference Name: IEEE Transactions on Visualization and Computer Graphics. doi: 10.1109/TVCG.2011.127
382
+
383
+ [27] J. Heer and M. Agrawala. Design considerations for collaborative visual analytics. 7(1):49-62. Publisher: SAGE Publications Sage UK: London, England.
384
+
385
+ [28] B. Hilligoss and S. D. Moffatt-Bruce. The limits of checklists: handoff and narrative thinking. 23(7):528-533. Publisher: BMJ Publishing Group Ltd.
386
+
387
+ [29] R. R. Hoffman, D. S. LaDue, H. M. Mogil, P. J. Roebber, and J. G. Trafton. Minding the weather: How expert forecasters think. MIT Press.
388
+
389
+ [30] E. Hollnagel and D. D. Woods. Joint cognitive systems: Foundations of cognitive systems engineering. CRC press.
390
+
391
+ [31] E. Hoque, V. Setlur, M. Tory, and I. Dykeman. Applying pragmatics principles for interaction with visual analytics. 24(1):309-318. Conference Name: IEEE Transactions on Visualization and Computer Graphics. doi: 10.1109/TVCG.2017.2744684
392
+
393
+ [32] J. Hullman, E. Adar, and P. Shah. Benefitting InfoVis with visual difficulties. 17(12):2213-2222. Conference Name: IEEE Transactions on Visualization and Computer Graphics. doi: 10.1109/TVCG.2011.175
394
+
395
+ [33] J. Hullman, X. Qiao, M. Correll, A. Kale, and M. Kay. In pursuit of error: A survey of uncertainty visualization evaluation. 25(1):903-913. Publisher: IEEE.
396
+
397
+ [34] J. Hullman, P. Resnick, and E. Adar. Hypothetical outcome plots outperform error bars and violin plots for inferences about reliability of variable ordering. 10(11):e0142444. Publisher: Public Library of Science San Francisco, CA USA.
398
+
399
+ [35] A. Kale, M. Kay, and J. Hullman. Decision-making under uncertainty in research synthesis: Designing for the garden of forking paths. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-14.
400
+
401
+ [36] H. Kennedy, R. L. Hill, G. Aiello, and W. Allen. The work that visualisation conventions do. 19(6):715-735. Publisher: Routledge. doi: 10.1080/1369118X.2016.1153126
402
+
403
+ [37] D. Kirsh. Thinking with external representations. 25(4):441-454. Publisher: Springer.
404
+
405
+ [38] G. Klein and B. W. Crandall. The role of mental simulation in problem solving and decision making. In Local applications of the ecological approach to human-machine systems, pp. 324-358. CRC Press.
408
+
409
+ [39] G. Klein, J. K. Phillips, E. L. Rall, and D. A. Peluso. A data-frame theory of sensemaking. In Expertise out of context, pp. 118-160. Psychology Press.
410
+
411
+ [40] G. Klein, R. Pliske, B. Crandall, and D. D. Woods. Problem detection. 7(1):14-28. Publisher: Springer.
412
+
413
+ [41] G. Klein, R. M. Pliske, B. Crandall, and D. Woods. Features of problem detection. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 43, pp. 133-137. SAGE Publications Sage CA: Los Angeles, CA. Issue: 3.
414
+
415
+ [42] G. Klein, K. Ross, B. Moon, D. Klein, R. Hoffman, and E. Hollnagel. Macrocognition. 18(3):81-85. doi: 10.1109/MIS.2003.1200735
416
+
417
+ [43] G. Klein, D. Snowden, and C. L. Pin. Anticipatory thinking. In Informed by Knowledge, pp. 249-260. Psychology Press.
418
+
419
+ [44] B. Lazar, S. Trautmann, M. Cooperstein, E. Greene, and K. Birkeland. North American avalanche danger scale: Do backcountry forecasters apply it consistently? In Proceedings ISSW, pp. 457-465. Citeseer.
420
+
421
+ [45] H. Lin, D. Akbaba, M. Meyer, and A. Lex. Data hunches: Incorporating personal knowledge into visualizations.
422
+
423
+ [46] J. Liu, N. Boukhelifa, and J. R. Eagan. Understanding the role of alternatives in data analysis practices. 26(1):66-76. Publisher: IEEE.
424
+
425
+ [47] L. Liu, A. P. Boone, I. T. Ruginski, L. Padilla, M. Hegarty, S. H. Creem-Regehr, W. B. Thompson, C. Yuksel, and D. H. House. Uncertainty visualization by representative sampling from prediction ensembles. 23(9):2165-2178. Publisher: IEEE.
426
+
427
+ [48] J. Lundquist, M. Hughes, E. Gutmann, and S. Kapnick. Our skill in modeling mountain rain and snow is bypassing the skill of our observational networks. 100(12):2473-2490.
428
+
429
+ [49] A. M. MacEachren. Visual analytics and uncertainty: It's not about the data. Publisher: The Eurographics Association.
430
+
431
+ [50] A. M. MacEachren, A. Robinson, S. Hopper, S. Gardner, R. Murray, M. Gahegan, and E. Hetzler. Visualizing geospatial information uncertainty: What we know and what we need to know. 32(3):139-160. Publisher: Taylor & Francis.
432
+
433
+ [51] L. Maguire and J. Percival. Sensemaking in the snow: Exploring the cognitive work in avalanche forecasting. In International Snow Science Workshop, Innsbruck, Austria.
434
+
435
+ [52] S. Maitlis and M. Christianson. Sensemaking in organizations: Taking stock and moving forward. 8(1):57-125. Publisher: Routledge.
436
+
437
+ [53] A. Mathisen, T. Horak, C. N. Klokmose, K. Grønbæk, and N. Elmqvist. InsideInsights: Integrating data-driven reporting in collaborative visual analytics. In Computer Graphics Forum, vol. 38, pp. 649-661. Wiley Online Library. Issue: 3.
438
+
439
+ [54] D. McClung. The elements of applied avalanche forecasting, part i: The human issues. 26(2):111-129. Publisher: Springer.
440
+
441
+ [55] N. McCurdy, J. Gerdes, and M. Meyer. A framework for externalizing implicit error using visualization. 25(1):925-935. Publisher: IEEE.
442
+
443
+ [56] H. Mehta, A. Bradley, M. Hancock, and C. Collins. Metatation: Annotation as implicit interaction to bridge close and distant reading. 24(5):1-41. Publisher: ACM New York, NY, USA.
444
+
445
+ [57] E. Mekis, N. Donaldson, J. Reid, A. Zucconi, J. Hoover, Q. Li, R. Nitu, and S. Melo. An overview of surface-based precipitation observations at environment and climate change canada. 56(2):71-95. Publisher: Taylor & Francis.
446
+
447
+ [58] F. Mueller, S. Kethers, L. Alem, and R. Wilkinson. From the certainty of information transfer to the ambiguity of intuition. In Proceedings of the 18th Australia conference on Computer-Human Interaction: Design: Activities, Artefacts and Environments, pp. 63-70.
448
+
449
+ [59] T. Munzner. Visualization analysis and design. CRC press.
450
+
451
+ [60] S. Nowak, L. Bartram, and P. Haegeli. Designing for ambiguity: Visual analytics in avalanche forecasting. In 2020 IEEE Visualization Conference (VIS), pp. 81-85. IEEE.
452
+
453
+ [61] S. Nowak, L. Bartram, and T. Schiphorst. A micro-phenomenological lens for evaluating narrative visualization. In 2018 IEEE Evaluation and Beyond-Methodological Approaches for Visualization (BELIV), pp. 11-18. IEEE.
454
+
455
+ [62] M. M. Omodei and J. McLennan. Studying complex decision making in natural settings: using a head-mounted video camera to study competitive orienteering. 79(3):1411-1425. Publisher: SAGE Publications Sage CA: Los Angeles, CA.
460
+
461
+ [63] L. Padilla, M. Kay, and J. Hullman. Uncertainty visualization. Publisher: PsyArXiv.
462
+
463
+ [64] L. M. Padilla, M. Powell, M. Kay, and J. Hullman. Uncertain about uncertainty: How qualitative expressions of forecaster confidence impact decision-making with uncertainty visualizations. p. 3747. Publisher: Frontiers.
464
+
465
+ [65] G. Panagiotidou, R. Vandam, J. Poblome, and A. Vande Moere. Implicit error, uncertainty and confidence in visualization: an archaeological case study. Publisher: Institute of Electrical and Electronics Engineers.
466
+
467
+ [66] E. Patterson. Structuring flexibility: the potential good, bad and ugly in standardisation of handovers. 17(1):4-5. Publisher: BMJ Publishing Group Ltd.
468
+
469
+ [67] R. M. Pliske, B. Crandall, and G. Klein. Competence in weather forecasting. 40:68. Publisher: Cambridge University Press Cambridge, UK.
470
+
471
+ [68] H. Romat, N. Henry Riche, K. Hinckley, B. Lee, C. Appert, E. Pietriga, and C. Collins. ActiveInk: (th) inking with data. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-13.
472
+
473
+ [69] D. Sacha, H. Senaratne, B. C. Kwon, G. Ellis, and D. A. Keim. The role of uncertainty, awareness, and trust in visual analytics. 22(1):240-249. Publisher: IEEE.
474
+
475
+ [70] J. Sanyal, S. Zhang, G. Bhattacharya, P. Amburn, and R. Moorhead. A user study to compare four uncertainty visualization methods for 1d and 2d datasets. 15(6):1209-1218. Publisher: IEEE.
476
+
477
+ [71] N. Sharma. Sensemaking handoff: When and how? 45(1):1-12. Publisher: Wiley Online Library.
478
+
479
+ [72] B. Shneiderman. The eyes have it: A task by data type taxonomy for information visualizations. In Proceedings of the 1996 IEEE Symposium on Visual Languages, VL '96, p. 336. IEEE Computer Society.
480
+
481
+ [73] M. Skeels, B. Lee, G. Smith, and G. G. Robertson. Revealing uncertainty for information visualization. 9(1):70-81. Publisher: SAGE Publications Sage UK: London, England.
482
+
483
+ [74] P. J. Smith and R. R. Hoffman. Cognitive Systems Engineering: The Future for a Changing World. Crc Press.
484
+
485
+ [75] A. St Clair. Exploring the effectiveness of avalanche risk communication: a qualitative study of avalanche bulletin use among backcountry recreationists.
486
+
487
+ [76] G. Statham, P. Haegeli, E. Greene, K. Birkeland, C. Israelson, B. Trem-per, C. Stethem, B. McMahon, B. White, and J. Kelly. A conceptual model of avalanche hazard. 90(2):663-691. Publisher: Springer.
488
+
489
+ [77] G. Statham, S. Holeczi, and B. Shandro. Consistency and accuracy of public avalanche forecasts in western Canada. In Proceedings ISSW, pp. 1491-1496.
490
+
491
+ [78] D. A. Szafir, S. Haroz, M. Gleicher, and S. Franconeri. Four types of ensemble coding in data visualizations. 16(5):11-11. Publisher: The Association for Research in Vision and Ophthalmology.
492
+
493
+ [79] J. Thomson, E. Hetzler, A. MacEachren, M. Gahegan, and M. Pavel. A typology for visualizing uncertainty. In Visualization and Data Analysis 2005, vol. 5669, pp. 146-157. International Society for Optics and Photonics.
494
+
495
+ [80] J. Walny, S. Huron, C. Perin, T. Wun, R. Pusch, and S. Carpendale. Active reading of visualizations. 24(1):770-780. Publisher: IEEE.
496
+
497
+ [81] K. E. Weick. Sensemaking in organizations, vol. 3. Sage.
498
+
499
+ [82] D. Woods. Escape from data overload.
500
+
501
+ [83] K. Xu, S. Attfield, T. Jankun-Kelly, A. Wheat, P. H. Nguyen, and N. Selvaraj. Analytic provenance for sensemaking: A research agenda. 35(3):56-64. Publisher: IEEE.
502
+
503
+ [84] C. L. Yue, A. D. Castel, and R. A. Bjork. When disfluency is-and is not-a desirable difficulty: The influence of typeface clarity on metacognitive judgments and memory. 41(2):229-241. Publisher: Springer.
504
+
505
+ [85] T. Zuk and S. Carpendale. Visualization of uncertainty and reasoning. In International symposium on smart graphics, pp. 164-177. Springer.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/SHQU_yejZFv/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,383 @@
1
+ § I'M NOT SURE: DESIGNING FOR AMBIGUITY IN VISUAL ANALYTICS
2
+
3
+ Stan Nowak*
4
+
5
+ School of Interactive Arts and Technology
8
+
9
+ Simon Fraser University
10
+
11
+ Lyn Bartram†
12
+
13
+ School of Interactive Arts and Technology
16
+
17
+ Simon Fraser University
18
+
19
+ § ABSTRACT
20
+
21
+ Ambiguity, the state in which alternative interpretations are plausible or even desirable, is an inexorable part of complex sensemaking. Its challenges are compounded when analysis involves risk, is constrained, and needs to be shared with others. We report on several studies with avalanche forecasters that illuminated these challenges and identified how visualization designs can better support ambiguity. Like many complex analysis domains, avalanche forecasting relies on highly heterogeneous and incomplete data whose relevance and meaning are context-sensitive, dependent on the knowledge and experiences of the observer, and mediated by the complexities of communication and collaboration. In this paper, we characterize challenges of ambiguous interpretation emerging from data, analytic processes, and collaboration and communication and describe several management strategies for ambiguity. Our findings suggest several visual analytics design approaches that explicitly address ambiguity in complex sensemaking around risk.
22
+
23
+ Index Terms: Human-centered computing-Visualization-Visualization theory, concepts and paradigms
24
+
25
+ § 1 INTRODUCTION
26
+
27
+ Our work addresses the challenges of complex and collaborative sensemaking in risk management: in particular, the domain of avalanche forecasting responsible for analysis and prediction of snow avalanches endangering human life and infrastructure. As is the case with explaining and predicting other hazards (such as weather or natural disasters), avalanche forecasting involves the consideration and evaluation of alternative potential explanations that account for data [29] and the communication of these predictions to audiences widely varying in expertise. In this way, sensemaking is deeply about managing ambiguity, the state of multiple alternative meanings, rather than simply accounting for missing information. Existing forecasting tools and procedures do not capture all the cognitive work forecasters do [51], motivating the design of better tools to support them. Because ambiguity is an essential component of their work environment, avalanche forecasters are an ideal study group for visual analytics interventions that explicitly target these challenges.
28
+
29
+ Visual analytics, "the science of analytical reasoning facilitated by interactive visual interfaces" [16], is well-suited to address the ambiguous sensemaking needs of avalanche forecasters. While visualization research has largely been devoted to quantified uncertainty or data uncertainty $\left\lbrack {8,{11},{17},{20},{33},{34},{47},{50},{63},{70},{73}}\right\rbrack$ researchers are now considering broader issues of uncertainty related to reasoning [85], such as the interpretation of implicit errors [55, 65], the importance of "hunches" in data interpretation [45] or the role of alternatives in visual analysis [46]. We add to this growing body of work in our exploration of the challenges of sensemaking under ambiguity in risk analyses and the consequent implications for visualization designs.
30
+
31
+ In this paper we report work focused on two complementary threads of accommodating ambiguity in risk analysis and prediction. First, we seek a formative understanding of ambiguity in complex and critical sensemaking. Through a set of studies with Avalanche Canada, a public avalanche forecasting organization, we discovered the critical role ambiguity plays in sensemaking and its constant challenges for individual and collaborative analysis and communication. From these findings we characterized different sources of ambiguity and interpretative strategies, grouped into issues related to data, analytic process, and collaboration and communication.
32
+
33
+ Second, we describe how these findings informed initial visual analytics designs that explore better support for the challenges of ambiguous interpretation involving heterogeneous data-generating processes. We developed these tools in close and constant collaboration (participatory design) with forecasters. We deployed them as design probes; redesigns were subsequently incorporated into daily practice, where we continue to observe their use. This ecological approach continues to surface challenges and affordances of supporting ambiguity in reasoning about risk in the collaborative and critical environment of forecasting. Our design findings highlight both the effective potential of visualizations and the caveats. Key issues are the importance of multiple levels of data granularity, appropriate context, the need for analytic provenance, and enrichment [3]: the ability to capture both data and insights throughout the process.
34
+
35
+ The key takeaway of our research is that ambiguity is distinct from data uncertainty, requiring solutions that go beyond reduction or removal. It is an essential component of sensemaking, but at the same time presents specific challenges for analysis, collaboration, and communication. We argue that ambiguity can and should be designed for and not away [60], that even simple design choices can serve to support or impede sensemaking involving ambiguity, and that there is a need for more explicit ambiguity support in visual analytics tools. In this paper, we contribute:
36
+
37
+ * Insights from 3 qualitative studies with avalanche forecasters surfacing issues of ambiguity in sensemaking;
38
+
39
+ * A characterization of sources and strategies for ambiguity in risk analysis and sensemaking; and
40
+
41
+ * A preliminary exploration of visual analytics design approaches to address ambiguity.
42
+
43
+ § 2 BACKGROUND
44
+
45
+ § 2.1 PUBLIC AVALANCHE FORECASTING
46
+
47
+ Public avalanche forecasters assess avalanche hazards and communicate the associated risks to the public through daily bulletins. Avalanches endanger the safety of people and infrastructure and require careful professional assessment to inform risk management in mountainous avalanche-prone areas. Forecasters try to predict how present or future instabilities within the snowpack may react to natural triggers, such as the weight of new snow, or human triggers, such as the weight of a skier [54].
48
+
49
+ *e-mail: Snowak@sfu.ca
50
+
51
+ † e-mail: Lyn@sfu.ca
52
+
53
+ Avalanche forecasting is continuous and distributed across teams [51] of forecasters who monitor avalanche conditions over an entire winter season, iteratively updating their understanding with new information [54]. While many forecasters have the benefit of working in the field and directly observing avalanche conditions, public avalanche forecasters work remotely and rely heavily on field reports produced by other organizations [60]. In Canada, such reports are shared in the Canadian Avalanche Association's Industry Information Exchange (InfoEx) [25] by avalanche safety 'operators', such as those overseeing railway or transportation corridors, ski resorts, and helicopter skiing operations among others. While these data are structured and defined using formal measurement and reporting guidelines [2], they are gathered using a targeted sampling rather than a random sampling approach [54]. Operators actively seek instabilities in the snow. Consequently, forecasters have to glean enough context about this process to understand what such data mean (e.g. who reported it, where they went, what they saw, etc.).
54
+
55
+ Another challenge stems from the sparsity of data. For example, remote weather stations used to validate meteorological forecasts [60] are very sparsely distributed when compared to the variability and heterogeneity of mountain weather [48]. Forecasters mentally simulate the interactions of mountainous terrain and weather systems and their effects on the snowpack from limited data. This imaginative and speculative ability is a mark of competence and expertise in avalanche forecasting [1] as well as weather forecasting [67].
56
+
57
+ Forecasters formalize their judgments of avalanche hazards using a variety of qualitative measures such as a danger scale, likelihood scale, potential destructive size, as well as different avalanche types [76]. These assessments are then communicated to the public through daily bulletins that are supplemented with additional risk communications such as advice about how to avoid avalanche hazards. The public varies in levels of expertise and consequently varies in how they interpret even simple elements of bulletins such as danger scales [21, 75]. Public avalanche forecasters rely heavily on their knowledge, experience, and expert judgment to assess and communicate avalanche hazards. The challenges of complexity, varied interpretation, and uncertainty are similar to those involved in risk prediction and communication of other extreme weather events and natural disasters [7].
58
+
59
+ § 2.2 SENSEMAKING AND RISK PREDICTION
60
+
61
+ Risk management work faces real-world time constraints, ill-defined goals, distributed tasks and responsibilities, uncertainty, and decision-making demands. The engineering of technological solutions to deal with these issues requires close consideration of the cognitive processes involved [30, 74]. Frequently in these domains, for example in weather forecasting [29], several targeted sensemaking strategies are employed. Generally, these involve the setting of expectations to direct attention to cues that can signal threats and a concurrent sensitivity to cues that deviate from these expectations [82].
62
+
63
+ § 2.2.1 ANTICIPATORY THINKING
64
+
65
+ One example relevant to the forecasting of avalanches is anticipatory thinking: a functional form of mental preparation for potential risks, including those that may be highly unlikely but could result in severe consequences [43]. Attention is actively managed and directed to subtle and context-sensitive cues that may signal threats. There are several types of anticipatory thinking. One, problem detection, describes the process by which observers first become aware of an issue that may require a course of action [40, 41]. The ability to detect problems depends on the richness of the observer's repertoire of relevant patterns against which data can be compared. This "pattern matching" often involves monitoring multiple patterns or "frames" concurrently. Anticipatory thinking also involves "trajectory tracking": the extrapolation of trends into multiple alternative future scenarios as well as planning for them. Imagining, exploring, and planning for alternative scenarios is also known as mental simulation [38]. These processes are vulnerable to psychological factors or biases such as a tendency to explain away disconfirming evidence. However, studies with expert weather forecasters show such biases are countered through the active adoption of a skeptical stance in analysis [40].
66
+
67
+ § 2.3 SENSEMAKING AND AMBIGUITY
68
+
69
+ Sensemaking - the process by which meaning is constructed based on available information and experience - is precipitated by information or events that violate expectations or are uncertain and ambiguous [52, 81]. It is characterized by complexity. Complexity involves dynamically evolving rules and interacting parts [28] where comprehensive understanding is intractable [37] due to the epistemological limitations of human observation [24]. These limitations mean that complexity is more effectively dealt with holistically rather than through mechanistic reduction to the sum of parts. Sensemaking addresses complexity and the concomitant uncertainties through the flexible construction of narratives [18] where informational cues help determine what is relevant and which narratives or explanations are coherent or acceptable to consider [12].
70
+
71
+ This "narrative mode" of thinking describes how signs, symbols, representations, and their relationships are tied together into coherent personal narratives authored by the observer [4]. A novel is merely ink until it is read by someone and the same applies to the analysis of data. Subplots and micro-narratives involving prior knowledge and personal experiences are involved in the reading and making sense of visualizations [61]. Just as a story involves competing narratives, so too, in general, does sensemaking. This is because sensemaking often starts with an existing explanation that is challenged by a viable alternative [39]. Sensemaking is thus more about resolving multiple potential meanings (ambiguity), rather than just accounting for missing or uncertain information.
72
+
73
+ § 2.4 AMBIGUITY IN VISUALIZATION RESEARCH
74
+
75
+ Visualization research has a longstanding tradition of characterizing uncertainties relevant to the design of visual analytics systems [9, 49, 50, 79, 85]. Most visualization research has focused on data uncertainties, but many acknowledge the importance and role of interpretation and knowledge in uncertainty [20, 35, 49, 64, 85]. MacEachren discusses ambiguity through the lens of organizational decision-making, describing it as a "lack of an appropriate 'frame of reference' through which to interpret the information" and describes equivocality as stemming from the diversity of possible interpretations [49]. Meanwhile, Boukhelifa et al. define ambiguity in terms of multiplicities in the relationship between entities and names in data as well as the differences in interpretation between collaborators [9]. Liu et al. present a framework for the exploration, interpretation, and management of alternatives in visual analytics [46]. They group alternatives into three types: cognitive (e.g. hypotheses, mental models, and interpretations), artifact (e.g. data, models, representations, or tools), and execution (e.g. methods, code, and parameters). Ambiguity is most closely related to their concept of cognitive alternatives. Researchers have discussed the challenges of ambiguity in natural language interfaces for visual analytic tools and developed dedicated mixed-initiative tools for user intent disambiguation [22, 31]. Most prominent in existing visualization research is the discussion of ambiguity in collaborative visual analytics, where sharing of analysis is often incomplete, lacking context, and therefore ambiguous [27].
76
+
77
+ There is much more to analysis than what is explicit in data. Data are incomplete records of the phenomena they are intended to represent and require prior knowledge as well as speculation. This is closely related to the notion of "implicit errors", which are errors inherent to a dataset but not explicitly represented within it [55, 65]. To better support sensemaking around implicit errors associated with infectious disease statistics, McCurdy et al. used structured annotations to help expert clinicians externalize knowledge about these errors [55]. In an application for archeological analyses, Panagiotidou et al. developed visualization tools that explicitly represented implicit errors [65]. Lin et al. use the term data hunch to describe "a person's knowledge about how representative data is of a phenomenon of interest" and how issues like credibility, inclusion and exclusion criteria, or directionality and magnitude of biases are considered in the analysis of data [45]. The authors outline a design space for externalizing data hunches.
78
+
79
80
+
81
+ Figure 1: A timeline displaying the sequence in which studies were executed. Study 1 developed a formative understanding of avalanche forecasting challenges and workflows represented in a thematic code structure. This code structure was applied to observational data in Study 2 to refine understanding. Findings from these studies were used to inform the design of visualization prototypes used in Study 3.
82
+
83
+ § 3 APPROACH
84
+
85
+ We carried out 3 studies with forecasters at Avalanche Canada (Figure 1), a public avalanche forecasting organization. Our goal was to better understand the challenges of ambiguity in their sensemaking and to identify where visual analytics might help. We began with semi-structured interviews to understand how forecasters perceive and describe the challenges of their work (Study 1). We then conducted field observations of forecasters on site. Concurrently, we video-recorded forecasters' workstations and debriefed them about analytical reasoning involving the use of existing technologies (Study 2). This set of observations corroborated and enriched our understanding of the themes we identified in the interview study (Table 1). Subsequently, we implemented two fully functional visualization prototypes in collaboration with the avalanche forecasters and conducted retrospective interviews using these prototypes as design probes (Study 3). The purpose of this last study was to better understand how visual analytics interventions can address the challenges of ambiguity.
86
+
87
+ Studies 1 and 2 were conducted on-premises at Avalanche Canada while Study 3 was conducted remotely. In total, 12 avalanche forecasters participated in our studies (P1-P5 participated in Study 1, P2-P8 in Study 2, and P2-P6 / P9-P12 in Study 3). 10 were male and 2 were female, reflecting the gender balance of the organization and industry. The forecasters came from varied and mixed backgrounds. 8 had a background in professional mountain guiding, 3 in engineering, 2 in natural sciences, and 2 in business and communications.
88
+
89
+ We frame our findings according to issues of ambiguity dealing with data, analytic process, or collaboration and communication. Data are incomplete records of the phenomena they represent and require nuanced and varying interpretations depending on the needs and goals of analysis. Considering and evaluating alternative interpretations is an essential part of sensemaking: the analytic process of judging and adopting alternative interpretations presents potential analytic paths through data. These paths can be difficult to navigate as much of analysis is not explicitly captured. Finally, forecasters each hold unique perspectives and thus alternative interpretations that need to be resolved. They rely on communication strategies that simplify complexity to retain clarity. This can obfuscate context and introduce ambiguities that their collaborators have to reason through. This structure arose from findings from our studies; we apply it in our discussion of the design implications for potential visual analytics solutions.
90
+
91
+ § 4 STUDY 1: FORECASTER WORK CHALLENGES
+
+ § 4.1 PROCEDURE
92
+
93
+ We conducted semi-structured interviews with 5 professional avalanche forecasters on Avalanche Canada premises in Revelstoke, British Columbia. We asked about common work practices and challenges in avalanche forecasting, the role of data and evidence, the role of prior and tacit knowledge, issues of collaboration, and issues of uncertainty. Participants were asked questions like: "Can you walk me through a typical forecasting day?", "What are the biggest challenges in your work?", or "What are some common uncertainties you deal with?". The interviews were audio-recorded and then transcribed.
94
+
95
+ § 4.2 ANALYSIS
96
+
97
+ Data were analyzed using thematic analysis [10]. Transcripts were concurrently segmented [23] and coded according to emergent themes by one coder. The codes were then refined in two passes. These themes were then grouped into thematic categories (Table 1). Inter-rater reliability was assessed with a second coder, who had a background in avalanche research but limited experience in qualitative research methods, using a transcript sample representing 10 percent of all data [23]. Simple agreement for high-level themes was .89, Cohen's Kappa was .81, and Krippendorff's Alpha was .82. For the sub-themes, simple agreement was .75, Cohen's Kappa was .70, and Krippendorff's Alpha was .71.
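+
+ As an illustrative sketch of these three reliability measures (not the study's analysis code; it assumes scikit-learn and the `krippendorff` package are installed, and the labels are hypothetical), they could be computed as follows:
+
```python
# Illustrative sketch only: recomputing the reported reliability measures
# on hypothetical labels. Assumes scikit-learn and the `krippendorff`
# package (neither is claimed to be what the authors used).
import numpy as np
from sklearn.metrics import cohen_kappa_score
import krippendorff

# Hypothetical high-level theme codes from two coders for the same
# transcript segments (the 10 percent reliability sample).
coder_a = ["data", "analytic", "collab", "data", "analytic", "data"]
coder_b = ["data", "analytic", "collab", "data", "collab", "data"]

# Simple (percentage) agreement.
agreement = np.mean([a == b for a, b in zip(coder_a, coder_b)])

# Cohen's Kappa corrects pairwise agreement for chance.
kappa = cohen_kappa_score(coder_a, coder_b)

# Krippendorff's Alpha over the same nominal codes; the package expects
# numeric values, so map the labels to integers first.
to_int = {c: i for i, c in enumerate(sorted(set(coder_a + coder_b)))}
ratings = np.array([[to_int[c] for c in coder_a],
                    [to_int[c] for c in coder_b]])
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="nominal")

print(f"agreement={agreement:.2f}, kappa={kappa:.2f}, alpha={alpha:.2f}")
```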
98
+
99
+ § 4.3 FINDINGS
100
+
101
+ § 4.3.1 DATA CHALLENGES AND PRACTICES
102
+
103
+ The data used in avalanche forecasting are uncertain, have ambiguous expressions or meanings, and have biases. These characteristics lead to ambiguity and a need to consider alternative interpretations beyond what is explicit in data.
104
+
105
+ Forecasters told us one of their key challenges is the uncertainty involved in data sparsity or missingness. Data are often explicitly missing, as is the case when remote sensors malfunction or fail to transmit. "[Weather stations] that have good weather or wind information are even less, and then that's if they're even reporting [...]" (P4). Missingness might also be implicit, having to be inferred from the given situational context. "In a large storm that closes highways and grounds helicopters, it's very common the next day to not get any avalanche observations... but the weather and your personal experience very much suggests that there was going to be an avalanche cycle..." (P1).
106
+
107
+ Forecasters rely on contextual information to understand how to appropriately interpret data following circumstantial definitions. Some of these contingencies are officially documented or ingrained within formal procedures, while others are only learned through extensive experience and knowledge. "The [...] courses do quite a good job of standardizing those kinds of threshold amounts [...but] people who have spent a lot of time on the coast [...] may think a 30 centimeter storm doesn't really do very much..." (P1).
108
+
109
+ Common to many classifications of the complex natural world, avalanche classifications overlap and are not mutually exclusive. Technically accurate hazard assessments might include several overlapping avalanche types resulting in overly complex public communications. Instead, forecasters try to choose a subset of avalanche types based on what may inform optimal risk mitigation strategies by the public. "When you're modeling the natural world, you take shortcuts and there's simplifications[...] they don't occupy fully independent places [...] we sometimes have to have discussions about whether we want to be technically accurate, or whether we want to retain clarity [...] that starts to get quite complicated. [...] we look for ways to simplify..." (P1).
110
+
111
+ The nuances of evidential reasoning and interpretation of data in avalanche forecasting also extend to the risk-based conservative bias common to forecasters. Some may be more or less conservative, and forecasters have to factor in such considerations when weighing evidence. "[A]nother forecaster would have said something like: '[...]they always call that a little more than what it actually is.'[...that] may influence me to say: Okay, well, maybe I should not necessarily discredit it, but I put less weight into it..." (P3).
112
+
113
+ § 4.3.2 ANALYTIC PROCESSES AND REASONING
114
+
115
+ Forecasters employ a variety of sensemaking strategies involving speculation and imagination. They integrate their prior knowledge, experiences, and contextual clues in data to synthesize understanding and explore risk implications.
116
+
117
+ Forecasters synthesize, evaluate, and integrate information using a simulation technique they described as mental projection. It is a process of imagining oneself in the field to understand conditions and their risk implications. "...that's a technique that a lot of people use to help forecast... kind of projecting yourself mentally, whether you close your eyes or you just have some kind of image of the kind of slopes, the kind of areas where the people are moving around [...] I think that experiential part there is really relevant to the process..." (P1). This might involve mentally adjusting for biases, such as wind data from weather stations in unusually windy locations. "[T]here can actually not be that much wind in the park and you can have 60 kilometers an hour winds at that station. [...]taking an input and then adjusting it for myself..." (P2). It might also involve simulating alternative future scenarios and their risk implications. "If things are a little bit unusual, I [...] try and strip it down and build some kind of synthetic profile either in my mind, or sometimes even do it on the whiteboard [...] And then figure out the most likely, it's usually a set of scenarios..." (P1).
118
+
119
+ Forecasters describe their work as Bayesian-like because they are constantly updating their mental models with new information and deliberately omitting weak or redundant evidence. They reported having to immerse themselves in data over several days of their shift to build confidence in their sense of understanding. This often involves undirected explorations of general background information. "...a day, you know, more likely two days to become fully sort of understanding of what's going on in your region [...] even if you can read it all in a day, it takes a little time for it to sort of percolate and for you to understand what that means..." (P1). To address identified gaps in understanding, forecasters actively seek contextual sources of information. "I'll [...] look for keywords like 'oh ya... skiing, like, steep terrain in the Alpine, up to 40 degrees and just exposed features. No problem.' That tells me that not much is going on. Yeah, people are confident..." (P2). As they conduct their assessments, they iteratively update knowledge artifacts like the public bulletin to match their current understanding. "I'm pretty iteratively making small changes in the forecast [...] I'll just move that right into the forecasts, put it there, save, and I go back to what I was doing..." (P2).
120
+
121
+ Unlike forecasters, operators directly observe avalanche conditions in the field and thus have a richer understanding of the complexities involved. As a result, forecasters use subtle cues in data that can reveal the subjective hunches of operators to help them appropriately frame their understanding of avalanche conditions. "'Okay, are these guys still concerned about this?' That's what really matters to me more so than like the really nuanced low-level data..." (P2).
122
+
123
+ § 4.3.3 COLLABORATIVE CHALLENGES AND PRACTICES
124
+
125
+ Collaboration helps individual forecasters overcome the limitations of their own knowledge by drawing on the collective knowledge and experiences of their peers. At the same time, communicating the complexity of their assessments in simple terms is a constant challenge that creates ambiguities.
126
+
127
+ Forecasters vary in knowledge and experience, which likely contributes to some variations in interpretation. However, this diversity is seen as an advantage as, collectively, it addresses the gaps in understanding any single forecaster may have. "[M]y experience may be different from you know... another forecaster's experience and I can learn from that person [...] there's those kinds of exchanges that happen..." (P1). Forecasters share knowledge and solicit their peers' perspectives in daily discussions. "At two o'clock, we have our pow-wow where we all kind of go through our hazards and our problems. [...] it's kind of like a peer review session..." (P3).
128
+
129
+ Professional exchanges with partnering operations help avalanche forecasters enrich their understanding of how data are produced in a variety of operational contexts. "[W]hether that's highways or ski hill, snowcat skiing, heli-skiing [...] there's variability between the individual operators... And the only way to really fully understand is to go and spend a bit of time with that operator. [...] We have professional exchanges go on..." (P1). Forecasters also phone operators and reach out directly for clarification or if they are uncertain about how they should be thinking about conditions. "[If I] am potentially missing something or I just don't feel comfortable [...] I'll start picking the phone up and trying to find people in the area that can provide more, more insight..." (P3).
130
+
131
+ Collaboration allows forecasters to account for each other's knowledge gaps; at the same time, it presents challenges such as the communication of analysis. Forecasting relies on the continuity of analysis. Shift-changes can disrupt this continuity and forecasters struggle with communicating relevant details as part of the hand-off process. "[T]here's a lot of variability in different people and [...] what sort of information they leave [...] that's the first place I'll look [...] hoping that the [...] previous forecaster has left enough information to start that picture..." (P3). To facilitate the hand-off process, forecasters produce knowledge artifacts like dedicated hand-off notes or detailed descriptions of snowpack stratigraphy. "[Talking about hand-off notes] I am trying to take that ease and control that I have at day four or five [...] and I give that to the next person, so they don't feel like they have to do their process of discovery from ground zero essentially..." (P2). This is seen as a separate and additional task often completed at the end of the day when forecasters are fatigued. This is why documentation used in support of hand-off and collaboration is often incomplete.
132
+
133
+ | S1 Theme | S1 Sub-Theme | Definition | S2 Observed Evidence (O = Observation, C = CRD) |
+ |---|---|---|---|
+ | Missing Info | Explicit | Missing information is explicitly represented in data. | X |
+ | Missing Info | Implicit | Missing information must be inferred from the situational context. | O |
+ | Data Representativeness | Classification Overlap | Classifications are often not independent or mutually exclusive. | O |
+ | Data Representativeness | Conservative Bias | Avalanche professionals are conservative when faced with uncertainty in the field or in data. | O |
+ | Data Representativeness | Circumstantial Definitions | Official definitions and unofficial practices for reporting data depend on the situational context. | O |
+ | Analytic Practices | Subjective Hunches | Considering the behaviour, concerns, and hunches of others in the field to inform and guide analysis and interpretation. | C |
+ | Analytic Practices | Immersion | Forecasters spend several days forming a mental model through undirected review of contextual information. | C |
+ | Analytic Practices | Context-Seeking | Directed information search for supplementary contextual information. | C |
+ | Analytic Practices | Mental Projection | Forecasters assimilate information by imagining and mentally visualizing the interactions of avalanche conditions, weather, terrain, and people. | X |
+ | Analytic Practices | Updating | Forecasters iterate over knowledge artifacts like their forecast as they conduct their analysis and update their own mental models. | C |
+ | Analytic Practices | Deliberate Omission | Forecasters manage information overload by ignoring certain data. | C |
+ | Analytic Challenges | Lack of Good Representations | Forecasters lament a lack of good visual representations to alleviate some cognitive effort. | C |
+ | Analytic Challenges | Ratings | It is challenging for forecasters to lower danger ratings as data reveal instability rather than stability. | X |
+ | Collaboration and Communication | Continuity | Forecasting relies on the continuity of analysis and monitoring. Shift-changes disrupt this continuity. | O |
+ | Collaboration and Communication | Translating Analysis | Forecasters struggle with communicating complex conditions with simple clarity to the public. | O |
+ | Collaboration and Communication | Collaborative Data Production | Forecasters facilitate collaborative work by producing hand-off notes and other internal knowledge artifacts. | O |
+ | Sensemaking Strategies | Regular Discussions | Forecasters draw on each other's diverse knowledge through daily discussions. | O |
+ | Sensemaking Strategies | Reaching out Directly | Forecasters call or email field operators for further information when faced with critical information gaps. | O |
+ | Sensemaking Strategies | Professional Exchange | Forecasters work with other agencies and operators to gain a deeper understanding of the nuances of how data are produced and what they mean. | X |
+
+ Table 1: Thematic codes developed in Study 1 (semi-structured interviews) and applied to Study 2 (field observations and cued-recall debrief). Thematic codes are organized and color-coded according to their relevance to different sources of ambiguity.
198
+
199
+ Whether communicating to fellow forecasters or the public, capturing complexity and nuance in simple and understandable terms is a challenge. "To simplify it [...] that's when you are kind of having to use your own best judgment..." (P2). Forecasters must translate their understanding and tailor it to an audience that varies in understanding and expertise. This often involves exploring alternative future scenarios, their implications, and how an audience may interpret what the forecaster is saying, and subsequently choosing a simple communication strategy that comprehensively accounts for these alternatives. "So instead of trying to write my forecasts like: 'oh, if we get 10 centimeters it will probably be okay, but if we get 20, then it'll probably come unglued' [...] It's like 'just watch for conditions to change as you increase with elevation [...] if it starts to feel stiff or slabby underneath your feet [...] use that terrain feature to go around it..." (P2).
200
+
201
+ § 5 STUDY 2: OBSERVING AVALANCHE ANALYTICS
202
+
203
+ The purpose of Study 2 was to observe forecaster workplace behaviours and their use of technology. We sought a richer understanding of the challenges faced by forecasters and how visual analytics interventions might help.
204
+
205
+ § 5.1 PROCEDURE
206
+
207
+ We conducted field observations on Avalanche Canada premises for a week, collecting field notes and audio recordings of daily discussions. At the same time, we gathered observations using cued-recall debrief (CRD), a situated recall method developed for use in complex decision-making contexts [62] and adapted for human-computer interaction [5]. 7 forecasters were observed in the field and 4 were debriefed using CRD. Camcorders positioned behind workstations, in view of monitors and the desk surface, captured recordings of forecasters' workdays and their use of technology, as well as artifacts such as hand-written notes. At regular intervals, video recordings were reviewed to identify timestamps where forecasters exhibited behaviours relevant to our research interests. At the end of the workday, recordings were played back to forecasters at marked timestamps, and forecasters were asked to explain their thought processes and actions. We asked questions like: "Can you explain what you were doing and thinking here?" These debrief interviews were video recorded and transcribed.
208
+
209
+ § 5.2 ANALYSIS
210
+
211
+ We applied the thematic coding scheme developed in Study 1 to notes and transcripts in Study 2 (Table 1). This allowed us to compare what forecasters say with what they actually do. Thematic coding was applied by one coder in two passes.
212
+
213
+ § 5.3 FINDINGS
214
+
215
+ § 5.3.1 ANALYTIC TOOLING
216
+
217
+ Forecasters rely heavily on text tables and information from disparate web-based sources. They gather these resources in a map-based web portal that organizes hyperlinks to such resources spatially (Figure 2A). Data such as weather station telemetry representing meteorological conditions are investigated in a bottom-up manner. Telemetry from individual weather stations is viewed in a table format and iteratively synthesized into a holistic understanding of weather patterns. Similarly, professional field reports are generally viewed in text tables (Figure 3A). Forecasters scan down columns of tables to extract patterns and distributions from structured attributes such as avalanche sizes. At the same time, they read across rows of tables to extract details about individual reports to glean enough context to understand their significance. We observed forecasters repurposing web-browser features to accomplish simple analytic tasks. For instance, one forecaster opened several days of data in successive windows to investigate temporal patterns and make comparisons. This suggested forecasters could benefit from dedicated analytic tools to support such tasks. To our surprise, we found that the visualizations present in existing systems were seldom used. While it was clear the forecasters could benefit from dedicated analytic tools, the overwhelming use of text tables indicated this representational form held some comparative advantage in sensemaking.
218
+
219
+ § 5.3.2 TALKING ABOUT DATA
220
+
221
+ Organizational knowledge relevant to the nuanced interpretation of data is in large part oral tradition exchanged through the shared practice and environment of work.
222
+
223
+ We observed several discussions that dealt with the topic of how to interpret particular reports. For instance, one discussion dealt with the interpretation of a report authored by an operator who was known to have a conservative bias and what the implications of this were for hazard assessments. In another discussion, a junior forecaster with a guiding background described how they are coming to understand the challenges of their new remote-work environment, noting the nature of what types of information may be missing. "After having worked this job [Avalanche Canada] ... I sort of realize the big holes the operators leave in their writeups [...] because they are having face to face conversations... and maybe not putting that information into their writeup... saying this layer [of snow] does not exist in our area may not be helpful to them, but it really helps us here in this office..." (P8). How classifications and circumstantial definitions are applied in hazard assessment and risk communication was also a frequent topic of conversation. "I like [X's] point yesterday, wind slabs in the alpine are kind of like cornices that you find always... it is just a winter mountain hazard... it goes on the bulletin when it is elevated to more than normal caution..." (P2).
224
+
225
226
+
227
+ Figure 2: (A) Existing spatially oriented web portal linking to external weather station telemetry resources. Data from individual weather stations are commonly viewed in a table format and synthesized in a bottom-up manner. (B) WxObs visualization prototype showing numerical aggregates of weather station telemetry. Weather stations are viewed simultaneously using a conventional overview-first and top-down approach.
228
+
229
+ § 5.3.3 TACIT SENSEMAKING AND ANALYTIC PROCESSES
230
+
231
+ Early sensemaking processes, particularly those involving personal experiences or trust, may be difficult to articulate out of context and, consequently, to share with others.
232
+
233
+ When debriefing forecasters about their workday we found they relied on the subjective hunches of operators that they personally trusted and were more familiar with. This factored into how evidence was weighed and the confidence forecasters had in it. "I feel good about who was about in the operation. So, I felt that the test was valid and valid information that I should be thinking about..." (P3).
234
+
235
+ We also found forecasters exploring general contextual information to immerse themselves. They found it difficult to articulate how they were using the information, reflecting characteristics of early sensemaking processes [71]. "It was just to give me an orientation to get my mental picture for forecasting [...] just a little bit of context... I don't know what that does for me exactly..." (P4).
236
+
237
+ § 5.3.4 COLLABORATION AND KNOWLEDGE ARTIFACTS
238
+
239
+ The bulletin serves as a knowledge artifact representing a forecaster's current understanding of avalanche conditions. The bulletin scaffolds analysis and guides information search, particularly during hand-off at shift changes. However, the reasons behind specific changes to the bulletin are not always explicitly captured, leaving future collaborating forecasters to speculate about the reasoning that might have been involved.
240
+
241
+ Forecasters don't just iterate over their own bulletin over the course of the day; they often carry forward the previous day's bulletin even if another forecaster wrote it. We observed how forecasters update it as they formulate their own new understanding. "I import yesterday's forecast... and I tweak my forecast so it matches my now-cast..." (P6). The specific reasons behind these updates are not made explicit, leaving the forecasters coming on shift to seek contextual information to speculatively reconstruct their coworker's evidential reasoning process. "...so I reviewed a few avalanches to understand what was driving those avalanches and why [anonymized] added that persistent slab problem again..." (P6).
242
+
243
+ § 6 CO-DESIGNING VISUAL ANALYTIC SUPPORT
244
+
245
+ These findings guided us in developing visualization prototypes to support core forecasting tasks. We deployed these visualizations as design probes to examine how visual analytics interventions may aid in addressing challenges of ambiguity. The first prototype (WxObs) aggregates weather observations from remote weather stations to help forecasters validate the previous day's weather forecast as well as to monitor evolving weather systems in real-time. The second prototype (AvObs) uses field-reported avalanche observations produced by avalanche safety operations sharing data in the InfoEx. Avalanche observations are treated as key indicators of avalanche hazards in avalanche forecasting. We designed and developed both prototypes through several iterations from paper sketches to computational implementation in collaboration with avalanche forecasters. Both tools were evaluated using a think-aloud protocol throughout the design process to explore how the tools support reasoning.
246
+
247
+ § 6.1 WXOBS: CLASSIC DESIGN
248
+
249
+ Forecasters traditionally access weather station data through a spatially-linked web portal that redirects to external resources, where data from individual weather stations are generally presented in text tables (Figure 2A). Forecasters use this information to synthesize patterns and distributions of various meteorological data such as precipitation totals, wind speeds, and temperatures. However, we found that their existing approach was challenged by the visual fragmentation and tediousness of accessing these disparate resources. We addressed this problem with a classic visual analytics approach: a linked, interactive multi-view design that streamlines analysis (Figure 2B).
250
+
251
+ We designed a conventional visual analytic display following Shneiderman's "Overview first, zoom and filter, then details on demand" visualization mantra [72]. Numerical aggregations of weather station telemetry across time and space were displayed in a variety of visualizations to provide forecasters with an "overview" of the data. Multiple "levels of detail" and "scales of resolution" of the data were captured across the display. All visualizations were linked together interactively, supporting "brushing", "zooming", and "filtering" interactions across all corresponding displays. Individual marks visible in the spatial view allow tooltip interactions for "details-on-demand".
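+
+ As a minimal sketch of this linked-view pattern (hypothetical telemetry values; assumes Altair 5 and pandas, and is not the WxObs implementation), an interval brush on a spatial overview can drive a filtered detail view:
+
```python
# Minimal sketch of the linked multi-view pattern described above
# (overview, brushing/filtering, details-on-demand). Hypothetical
# telemetry; assumes Altair 5 + pandas. Not the WxObs implementation.
import altair as alt
import pandas as pd

stations = pd.DataFrame({
    "station": ["A", "B", "C", "D"],
    "lon": [-118.2, -117.9, -118.5, -118.1],
    "lat": [51.0, 51.2, 50.8, 51.1],
    "new_snow_cm": [12, 30, 5, 18],
    "wind_kmh": [15, 60, 10, 25],
})

# One interval selection acts as the brush shared across views.
brush = alt.selection_interval()

# Spatial overview: one mark per station; tooltips provide
# details-on-demand for individual telemetry values.
overview = alt.Chart(stations).mark_circle(size=150).encode(
    x="lon", y="lat",
    color=alt.condition(brush, alt.value("steelblue"), alt.value("lightgray")),
    tooltip=["station", "new_snow_cm", "wind_kmh"],
).add_params(brush)

# Linked detail view, filtered to the brushed stations.
detail = alt.Chart(stations).mark_bar().encode(
    x="station", y="new_snow_cm",
).transform_filter(brush)

(overview | detail).save("wxobs_sketch.html")  # linked side-by-side views
```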
252
+
253
254
+
255
+ Figure 3: (A) Existing InfoEx interface displaying avalanche observation reports in a table format. Individual reports are read and analyzed in a bottom-up manner. (B) AvObs visualization prototype displaying avalanche observation reports using glyphs placed in a variety of visualization contexts. Individual reports are visible allowing critical contextual details to be discerned to inform understanding when there is a multiplicity of interpretations.
256
+
257
+ § 6.2 AVOBS: BREAKING WITH CLASSICS
258
+
259
+ Our second prototype, the AvObs tool (Figure 3B), uses daily field-reported avalanche observations shared by avalanche safety operators on the InfoEx platform. These reports are generally viewed in a tabular format. When we started designing this tool with the avalanche forecasters, we used classic visualization principles based on effectiveness and expressiveness [59] and common conventions such as numerical aggregations. We found that even simple numerical aggregations like counts were problematic and inappropriate.
260
+
261
+ § 6.2.1 DISAGGREGATED DATA
262
+
263
+ We discovered several issues necessitating disaggregated views of data. First, the data have ambiguous expressions where the same data value may correspond to multiple meanings depending on context and the communicative intent of the author. Second, data are gathered using a targeted sampling approach rather than a random sampling approach. The data-generating process is not uniform across the dataset, which challenges the methodological utility of aggregate measures.
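+
+ A toy example (entirely invented numbers) of why such aggregates can mislead under targeted sampling: the same daily count can reflect very different levels of field coverage.
+
```python
# Toy illustration with invented numbers: the same daily total can
# reflect very different field coverage when sampling is targeted.
import pandas as pd

reports = pd.DataFrame({
    "day":      ["Mon", "Mon", "Mon", "Tue"],
    "operator": ["heli", "highway", "resort", "highway"],
    "avalanches_observed": [3, 2, 1, 6],
})

# Aggregate view: Monday and Tuesday look identical (6 each)...
print(reports.groupby("day")["avalanches_observed"].sum())

# ...but disaggregation shows Tuesday's count comes from a single
# operator, while Monday reflects three operators actively searching.
print(reports.groupby(["day", "operator"])["avalanches_observed"].sum())
```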
264
+
265
+ § 6.2.2 GLYPHS FOR AMBIGUOUS DATA
266
+
267
+ Forecasters wanted to see individual reports while at the same time being able to discern general patterns in the data. To address this design constraint, we used glyphs with circle marks representing individual reports in a packed layout within a variety of visualization contexts. Circle marks were encoded using important structured data attributes within reports. The size of circles encoded typical avalanche size and the color encoded the number of observed avalanches. Two color maps were used to distinguish numerical and categorical values, reflecting the need to preserve raw forms of data. Brushing and linking as well as tooltip interactions reveal contextual details, allowing forecasters to discern how to interpret individual reports. This glyph-based approach operates at multiple scales of resolution, allowing forecasters to visually aggregate data to discern patterns. Glyphs are known to support several visual aggregation operations such as summarizing data, detecting outliers, detecting trends, or segmenting data into clusters [78].
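+
+ The following sketch illustrates this glyph encoding with hypothetical report data (matplotlib; not the AvObs implementation): circle size maps to typical avalanche size and a sequential color map to the number of observed avalanches.
+
```python
# Sketch of the glyph encoding with hypothetical reports (matplotlib;
# not the AvObs code): circle size ~ typical avalanche size, colour ~
# number of observed avalanches, laid out so individual reports remain visible.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(7)
n = 40
typical_size = rng.integers(1, 5, n)   # destructive size class 1-4
num_observed = rng.integers(0, 25, n)  # avalanches per report

# Jittered grid as a simple stand-in for a packed layout.
x = np.arange(n) % 8 + rng.normal(0, 0.15, n)
y = np.arange(n) // 8 + rng.normal(0, 0.15, n)

fig, ax = plt.subplots()
glyphs = ax.scatter(x, y,
                    s=typical_size * 60,             # size channel
                    c=num_observed, cmap="viridis")  # sequential colour map
fig.colorbar(glyphs, label="observed avalanches")
ax.set_title("One glyph per field report")
ax.set_axis_off()
plt.show()
```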
268
+
269
+ § 6.2.3 DESIRABLE DIFFICULTY
270
+
271
+ Early versions of the AvObs visualization prototype used bar charts that forecasters found difficult to interpret. They expressed concerns about visualizations giving them a false sense of precision and disarming the level of scrutiny forecasters usually apply to these data. We chose a visual design intended to break this sense of precision by requiring deliberate effort to decode the visualizations. We chose size and color as opposed to position, which is commonly thought to be decoded more accurately [15] and, depending on the task, is often more perceptually salient [78]. In addition, combining visual features such as size and color is more difficult than using either alone [26]. In this way, we are explicitly violating the principle of perceptual effectiveness to provoke more deliberate consideration of the data, grounded in the concept of "desirable difficulty".
272
+
273
+ The benefits of introducing cognitive difficulties have been discussed in the context of geovisualization and risk-based decisions [13] and are well-documented in studies of human learning [84]. In visualization research, desirable difficulty has been framed as a trade-off between the cognitive efficiency derived from pre-attentive processing and improved learning through more active processing of information [32]. By reducing the fluency with which patterns in visualizations are read, more active and attentive processing of these patterns can stimulate "self-explanations" [14] where inferences about missing information are generated to fill in gaps or prior knowledge is integrated with new information to account for potential discrepancies. We conjecture our relatively more imprecise visualization design introduces visual complexity that induces additional effort, attention, and careful consideration of how perceived patterns should be interpreted. This is particularly important when ambiguity is a relevant consideration. By relying on quicker or more efficient information processing, one may be led to treat a visual display at face value and forego the consideration of alternative interpretations that may apply.
274
+
275
+ Beyond factors related to low-level perceptual processing, we conjecture that our chosen design serves as an effective metaphor for the messy nature of such data. Researchers have discussed how precise, easy-to-read, and minimalist designs can impart a sense of authority or objectivity [36] that may not always be warranted. The rhetorical force of visualizations to convince viewers that a clean visualization is an objective and perfectly truthful representation of the world can be detrimental when considering the messiness and complexity of many real-world data. Our deliberately messy design may serve as a reminder, much as tables do, that such data require additional scrutiny and interrogation from multiple perspectives.
276
+
277
+ § 7 STUDY 3: EXPLORING VISUALIZATIONS
278
+
279
+ § 7.1 PROCEDURE
280
+
281
+ The visualization prototypes were evaluated using retrospective interviews. The avalanche observations prototype used synthetic and historical data from past seasons and was never used operationally. The weather stations prototype used real-time data and was used operationally in the second half of the winter forecasting season. 7 forecasters had input on the design and development of the prototypes while one simply commented on their experiences using them.
282
+
283
+ At the end of the forecasting season, we conducted semi-structured interviews asking forecasters to reflect on the prototypes, how they addressed the challenges of data, how they affected their work, and what needs remained unfulfilled. Interviews were conducted remotely using video conference tools. We used our prototypes as artifacts in the interview to prompt the forecasters' reflections. The interviews were video-recorded and transcribed. We summarize our key findings with quotes extracted from transcripts below.
284
+
285
+ § 7.2 FINDINGS
286
+
287
+ § 7.2.1 MANY POSSIBLE INTERPRETATIONS
288
+
289
+ The operational use of the WxObs prototype highlighted how analysis of weather station telemetry presents issues of data uncertainty that give rise to ambiguity. Stations are sparsely distributed relative to the large spatial areas they are used to represent [48] and are subject to a variety of sensor and transmission errors caused by environmental factors. Presently, there is no comprehensive automated quality assurance procedure that accounts for all possible errors in the data [57]. Diagnosing errors and determining how individual weather stations come to represent broader weather patterns are matters handled through the forecaster's judgment and interpretation. Forecasters normally use text tables to view each weather station's telemetry individually and progressively build up an understanding of weather patterns. This bottom-up approach stands in contrast to our top-down and overview-first visualization designs. Our visualization prototypes employed visualizations of aggregate measures, multiple granularities of data, interactions including brushing and filtering, and tooltips to view the details of individual weather station telemetry (Figure 2B). Our visualization prototype introduced a new and unfamiliar analytic approach that challenged forecasters. "I've always looked at the data in a pretty disaggregated way [...] What I'm having to learn is to kind of let go of that, needing to see the disaggregated view first so that I can aggregate the data in my brain so to speak..." (P12).
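+
+ For illustration only (hypothetical thresholds and data; not an operational QA procedure), a simple automated check of the kind that partially exists might flag out-of-range readings and frozen sensors:
+
```python
# Hypothetical sketch of simple automated telemetry checks (range and
# stuck-sensor tests); the paper notes no comprehensive procedure
# covers all error types [57], so this is illustrative only.
import pandas as pd

def flag_suspect(series: pd.Series, lo: float, hi: float,
                 stuck_run: int = 6) -> pd.Series:
    """Return a boolean mask of suspect readings."""
    out_of_range = (series < lo) | (series > hi)
    # A long run of identical values often indicates a frozen sensor.
    runs = (series != series.shift()).cumsum()
    run_length = series.groupby(runs).transform("size")
    return out_of_range | (run_length >= stuck_run)

temps = pd.Series([-3.0, -2.5, -2.5, -2.5, -2.5, -2.5, -2.5, 41.0])
print(flag_suspect(temps, lo=-50, hi=40))  # flags the stuck run and 41.0
```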
290
+
291
+ Similar challenges arose in the AvObs tool (Figure 3B). The human-reported avalanche observations follow reporting standards that, while structured, require a thorough understanding of context for interpretation. "...the InfoEx system and the standards... they kind of define the box that we all work in [...] how you use them... context drives that. You might use a certain approach... data that are obviously within that general framework or box that we've created, but you might not use them exactly the same way..." (P12). The same datum may be interpreted in a variety of ways, and displays need to reveal the details that let readers discern which interpretation applies.
292
+
293
+ § 7.2.2 THE NEED FOR RAW DATA
294
+
295
+ Both data sources and prototype tools highlight a need for fluid interaction with underlying raw data. In the WxObs tool, many who are used to seeing raw data in a tabular format raised issues of trust, as they could not apply the same visual scanning strategies to detect errors in data. "[I]t largely stems from the trustworthiness of the data [...addressing the use of spreadsheets] I like things in their raw format just for my own sake [...] my own stamp of approval. [...] I guess it's easy for my eyes to decode differences or irregularities. You should be able to visualize the data and get the same output. I don't know why. I just use tables..." (P1). Others also used raw data tables, but did so to scaffold their learning of the data processing mechanics and the affordances of the visualizations as analytic tools: "[...] having that [raw data table] side by side with the visualization helped me to interpret: Okay, what's the visualization trying to tell me here?" (P4).
296
+
297
+ Similar issues surfaced with the AvObs tool. Early design iterations employing bar charts were seen as an impediment to sensemaking. Meanwhile, the glyph-based design was thought to hold more methodological utility as it more closely resembled and supported their mental model of how to analyze these data. "I like seeing the individual events more than the aggregate... It seems like full of flaws and limitations to kind of summarize all the [avalanche] activity with one number..." (P11). Despite our prototype using individual marks to represent each individual report, some forecasters still wanted the ability to see table-based displays. We speculate that this, similar to the WxObs tool, is due to issues of trust and learning how tacit analytic procedures associated with existing table-based views are or are not supported in the AvObs tool.
298
+
299
+ § 7.2.3 FORECASTER REFLECTIONS
300
+
301
+ Forecasters who adopted the WxObs visualizations more readily in their work found the tool provided them with a richer and deeper understanding of meteorological phenomena than traditional data tables alone. Drawing a historical comparison to the role of computers in meteorology, forecasters view visualizations as a stepping stone in a transitional phase towards more data-driven modeling. "[T]here was a transitional phase there where the computer was more an aid to help the forecaster make some initial assumptions... then the forecaster would tweak the forecast and actually write the forecast manually still... and now we're to the point where that really isn't the case..." (P12).
302
+
303
+ Meanwhile, forecasters reported feeling satisfied with how the AvObs visualization prototype represented and supported their analytical processes. "[The visualization] helps to smooth the data [...] and just at a glance [...] but it's not smoothing where I can't then [...] tease out nuances [...] I feel like it's really true to the data, which is a collection of individual points, kind of disparate points from across a forecasting region..." (P2).
304
+
305
+ § 8 DISCUSSION
306
+
307
+ Throughout our 3 studies, we found that critical issues of ambiguity arise in three contexts: the data, the process of analysis, and the challenges of communicating both data and interpretation to coworkers and the general public. We unpack the role of ambiguity, the concomitant challenges, and the strategies used to deal with ambiguity in each of these contexts. Our findings highlight the need for more effective design interventions. We discuss each in turn.
308
+
309
+ § 8.1 SOURCES OF AMBIGUITY
310
+
311
+ § 8.1.1 DATA
312
+
313
+ Ambiguity emerges from data because they are incomplete simplifications of the complex phenomena they represent. Ambiguity may be involved in the expression of data or how representative data are of phenomena of interest. Whether reasoning about multiple types of data uncertainty in weather station telemetry or what field-reported avalanche observations mean for avalanche conditions more broadly, forecasters use their knowledge, experience, and cues within the data to explore plausible explanations that account for what they see. Here, provoking alternative interpretations serves a productive purpose in analysis.
314
+
315
+ Forecasters try to capture relevant nuances of interpretation about specific data through daily discussions. Often this serves to disambiguate meaning by providing an optimal or appropriate framing for the data: for instance, the shared understanding that readings from weather stations at windy locations need adjustment when inferring broader wind patterns. We note that the forecasters' corpus of organizational knowledge is predominantly oral tradition exchanged in application to the immediate demands of work. Such a mechanism for knowledge exchange is vulnerable to information loss.
316
+
317
+ § 8.1.2 ANALYTIC PROCESS
318
+
319
+ Ambiguity both serves a productive purpose in analytic processes and presents challenges for the management and navigation of analyses. Alternative interpretations are explored as part of sensemaking often taking the form of alternative scenarios in risk analysis and risk prediction. Either through mental visualization or explicit sketches, forecasters provoke and imagine alternative scenarios to explore potential risks or explanations of data.
320
+
321
+ The judgments and analytic choices made during analysis represent alternative potential analytic paths through data. As forecasters weigh evidence and update their understanding of avalanche conditions, they iteratively adjust knowledge artifacts to match their understanding. However, the evidential reasoning process behind their judgments is often left uncaptured and may be difficult to reconstruct. This poses challenges for managing analysis as it may be unclear what work is completed and what remains to be done.
322
+
323
+ § 8.1.3 COLLABORATION AND COMMUNICATION
324
+
325
+ Forecasters each hold a unique perspective and interpretive lens, presenting a form of ambiguity. Forecasters use strategies like regular discussions or hand-off notes to exchange knowledge and disambiguate how to interpret each other's assessments by capturing their reasoning processes. However, given the additional effort of this task and the difficulty in anticipating what may be relevant, such documentation is often left incomplete. This leaves forecasters having to speculate about their colleagues' reasoning processes.
326
+
327
+ Forecasters translate their own complex understanding of avalanche conditions into simple terms to ensure that members of the public, whether novice or expert, can apply appropriate risk-management strategies. In doing so, forecasters mitigate the risks of potential scenarios the public might encounter or the confusion that might result from overly technical communications. Quite often, this means reconciling alternatives. For instance, in a situation where two avalanche problem types require the same risk mitigation strategies, forecasters will use one of them and supplement any further guidance that might be necessary using plain and actionable language. The myriad of ways to communicate hazards presents its own form of ambiguity. Moreover, individual forecasters differ in how they judge avalanche hazards and apply assessments [44, 77].
328
+
329
+ § 8.2 DESIGN IMPLICATIONS
330
+
331
+ § 8.2.1 WHEN TO BREAK THE RULES
332
+
333
+ Conventional visualization design principles value precision-based visual variable effectiveness rankings as a basis for design decisions. However, as others have highlighted [6], this is an oversimplification of how visualizations are used. Visual pattern detection and visual thinking extend far beyond the precise extraction of singular values, and more importantly, displays that optimize for precision may have detrimental effects on other types of operations. With the need for close scrutiny of data and the potential for alternative interpretations, overly precise displays can give a false sense of precision and forfeit the perceived need for further scrutiny.
334
+
335
+ Our research has also highlighted that while the traditional 'overview first' mantra certainly has value in this application, it leaves a need for more fluid access to and control of underlying raw data without overly onerous interactions. The properties of these data, like their ambiguous expressions or the varying data-generating processes, challenge conventional visualization approaches, which can hide critical details that cue appropriate framing for data. While our designs shifted some focus to these cues, the need for bottom-up raw-data-driven processes was still highlighted in the feedback we received.
336
+
337
+ When dealing with heterogeneous and ambiguous data, designers should consider design approaches that best support the sensemaking processes involved rather than relying on conventional visualization mantras with a one-size-fits-all approach. This reflects a broader need for improved guidance on how the affordances of visualization design can support the relevant cognitive processes needed for specific problem solving and sensemaking tasks. To do so, a characterization of what tasks can be supported by visualizations needs to move beyond what can be measured in lab experiments (e.g. low-level perceptual processes or decoding statistical properties of data). We suggest that a "macrocognitive" lens [42], one that values ecological validity and the complexities involved rather than strict control of variables, may help researchers identify such tasks.
338
+
339
+ § 8.2.2 DESIRABLE DIFFICULTY
340
+
341
+ Introducing cognitive difficulties in the context of visualization is thought to improve the memorability of insights [32]. Our research suggests that enabling or encouraging sensemaking around ambiguity is another beneficial outcome. There may be other benefits of introducing difficulties in visualization that remain to be identified.
342
+
343
+ § 8.2.3 ACCESS TO RAW DATA SUPPORTS SENSEMAKING
344
+
345
+ Through our design study, we learned that visual displays of heterogeneous and ambiguous data should aim to reveal the relevant contextual details necessary to discern appropriate interpretations. Abstractions like numerical aggregations can occlude such details and impede sensemaking. Instead, we recommend designs such as unit visualizations that support visual aggregations or those showing the relevant granularity of data alongside numerical aggregations (in tables, for instance). This allows alternative interpretations to be provoked when trying to understand how data come to represent a phenomenon of interest. In addition, access to raw data can support the process of learning and adopting new analytic tools by revealing underlying data processing mechanics [3]. Hasty transitions to new analytic systems risk the loss of a host of implicit procedural knowledge that may not be supported by new approaches. This can cause issues of trust. Showing raw data alongside more abstracted views of the same data can aid comprehension of new tools and allow users to evaluate their affordances.
346
+
347
+ § 8.2.4 CAPTURE AMBIGUITIES EXPLICITLY
348
+
349
+ We argue that design solutions need to extend beyond the representation of existing data. Managing an analysis with many contingencies and nuances of interpretation is difficult and is vulnerable to information loss, particularly when analysis is shared. To better serve the analysis at hand and to improve collaborative analysis, we suggest that the nuances of data interpretation should be captured explicitly during analysis. This would serve to characterize ambiguities through the externalization of relevant knowledge and the enrichment of data. We must take care that these interventions remain lightweight and contextually anchored to avoid undue effort. We draw inspiration from the concept of "active reading", where knowledge generated during the process of reading is captured with external representations such as computationally-enabled markup and annotations [56]. Researchers have demonstrated that such techniques can be extended to analysis using visualizations [68, 80]. Annotations are a general-purpose technique that has been applied as a strategy to deal with ambiguity [9] as well as implicit errors [55]. This suggests annotations could be more specifically tailored and extended to address the challenges of ambiguity. Other forms of markup [3], including annotations, employed for the nuanced interpretation of data are often embedded in the ubiquitous spreadsheet, perhaps the most widespread analytic tool. Tables are flexible and allow direct interaction with data, which might explain why users often turn to them to support complex sensemaking. The affordances of tables are well-suited to deal with the challenges of ambiguity and may serve to guide the design of visual analytics systems in applications dealing with such challenges.
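+
+ As a sketch of what such lightweight, contextually anchored capture could look like (a hypothetical schema with invented identifiers, not a system described in this paper):
+
```python
# Minimal sketch of contextually anchored capture (hypothetical schema,
# not a system from this paper): a nuance of interpretation stays bound
# to the specific report it concerns.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Annotation:
    report_id: str    # anchors the note to a specific field report
    author: str
    text: str         # free-form nuance, e.g. a known reporting bias
    tags: list[str] = field(default_factory=list)
    created: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

note = Annotation(
    report_id="infoex-2021-12-03-0417",  # invented identifier
    author="P3",
    text="This operator calls sizes conservatively; weigh the reported "
         "size 2.5 as closer to a 2.",
    tags=["conservative-bias", "ambiguous"],
)
print(note)
```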
350
+
351
+ § 8.2.5 EXTERNALIZATIONS CAN BE VAGUE
352
+
353
+ Ambiguity is often the start of a sensemaking process. At such early stages, understanding may be inchoate and difficult to articulate, calling into question the utility of highly detailed capture mechanisms such as annotations. In collaborative analysis, it is difficult to anticipate the needs of others. Collaborators might only form an intuition about a problem that may be important for others to be aware of [40]. This is because the relevance of any such problem is context-sensitive [82]. Standardized protocols for sharing analysis often fail because designers of such protocols cannot adequately account for and predict all the unique information or complexity that might arise [66]. These considerations are important whether collaboration is with others or oneself at a future point in time.
354
+
355
+ Simpler capture mechanisms may address the difficulties of articulating complexity. Passive capture mechanisms such as interaction logs provide one lightweight and context-sensitive solution. Interaction logs have been used to infer reasoning processes [19] and are frequently discussed as approaches for documenting analytic provenance [83]. Interaction logs, however, only show behaviours and are indirect indicators of reasoning processes. User-controlled markup may still be necessary to capture what is relevant. Researchers in clinical healthcare settings have supplemented hand-off protocols with vague metrics like gut feelings about a patient, time spent with a patient, or how medical equipment in a room has been moved around to take advantage of practitioners' shared work environment and culture [58]. We can take inspiration from this work. To capitalize on the shared digital working environment, simple markup such as tagging of data or representations may be all that is necessary to signify ambiguity. Tags may signify important pieces of evidence, how evidence is weighed and relates to assessments, or may simply serve to raise awareness of ambiguity and prevent it from being lost and risking potential misinterpretation. Forecasters can use their shared working environment to maintain context and capture ambiguities without having to precisely articulate them. Awareness of uncertainty is critical for ensuring trust in findings [69] and we argue the same applies to awareness of ambiguity.
356
+
357
+ § 8.2.6 DATA ENRICHMENT REQUIRES METADATA MANAGEMENT
358
+
359
+ The use of more explicit data enrichment and ambiguity capture raises the question of how long captured data should persist as part of the working environment. Such markup may only be relevant for one working session and one individual. It might be relevant across several working days and for multiple collaborators. Or, it might take a more permanent form in a corpus of organizational knowledge. Designers should consider ways to control or account for the persistence of captured data.
360
+
361
+ Metadata created during analysis within a visual analytics system are bound to a representation rather than the underlying database. This raises questions about how such metadata may be queried, retrieved, or reused in contexts outside of the one they are created in and originally bound to. Designers need to consider how metadata can be reused and translated across analytic contexts.
362
+
363
+ § 8.2.7 UNSTRUCTURED METADATA REQUIRE SCHEMATIZATION
364
+
365
+ Ad-hoc data enrichment and ambiguity capture pose practical challenges at scale. Annotations tend to produce large amounts of unstructured data that can be difficult to reuse. Such data require a schematization mechanism to make them tractable for future reuse. Mechanisms for eliciting such data may be structured ahead of time, for example through survey-like questionnaires. Metadata gathered at the time of elicitation, such as timestamps or application states [53], might also provide some structure. Alternatively, natural language processing approaches such as ontology learning may lend themselves to schematizing such metadata. However, we stress that the use of such algorithms should maintain transparency and give supervisory control to users. As we have learned in our design study, even simple statistical abstractions can obfuscate details paramount to reasoning about ambiguity. Further, highly complex technological solutions are more vulnerable to failure [82]. Consequently, the use of automation or algorithms should be carefully designed to make data processing transparent in support of human comprehension.
366
+
367
+ § 8.2.8 BABY AND THE BATHWATER
368
+
369
+ Our experiences developing visualization prototypes for avalanche forecasters have highlighted the costs associated with introducing new analytic tools. The forecasters have developed visual reasoning strategies for interrogating data in table formats. Many of these procedures are likely tacit habits developed through practice. When introducing new tools, even basic visualizations, there is a transitional period in which practitioners must evaluate, in practice, which capabilities are gained and which are no longer supported. Until practitioners thoroughly understand how a new analytic tool fits within their broader sensemaking toolkit, issues of trust will persist.
370
+
371
+ Computationally-enabled analytic tools are becoming ever more sophisticated and complex. While there are real benefits to such powerful tools, designers need to consider the learning and unlearning of procedures associated with the adoption of new approaches. This concern is common and obvious in the implementation of new systems, yet it is often forgotten and deserves more attention. It is particularly important in applications involving risk-based decision-making and time constraints, where there are severe consequences for misinformed decisions.
372
+
373
+ § 9 LIMITATIONS
374
+
375
+ We note that while our first study had additional coders to test reliability, data from subsequent studies were analyzed by a single coder, which limits the reliability of our findings. However, our understanding of the challenges that forecasters face was incorporated into the prototypes within our design study, and the feedback forecasters provided throughout our close collaboration served as a form of validation of that understanding. Such trade-offs are common in long-term, qualitative, and ethnographically inspired research aimed at deep domain understanding.
376
+
377
+ § 10 CONCLUSION
378
+
379
+ We have presented findings from a set of qualitative studies with public avalanche forecasters. Our research highlights that ambiguity presents challenges and unmet needs in critical and complex sensemaking. We propose a formative characterization of ambiguity across three levels of abstraction in analysis: data, analytic process, and collaboration and communication. The key lesson of our research is that ambiguity should be explicitly considered and designed for. While even simple visualization design choices can serve to enable or impede sensemaking around ambiguity, we argue for more targeted and explicit approaches. Our findings may inform future research and the design of tools in other complex risk-management domains such as extreme weather forecasting or the forecasting of other natural disasters. This work represents a preliminary attempt to characterize ambiguity and define a design space for visual analytics, but many questions remain unexplored. Further study is necessary to evaluate our existing and proposed design solutions to more rigorously understand their impact and how they address the challenges of ambiguity.
380
+
381
+ § ACKNOWLEDGMENTS
382
+
383
+ Thanks to Avalanche Canada, the Vancouver Institute for Visual Analytics (VIVA), the Big Data Initiative at Simon Fraser University (SFU), the SFU Avalanche Research Program, and our reviewers for their thoughtful feedback. This work was supported by Mitacs through the Mitacs Accelerate program and the Natural Sciences and Engineering Research Council Industry Research Chair in Avalanche Risk Management (grant no. IRC/5155322016), with industry support from Canadian Pacific Railway, HeliCat Canada, Canadian Avalanche Association, and Mike Wiegele Helicopter Skiing.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/SIclhxYV6f9/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,303 @@
1
+ # Simultaneous Worlds: Supporting Fluid Exploration of Multiple Data Sets via Physical Models
2
+
3
+ Category: Research
4
+
5
+ ![01963e60-e0e9-74e6-9ce9-e52ff4b3e536_0_214_383_1365_313_0.jpg](images/01963e60-e0e9-74e6-9ce9-e52ff4b3e536_0_214_383_1365_313_0.jpg)
6
+
7
+ Figure 1: Examples of three opportunities for integrating visualizations and architectural scale models on tabletops. (Left) Satellite imagery shown situated with a physical model. (Center) Multiple data visualizations composed together using the geometry and position of a model. (Right) Individual buildings from a model are used to manipulate and author visualizations.
8
+
9
+ ## Abstract
10
+
11
+ We take the well-established use of physical scale models in architecture and identify new opportunities for using them to interactively visualize and examine multiple streams of geospatial data. Overlaying, comparing, or integrating visualizations of complementary data sets in the same physical space is often challenging given the constraints of various data types and the limited design space of possible visual encodings. Our vision of "simultaneous worlds" uses physical models as a substrate upon which visualizations of multiple data streams can be dynamically and concurrently integrated. To explore the potential of this concept, we created three design explorations that use an illuminated campus model to integrate visualizations about building energy use, climate, and movement paths on a university campus. We use a research through design approach, documenting how our interdisciplinary collaborations with domain experts, students, and architects informed our designs. Based on our observations, we characterize the benefits of models for 1) situating visualizations, 2) composing visualizations, and 3) manipulating and authoring visualizations. Our work highlights the potential of physical models to support embodied exploration of spatial and non-spatial visualizations through fluid interactions.
12
+
13
+ Keywords: Information visualization, interactive surfaces, data physicalization, architectural models
14
+
15
+ Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods
16
+
17
+ ## 1 INTRODUCTION
18
+
19
+ Although data sets are often examined in isolation, they are rarely generated that way. Rather, every piece of data represents one small element in a larger picture and captures only one of many perspectives of the places, people, and phenomena it seeks to characterize. Overlaying, comparing, or integrating visualizations of multiple complementary data sets in the same physical space is often challenging [8], given the unique constraints of various data types and the limited design space of possible visual encodings. Moreover, for data sets that reference the physical world, much of the surrounding context remains unrecorded, and can be appreciated only by visualizing the data in situ, where physical and temporal scales can make observation difficult. For example, it is impossible to simultaneously experience summer and winter climate conditions at the same location. Similarly, in the physical world, it is impossible to observe large-scale systems, such as an entire campus or urban area, directly. As a result, it is difficult for viewers to examine many data sets at once, and viewers often miss out on ambient and environmental data that might provide context and support interpretation.
20
+
21
+ Our work proposes the concept of "simultaneous worlds" (Figure 2), which highlights how physical architectural models can provide context for and support transitions between multiple data visualizations. To explore the potential of this concept, we built a tangible tabletop system using scale models of a university campus. Our tabletop system juxtaposes visualizations of operational data such as heating and cooling costs alongside ambient and contextual data sets including environmental conditions, occupancy and movement logs, and historical aerial photos. The interactive table uses rear-projection to overlay visualizations of this data with transparent trackable architectural models.
22
+
23
+ We explored this particular system with several sets of stakeholders. These included campus energy managers (who were interested in contextualizing data about energy use and weather), architects (who were interested in understanding patterns of human movement on campus), and students (for whom these kinds of physical models could increase awareness around important topics like energy use). In our explorations, we wanted to emphasize the broad utility of our tabletop system for use with other data visualizations, including human movement and occupancy data.
24
+
25
+ For our first contribution, we examine three avenues via which physical architectural models can support data exploration and showcase the benefits they provide (Figure 1). We explore how architectural models can situate data, improving viewers' ability to identify locations and connect data to them. We then highlight how visualization developers can use models to anchor composite visualizations that combine multiple datasets and visualizations together in the same space. Finally, we show how physical models can support fluid, tangible interactions which allow viewers to explore and reconfigure spatial visualizations. We then illustrate these concepts via two example data analysis tools, one of which uses our system to visualize campus climate and energy use, and another visualizing human movement across the university.
26
+
27
+ Our second contribution is the documentation of our design process using a research through design approach. We conducted this research as an iterative design-oriented exploration of the potential of simultaneous worlds. We collected reflections from a variety of stakeholders, including campus architects and energy managers, who participated in the design of the system. Throughout the process, we recorded reflections, framings, and opportunities, using these qualitative and observational practices to guide our research. The result is a set of framings and prototypes that more deeply illustrate the potential of architectural models to serve as tangible, context-specific interfaces for data visualizations.
28
+
29
+ ![01963e60-e0e9-74e6-9ce9-e52ff4b3e536_1_161_165_706_1200_0.jpg](images/01963e60-e0e9-74e6-9ce9-e52ff4b3e536_1_161_165_706_1200_0.jpg)
30
+
31
+ Figure 2: Multiple geospatial and visualization layers can all be visualized in the context of the same physical architectural model. These layers serve as "Simultaneous Worlds", supporting integrated exploration and reasoning.
32
+
33
+ Our initial findings show that the models provide immediate and familiar symbols that allow the user to quickly understand visual encodings in a variety of different visualizations without annotation or lengthy explanation, and provide additional benefits related to the geometric and spatial characteristics of the model. We conclude with a discussion on additional possible application areas and considerations for applying the concept of simultaneous worlds for visualization researchers and designers of tabletop systems.
34
+
35
+ ## 2 RELATED WORK
36
+
37
+ Traditionally, most data visualization tools have focused on creating new visual representations that support the intentional exploration of specific data sets of interest. Yet, in practice, interpreting data and making informed decisions often calls for additional context, which situates the data with respect to locations, events, and phenomena not captured in the data itself. To address this, we explore how physical models can serve as a substrate for data analysis tasks, providing a common set of anchors upon which to display both operational data that drives analyses and ambient data which provides context to them. Our work sits at the intersection of research on physical architectural models, situated data visualizations, and tangible interfaces.
38
+
39
+ ### 2.1 Architectural Models and Tabletops
40
+
41
+ Digitally-augmented physical and architectural models have a relatively long history in HCI, including examples like the Metadesk [29] and URP [30] which provide some of the earliest demonstrations of the value of tangible computing. A diverse range of subsequent projects have also explored how physical modeling [2, 22], shape-changing displays [10], and augmented architectural models [28] can support physical planning, drive social engagement, and present data specific to urban settings. The classic tabletop literature has established the collaborative advantages of physical tabletop systems, allowing for shared ownership of the territory of the work space, as well as ease of use in navigation, locomotion, and turn-taking [26].
42
+
43
+ Although the current trend in urban analytics focuses on the exploration of 3D digital models in virtual reality, physical models provide an immersive experience of data within the context of a "real-world" environment that does not rely on VR equipment. Chandler et al. characterize the benefits of analyzing urban data within a 3D model over 2D maps [19, Chapter 11], but also note some of the challenges associated with supporting collaborative discussion in virtual environments. Physical models may provide useful alternatives to these tools in a variety of application areas, including maps for emergency response, real estate development, and neighborhood planning, which could leverage the collaborative benefits of tabletop models with site-specific data visualizations.
44
+
45
+ ### 2.2 Situated Visualizations
46
+
47
+ Work on situated visualizations (visualizations displayed in related environments [32]) and embedded visualizations (visualizations deeply connected to specific spaces, objects, and entities [33]) highlights how visualizing data in the physical world can help provide environmental and ambient context like weather and traffic conditions. Mobile and augmented reality visualization tools [31, 32], which overlay data on top of physical referents in a viewer's surroundings, represent one popular approach. Viewers in physical spaces can also observe environmental traces like paths, physical wear, and decay; these traces give a sense of ongoing ambient processes. Designers can likewise create indexical visualizations [21] and Autographic Visualizations [20] that expressly illustrate ambient data in the environment.
48
+
49
+ However, in many cases, the distance, size, or physical inaccessibility of relevant environments can make it difficult or impossible to display data on top of them to support in-situ analysis and decision-making. Moreover, ambient data that could provide context about spaces and phenomena may not be visible to the naked eye and may span larger timescales or geographic extents than a viewer can reasonably observe. In response, we examine how architectural scale models [5] can serve as facsimiles or proxies for real-world environments [33], providing anchors upon which both operational and ambient data can be examined and integrated.
50
+
51
+ ### 2.3 Tangibles on Tabletops
52
+
53
+ According to Ishii, "the key idea of Tangible User Interfaces (TUIs) is to give physical forms to digital information...to serve as both representation and controls for their digital counterparts" [14]. Many projects embody either representation or control, but not both. Most often, tangibles are used as tools for interaction and control, such as TZee objects [34] and Lumino [3]. Other projects in architecture and urban planning also consider tangibles. The MIT CityScope uses projection onto Lego objects for urban planning and other scenarios [2], while Maquil et al.'s ColorTable [18] uses simple primitive forms for both representing generic road and wall forms and as input devices. The generic forms used in ColorTable, however, do not show important details such as height, context, or real-world scale: three variables identified in immersive analytics [19] as essential for urban design analysis. Within most of the existing tangible tabletop projects, there is a missed opportunity to encode meaning in the material and geometric properties of the object. More recent systems like Ens et al.'s Uplift [8], meanwhile, have mostly focused on physical models as a background for augmented reality visualizations above the tabletop. Ren and Heiecker further support the use of physical models over VR experiences in their 2021 study, which revealed faster, more confident answers and long-term memorability with physical models for data visualization [24]. Our approach is to use site-specific architectural models to display and contain the visualizations of a specific place. Like other projects, we track the models to allow people to use them as interaction handles, and by doing so, control aspects of the displayed visualizations.
54
+
55
+ From a technical perspective, occlusion is a significant problem with top-down projection systems, not only because the arms of the user block the projected images, but also because the tangibles occlude whatever is on the illuminated surface beneath them. Most TUIs use visible markers for detection by a computer vision system, which requires opaque objects and top-mounted projection. Tangible 3D Tabletops by Dalsgaard et al. [7] uses two projectors to project images onto 3D cubes that represent buildings, plus a bottom projector to project visuals below. Using this system, the designer can project architectural details onto the sides of the blocks; however, image quality is limited by the resolution of the projector and the size of the cubes.
56
+
57
+ ### 2.4 Data Physicalization
58
+
59
+ Data Physicalization [15] is an emerging research area that studies the use of material and geometric encodings to capture data. While this is a closely related area, we do not consider this work a data physicalization project as our simultaneous worlds prototypes never encode data using the physical form or properties of the model. Instead, the data visualizations remain strictly 2D while the physical models provide context, define the shape of the visualizations, and serve as interactive handles for them.
60
+
61
+ ## 3 SIMULTANEOUS WORLDS
62
+
63
+ We introduce the concept of "simultaneous worlds", in which architectural models and data visualizations inhabit the same physical space. Using a research through design framework [35], we document our iterative design process for a 3D interactive campus model. Based on ongoing conversations with energy, building, and operations managers, as well as students and architects over the course of approximately two years, we built and revised two interactive model systems (Figure 5). We also demonstrated the system publicly seven times during its development, including as part of a citywide art and science festival, at department and educational showcase events, and lab demo days. In all cases, our intent was to examine the model's ability to facilitate a quicker understanding of the data and to expand the connections viewers could make between energy use and their own experiences on campus, whether as students or administrators. Based on our observations, the paper seeks to highlight interesting potential areas of opportunity for integrating architectural models and visualizations. Through this lens, we illustrate how "simultaneous worlds" offers opportunities for situating spatial and non-spatial datasets and supporting complex reasoning about real world spaces.
64
+
65
+ Our work illustrates the potential for even tighter integration between data visualization and more complex architectural models than URP [30] and CityScope [2], which demonstrated the value of using simple building shapes as a canvas for interfaces to data and simulations. In particular, we highlight how translucent architectural models can provide a substrate for compositing visualizations of multiple complementary datasets including climate information, building automation logs, and human movement traces on tabletop displays. While each distinct information model or representation can exist on its own, we demonstrate how the physical geometry of the models can help connect related visualizations in an integrated fashion. We conducted three design explorations that examine the potential for physical architectural models to help situate, compose, and support interaction with data visualizations.
66
+
67
+ ### 3.1 Tabletop Implementation and Setup
68
+
69
+ We explored these concepts in the context of a bottom-projected tabletop which we built to accommodate a 26 × 46 inch, 1:1700 scale architectural model of a 2.13 km² university campus. This model provided a platform on which to visualize a wide variety of readily-available environmental, social, and infrastructure-related datasets.
70
+
71
+ ### 3.2 Acrylic Campus Model
72
+
73
+ The physical table consists of a projector, a laminated screen with a base map etched into the surface, and a frame made of 80/20 building materials. Our system (Figure 3-left and middle) uses an acrylic model placed on the illuminated tabletop which displays a variety of different spatial visualizations. We constructed the scale model using a mix of digital fabrication and hand-building techniques. The unique outline of every floor of each of the buildings was laser cut from a 1/8 inch acrylic sheet (which, at this scale, was roughly equivalent to the height of one floor). We then stacked and glued these layers together with a clear adhesive. We also etched the surface of the tabletop to include the footprints of each building along with roads, parking lots, trails, and other important physical elements of campus architecture. The tabletop is bottom-projected, with the visualizations visible through and around the model. Due to the translucent and internally reflective nature of the acrylic, the visualizations displayed on the surface reflect up through the building masses, filling the volumes with color.
74
+
75
+ ![01963e60-e0e9-74e6-9ce9-e52ff4b3e536_2_149_1690_1497_375_0.jpg](images/01963e60-e0e9-74e6-9ce9-e52ff4b3e536_2_149_1690_1497_375_0.jpg)
76
+
77
+ Figure 3: Detail images showing system diagram (left), photo of table (middle), and close-up of building tracker markers (right)
78
+
79
+ ![01963e60-e0e9-74e6-9ce9-e52ff4b3e536_3_152_148_718_547_0.jpg](images/01963e60-e0e9-74e6-9ce9-e52ff4b3e536_3_152_148_718_547_0.jpg)
80
+
81
+ Figure 4: Scale model with our campus movement visualization.
82
+
83
+ ### 3.3 Touch Surface and Tangible Interaction
84
+
85
+ To track the position of buildings on the tabletop, we developed a custom tracking system using a single Microsoft Kinect V2 and OpenCV 3 on the Unity game engine. The approach is similar to commercial motion tracking systems like Vicon or OptiTrack. We attach between three and seven small retro-reflective stickers to each of the buildings as tracking markers (Figure 3-right), then illuminate and track them using a Kinect mounted immediately above the tabletop. We use k-means clustering in OpenCV to group the marker positions detected by the Kinect. We then use OpenCV's machine learning tools to train a recognizer to identify buildings based on their total number of markers, the positions of the markers on their perimeters, and the compactness of the cluster. This process estimates the total number of tracked buildings on the table, and outputs positions and IDs for individual recognized buildings. The system broadcasts update events via WebSockets whenever a building is placed on the table, removed from the table, or changes position.
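+
+ As a concrete illustration, the following is a minimal sketch of the clustering-and-matching stage, written here in Python with OpenCV rather than the Unity/C# stack described above; the building names, template feature values, and distance weighting are hypothetical placeholders, not values from our system.
+
+ ```python
+ import numpy as np
+ import cv2
+
+ # Hypothetical per-building templates: id -> (marker count, expected compactness).
+ TEMPLATES = {"library": (5, 120.0), "science_hall": (3, 60.0)}
+
+ def identify_buildings(marker_points: np.ndarray, n_buildings: int):
+     """Cluster 2D marker detections with k-means and match clusters to buildings."""
+     data = marker_points.astype(np.float32)
+     criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 0.5)
+     _, labels, centers = cv2.kmeans(data, n_buildings, None, criteria,
+                                     10, cv2.KMEANS_PP_CENTERS)
+     results = []
+     for k in range(n_buildings):
+         cluster = data[labels.ravel() == k]
+         # Compactness: mean squared distance from markers to the cluster center.
+         compactness = float(np.mean(np.sum((cluster - centers[k]) ** 2, axis=1)))
+         # Nearest template by a weighted (marker count, compactness) distance.
+         best = min(TEMPLATES, key=lambda b: 1000 * abs(TEMPLATES[b][0] - len(cluster))
+                    + abs(TEMPLATES[b][1] - compactness))
+         results.append({"id": best, "position": centers[k].tolist()})
+     return results
+ ```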
86
+
87
+ ### 3.4 Visualizations
88
+
89
+ We implemented two visualization systems for the model. The first (Figure 1-center) is an energy use visualization, developed using Processing, which combines building automation logs and ambient weather data from the university campus collected over a two year period. The second, a movement visualization (Figure 4) developed using HTML, JavaScript, and Mapbox GL, showcases location data from several hundred university students collected between 2013 and 2017. We describe both visualizations in more detail later in the paper.
90
+
91
+ ## 4 COLLABORATIONS
92
+
93
+ Throughout the design process of the tabletop model, we systematically consulted with domain experts including campus operations and energy managers at major project milestones (Figure 5). We collected feedback from these experts via periodic rounds of semi-structured interviews as well as informal demo sessions. After the final system was complete, we also invited four architects to reflect on the system, and discussed the impact of using physical architectural models for visualizing campus-specific data. We also demonstrated the system publicly seven times during its development, including as part of a citywide art and science festival, at department and educational showcase events, and during lab demo days.
94
+
95
+ | Design Phase | 1st Prototype (4 months) | 2nd Prototype (12 months) | Reflection (8 months) |
+ | --- | --- | --- | --- |
+ | Knowledge Production | define, discover, construct | synthesize, refine, re-construct | assess, reflect, synthesize |
+ | Collaborators | Energy Manager; Facilities Director; Sustainability Manager | Energy Manager; Operations Manager; Sustainability Director | Energy Manager; 4 Architects; 16 students; 7 public demos |
96
+
97
+ Figure 5: Design phases and knowledge gathered through collaborations in each phase.
98
+
99
+ ### 4.1 Collaborations with Domain Experts
100
+
101
+ We consulted repeatedly with domain experts including the university's energy manager, operations managers, and personnel from the office of sustainability. We met with a total of five experts over two years, including multiple iterations with the energy manager and sustainability staff. These stakeholders provided access to initial raw data as well as consultation and feedback on the project, helping us to tailor the visualization design to their requirements.
102
+
103
+ In addition to this ongoing engagement, we held informal debriefing meetings near the end of the development process with members of the energy management team to collect additional feedback. We began each semi-structured interview with a demo of the current features and possible interactions, and collected responses and interactions of the participants through notes and video.
104
+
105
+ Our first interview with the campus energy manager was particularly influential, providing a deeper understanding of the campus's existing methods of energy data analysis and what the managers were looking for in a new visualization system for energy use data. The campus's existing web-based dashboard did not engage users or receive as much traffic as the team had hoped, and staff felt the tool was unlikely to raise students' awareness of their energy use on campus. Additionally, the operations team discussed challenges they faced in stakeholder meetings with non-technical university administrators, which were often grounded in static reports and spreadsheets. In particular, the energy manager highlighted the challenge of communicating different types of energy data, each with different units, and expressed a desire for visualizations that could communicate multiple variables simultaneously.
106
+
107
+ After the initial prototype was built, meetings with the university's energy manager, facilities director, and members of the office of sustainability also offered particularly fruitful insights. All staff responded positively to our initial environmental and energy visualizations, and provided detailed feedback which we used to refine the design. Throughout the design process, the initial prototypes functioned as "physical hypotheses" to test the feasibility of our concept and provide direction for future iterations.
108
+
109
+ ### 4.2 Collaborations with Architects
110
+
111
+ Near the end of the project, we also demonstrated the final system to four architects, who provided feedback about the use of site-specific models and data for public engagement. As with our previous engagements, we began with a demo of the system, then followed a semi-structured interview protocol. We tailored the rest of the conversations based on the background and expertise of each architect, and recorded audio of the conversations which we later transcribed.
112
+
113
+ ### 4.3 Analysis
114
+
115
+ Throughout the multi-year deployment, we used an ongoing qualitative synthesis approach in which two of the authors regularly reviewed new notes, interview transcripts, and feedback from collaborators. During this process, we maintained and updated a working set of top-level research themes. Over the first two phases, these emergent themes, as well as more specific input from our domain expert collaborators, guided our prototyping efforts and prompted our exploration of the potential for architectural models to support 1) situating, 2) composing, and 3) interacting with geospatial visualizations. In the third phase, we used the results from our interviews with architects and members of the public to refine our higher-level themes as well as identify further opportunities and challenges for integrating visualizations and architectural models. We also used our system as the basis for a small quantitative study, which we describe in section 5.2.
116
+
117
+ ## 5 ARCHITECTURAL MODELS FOR DATA VISUALIZATION
118
+
119
+ In the following sections, we describe three unique opportunities for integrating physical architectural models and data visualizations on tabletops and illustrate these benefits via our system implementations. Within each section, we critically reflect on these opportunities using feedback and observations from throughout the design process.
120
+
121
+ ### 5.1 Situating Visualizations
122
+
123
+ The physical characteristics of scale architectural models can preserve important details about their original referents (including the buildings' size, height, orientation, and layout) that could make it easier to reason about data from them. As such, situating visualizations within and on top of these models can help analysts retain many of the benefits of examining data in the original setting. Moreover, scale models can permit situated analysis and observations from scales and perspectives that are impossible to access in the physical world.
124
+
125
+ Including the geometric details of the building in terms of height, volume, and facade provides valuable information about a particular building, such as window and exit locations, which are vital for many types of urban design and architectural analysis [5]. Similarly, the empty space around the 3D model is also representative of places in the real world such as courtyards, parking lots, and other spaces that are familiar to the viewer through their experience of the campus. The area around a model surfaces different associations about a space and the buildings within it, and sets up relationships of inside/outside, boundaries, and other spatial relationships. Explorations like Allahverdi et al.'s Landscaper [1] and Buur et al.'s noise curves [4] highlight some of the advantages of incorporating physical representations of data with site-specific physical models. Both highlight the value of maps and models which serve as proxies for locations and make it possible to situate real world data. However, they also emphasize how the lack of depth information in 2D maps can obscure important details that are relevant to analysis.
126
+
127
+ ### 5.2 Recognizability of Maps and Physical Models
128
+
129
+ To understand how the presence of a physical model might impact viewers' ability to interpret the campus layout, we conducted a between-subjects study in which we asked participants to use either a map or a model to identify campus buildings. We recruited 16 participants (four female / twelve male, ages 21 to 42), half of whom had spent less than one year on the campus and half of whom had spent at least one year. Using either a map with outlines of all buildings on the campus (map condition) or the same map projected underneath our physical campus model (model condition), we gave participants two minutes to identify and name as many buildings as possible. We provided participants with paper strips listing the names of all campus buildings and asked them to place the names on the tabletop directly on top of the matching buildings. After two minutes had elapsed, a researcher counted the number of correctly-placed names.
130
+
131
+ ![01963e60-e0e9-74e6-9ce9-e52ff4b3e536_4_922_164_720_156_0.jpg](images/01963e60-e0e9-74e6-9ce9-e52ff4b3e536_4_922_164_720_156_0.jpg)
132
+
133
+ Figure 6: Number of campus buildings correctly identified by participants using only the map (top) and participants using the physical model (bottom). Participants who had spent less than one year on campus appear in blue, while those with more than one year appear in orange. Error bars show 95% bootstrapped confidence intervals.
134
+
135
+ The results from our model study with 16 students (Figure 6) suggest that participants who had access to the model tended to identify campus buildings more accurately. While some participants fared poorly in both conditions, only one participant in the map condition was able to correctly identify more than 11 buildings. By contrast, five of the eight participants in the model condition were able to identify 14 or more. Anecdotally, individuals in the model condition reported that they were able to rely on the heights of buildings as well as their visual signatures, allowing them to more readily align their mental model of the campus with the representation on the tabletop.
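+
+ For reference, a minimal Python sketch of the percentile bootstrap behind the 95% confidence intervals in Figure 6 follows; the per-participant counts are invented for illustration and are not our study data.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ model_counts = np.array([14, 16, 15, 3, 14, 17, 5, 14])  # hypothetical counts
+
+ def bootstrap_ci(samples, n_boot=10_000, ci=95):
+     """Resample with replacement and take percentiles of the resampled means."""
+     means = [rng.choice(samples, size=len(samples), replace=True).mean()
+              for _ in range(n_boot)]
+     lo, hi = np.percentile(means, [(100 - ci) / 2, 100 - (100 - ci) / 2])
+     return samples.mean(), lo, hi
+
+ print(bootstrap_ci(model_counts))  # sample mean with its bootstrapped 95% CI
+ ```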
136
+
137
+ These findings suggest that physical models can more easily serve as a stand-in for real-world geography, allowing viewers to understand the locations referenced in visualizations and helping them access their own mental model of those spaces, providing context that could help them interpret data. P6 noted that "the model helped me see what I see everyday", while P16 explained that "without the height I wouldn't have been able to tell which one was MacKimmie Tower" (a tall landmark building on campus). Another participant, P15, had been on campus for more than two years, and said that the model helped with "the odd shaped buildings you're used to seeing; that's the tall one, the shape of the buildings helped to see which was which".
138
+
139
+ While our models capture the relative heights and geometry of campus buildings, they still fail to represent much of the finer-grained detail of the buildings themselves, including construction materials, facades, or surrounding greenery. However, our experiences projecting satellite imagery onto the model (as in Figure 1, left) highlight how additional imagery can align well with simple transparent models, providing texture and detail that can give an even richer sense of the real-world environment and further contextualize data.
140
+
141
+ ### 5.3 Composing Visualizations
142
+
143
+ Any single analysis often involves data from a variety of sources. Visualization designers typically look for ways to join datasets directly using some shared information (an explicit shared key, dates, etc.) in order to visualize them together as a single view. When this is not possible, designers often generate multiple independent visualizations and display them together, using dashboards and overlays to support visual comparison between them.
144
+
145
+ Spatial and environmental datasets often present a unique challenge, since they frequently use different levels of hierarchical organization which can make it difficult to join datasets directly. Many architecturally-relevant datasets refer to specific point locations (latitude-longitude) or spatio-temporal paths (like the walking trajectories of individuals or vehicles). However, others may refer to regions, buildings, rooms, and other architectural elements with very different scales. This can make it challenging to simultaneously visualize datasets with different scales together (such as building-level energy use and city- or county-level weather data). Moreover, other important pieces of data relevant to the analysis (such as the current price of electricity) may have no spatial component at all.
146
+
147
+ While most of these datasets can be plotted spatially, simply overlaying them one on top of the other quickly reduces their legibility. We illustrate how designers can use the physical geometry of scale architectural models to compose multiple visualizations together and facilitate transitions between them using the shared context of the model. Specifically, we examine how models and their subcomponents can anchor, bound, and define the geometry of visual marks, providing new opportunities for integrating multiple simultaneous views. These approaches allow designers to create composite visualizations [16] that encode more diverse combinations of data, while also creating strong associations between the components of the physical model and the related visualizations, reducing the need for labels and annotations.
148
+
149
+ Anchoring. Using an anchoring approach, the physical positions of an architectural model and its sub-elements define the position of visual marks. Simple examples include positioning visual marks at the centroids of buildings (Figure 7a) or connecting visual marks (or even whole visualizations) to pieces of a model using call-outs or connecting lines. Because anchoring only specifies the position of the visual marks and not their form, it can create a strong visual connection between the visualization and the model while still permitting a wide range of different visual encodings.
150
+
151
+ Bounding. In contrast, a bounding approach uses the shape of a model and/or its sub-elements to separate and contain visualizations. This approach uses the edges and sub-components of the model to divide space, simultaneously showing multiple different visualizations outside (Figure 7b) and inside (Figure 7c) the model. This division of space makes it possible to composite multiple separate visualizations together while creating strong visual associations between visualizations and individual pieces of the model. Bounding can also be used to carve out positive and negative spaces in and around visualizations, creating a stronger sense of alignment between the model and the visualization(s).
152
+
153
+ Defining Geometry. Alternatively, designers can also use the shape of the model to define the geometry of visual marks themselves, creating visualizations that extend the model. For example, colored strokes around the outside (Figure 7d) or inside of a model component can encode categorical or quantitative data related to that element. Similarly, designers can use the geometry of model components as the basis for data-driven shadows (Figure 7e) or extrusions that extend beyond the bounds of the model. While systems like URP [30] and MetaDesk [29] have used these kinds of cast shadows to support light and shadow studies in urban environments, we instead use a shadow metaphor to simultaneously visualize multiple abstract data streams around individual buildings.
154
+
155
+ ![01963e60-e0e9-74e6-9ce9-e52ff4b3e536_5_150_1715_702_258_0.jpg](images/01963e60-e0e9-74e6-9ce9-e52ff4b3e536_5_150_1715_702_258_0.jpg)
156
+
157
+ Figure 7: Three approaches for using physical architectural models to compose visualizations. Pieces of a model can (a) anchor visual marks, (b, c) bound and mask visual marks, or dictate the geometry and encodings of visual elements like (d) borders and (e) shadows.
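+
+ To make the anchoring and bounding operations concrete, here is a minimal Python sketch using shapely for the geometry; the footprint coordinates and layer names are hypothetical, and the deployed prototypes implement these operations in Processing and JavaScript instead.
+
+ ```python
+ from shapely.geometry import Point, Polygon
+
+ footprint = Polygon([(0, 0), (40, 0), (40, 25), (0, 25)])  # hypothetical building
+
+ # Anchoring: position a visual mark (e.g., a circle) at the building centroid.
+ anchor = footprint.centroid
+
+ # Bounding: route each data point to the visualization inside or outside the shape.
+ def route(xy):
+     return "inside_layer" if footprint.contains(Point(xy)) else "outside_layer"
+
+ print(anchor, route((10, 10)), route((60, 10)))
+ ```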
158
+
159
+ #### 5.3.1 Prototype Visualizations
160
+
161
+ Our two example visualizations each use a combination of these operations to create composite visualizations that showcase multiple datasets in and around the model.
162
+
163
+ Visualizing campus climate and energy use. The first visualization (Figure 8-left) uses a bounding approach to simultaneously visualize building management data and energy use data for individual campus structures with daily climate data. We created this visualization by integrating daily heating and cooling cost data for individual campus buildings with daily minimum and maximum outdoor temperatures covering a 1-year period from 2016 to 2017. By default, we use the interior of individual buildings to visualize daily heating and cooling costs in that structure, which we encode using a red-blue color ramp. Meanwhile, we use the area around the buildings to visualize a temperature gradient for the same day. Viewers can also toggle the visualization to display electricity and water use inside individual buildings and use a time-series plot below the map to scroll through or play back the entire year's worth of data. Seeing the climate data and energy use together might allow for easy anomaly detection. For instance, viewers can quickly detect if a building is showing high cooling levels even during cold weather events, signalling potential mechanical issues or data quality concerns.
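+
+ A minimal sketch of this kind of diverging red-blue ramp is shown below; the normalization constant is an assumption for illustration, not the exact ramp used in our prototype.
+
+ ```python
+ def heat_cool_color(heating_cost, cooling_cost, max_cost=500.0):
+     """Map a building's daily net heating/cooling cost to an (r, g, b) color:
+     net heating fades white -> red, net cooling fades white -> blue."""
+     t = max(-1.0, min(1.0, (heating_cost - cooling_cost) / max_cost))
+     if t >= 0:  # net heating
+         return (255, int(255 * (1 - t)), int(255 * (1 - t)))
+     return (int(255 * (1 + t)), int(255 * (1 + t)), 255)  # net cooling
+ ```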
164
+
165
+ We also examine the use of anchoring and geometry approaches via a second style of visualization in the work area to the right of the main campus map. Here, viewers can examine individual buildings outside of the geographic constraints of the main map, allowing them to display more datasets simultaneously. In this area, we encode buildings' overall energy use via the size of a circle anchored at the building's center. Buildings also cast data-driven shadows showing their heating, cooling, electricity, and water use independently.
166
+
167
+ Visualizing human movement on campus. Our second example (Figure 8-right) uses the model to visualize human movement across the campus. We based this visualization on anonymized smartphone location data collected by Galpern et al. in 2017 [11]. This dataset includes 5,530 unique paths drawn from the location histories of 208 students and provides a snapshot of movement patterns across the university over a 4-year period. Because plotting the entire set of paths results in considerable visual clutter, we use the geometry of the model to aggregate and simplify these paths. By default, we bound the visualization using the outlines of campus buildings-showing individual paths colored by the movement direction in outdoor areas, but use solid colors to encode aggregated occupancy inside each building. Viewers can also manipulate the visualization to access additional data by interacting directly with the building models.
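+
+ The sketch below illustrates this aggregation step in Python with shapely (the deployed visualization uses Mapbox GL instead); the footprints and path coordinates are hypothetical.
+
+ ```python
+ import math
+ from shapely.geometry import Point, Polygon
+
+ buildings = {"library": Polygon([(0, 0), (40, 0), (40, 25), (0, 25)])}  # hypothetical
+
+ def aggregate(paths):
+     """Count path segments inside each footprint; keep outdoor segments with bearings."""
+     occupancy = {name: 0 for name in buildings}
+     outdoor_segments = []  # (p0, p1, bearing_degrees)
+     for path in paths:
+         for p0, p1 in zip(path, path[1:]):
+             hit = next((name for name, poly in buildings.items()
+                         if poly.contains(Point(p0))), None)
+             if hit:
+                 occupancy[hit] += 1  # rendered as a solid aggregate fill
+             else:
+                 bearing = math.degrees(math.atan2(p1[1] - p0[1], p1[0] - p0[0])) % 360
+                 outdoor_segments.append((p0, p1, bearing))  # colored by direction
+     return occupancy, outdoor_segments
+ ```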
168
+
169
+ ### 5.4 Manipulating and Authoring Visualizations
170
+
171
+ Because of their size and shape, architectural models can also serve as graspable tokens, which viewers can use to interactively control visualizations associated with them. Depending on scale and level of detail, models can also be broken down in a variety of different ways, separating pieces into city sectors, blocks, buildings, or even parts of buildings such as floors or staircases to create additional controls. Viewers can then interact directly with the model, moving and arranging pieces to perform a variety of analytic operations.
172
+
173
+ Broadly, tangible interactions allow people to "grasp & manipulate bits by coupling bits with physical objects" [14] and offer a number of benefits, including making user interfaces "more manipulable by using physical artifacts" [9]. Interacting with physical objects can offer a tactile and embodied way of exploring the relationships in complex representations, providing "scaffolds" or cognitive aids that help people solve problems that would be more difficult using "brain-internal computation" [6]. Moreover, tangible interaction can be a valuable tool for embodied sensemaking [13].
174
+
175
+ The use of tangibles on tabletops has been widely explored in other domains, but presents a particular set of challenges and opportunities for architectural models. On one hand, architectural tabletop models are a core component of architectural design practice, where scale models are still routinely crafted and manipulated by hand and serve as a locus of design exploration [12]. However, in contrast to other instances of tangibles on tabletops, architectural models as input devices have limited degrees of freedom-constrained by the physical characteristics of the models themselves. (For example, most building models have a natural up and down, and thus are unlikely to support rotation around multiple axes.) Despite these constraints, physical models offer a rich set of possible interactions via which viewers can reconfigure models to gain new information, while simultaneously leveraging the recognizable form and physical properties of the pieces themselves.
176
+
177
+ Based on these insights, we used our two prototypes to examine four specific interaction techniques (Figure 9) which use buildings as physical interaction tokens. These include several interactions in which viewers interact with models to alter visualizations while preserving the original spatial layout. We also showcase how models can support grouping, reorganizing, and re-configuring visualizations when these spatial constraints are relaxed.
178
+
179
+ Reveal. In a reveal interaction, picking up a piece of the model can be used to hide or show information in the visualization. These simple interactions can work well when models are placed in a fixed geospatial layout (like the campus map) and translating or rotating them would disrupt that configuration. In these cases, reveal interactions can trigger queries and filters or change the properties of the underlying visualizations that do not impact their layout. For example, our movement visualization introduces a reveal interaction (Figure 9a) in which lifting a building off of the tabletop hides the occupancy data for that building and reveals the raw movement paths underneath. This particular interaction builds on the intuition of lifting a physical object to reveal the area or objects beneath it.
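+
+ A minimal sketch of the reveal logic, assuming the tracker broadcasts per-building "lifted" and "placed" events (the event names and building ids are hypothetical placeholders):
+
+ ```python
+ visible_occupancy = {"library": True, "science_hall": True}  # hypothetical ids
+
+ def on_tracker_event(event):
+     """Toggle a building's occupancy layer; lifting exposes the raw paths beneath."""
+     building, kind = event["id"], event["type"]
+     if kind == "lifted":
+         visible_occupancy[building] = False
+     elif kind == "placed":
+         visible_occupancy[building] = True
+ ```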
180
+
181
+ Assemble. Conversely, in an assemble interaction, repositioning pieces of the model on the map serves as a mechanism for constructing new visualizations that selectively reveal information associated with individual pieces while still retaining a fixed spatial layout. We explore this concept in our movement visualization by allowing viewers to clear the tabletop of all models, then selectively re-add buildings to reveal only the paths that pass through all of them (Figure 9b). These new views make it possible to examine distinct subsets of the data, letting viewers examine specific flow paths and bottlenecks on campus, while reducing clutter both on the tabletop and in the visualization.
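+
+ The corresponding filter can be sketched as follows, reusing the hypothetical footprint polygons from the earlier sketches: a path is shown only if it visits every building currently placed on the table.
+
+ ```python
+ from shapely.geometry import Point  # footprints as in the earlier sketches
+
+ def paths_through_all(paths, placed_ids, buildings):
+     """Keep only the paths that pass through every placed building."""
+     shown = []
+     for path in paths:
+         visited = {name for name, poly in buildings.items()
+                    if any(poly.contains(Point(p)) for p in path)}
+         if set(placed_ids) <= visited:
+             shown.append(path)
+     return shown
+ ```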
182
+
183
+ Extract. In an extract interaction, pieces of the model can be repositioned to create new visualizations which ignore the spatial constraints of the original model, allowing viewers to create dramatically different visual configurations. By extracting models from their geospatial context and placing them into a more flexible space, viewers can surface additional information that might be hidden or occluded in a spatial layout. In both of our example visualizations, we support these interactions by including a work area to the right of the main map. In the movement visualization, we use this extract interaction to display building names and encode information about the number of paths that pass through the building. Meanwhile, in the climate visualization, placing objects into the work area reveals data-driven shadows which simultaneously display information about that building's heating, cooling, electricity, and water use (Figure 9c). Viewers can also re-position buildings in the work area to create clusters, orderings, and other layouts. Because the models retain the recognizable form of the original building, they remain easy to identify and reason about even when removed from their original geospatial locations. As in prior systems like reacTable [17], tangible models could also be used to dynamically construct new kinds of charts, including network visualizations.
184
+
185
+ ![01963e60-e0e9-74e6-9ce9-e52ff4b3e536_6_152_1077_1496_398_0.jpg](images/01963e60-e0e9-74e6-9ce9-e52ff4b3e536_6_152_1077_1496_398_0.jpg)
186
+
187
+ Figure 8: Screenshots of our visualizations without the physical model. The campus climate visualization (left) showing daily heating (red) and cooling energy (blue) for individual campus buildings with that day's temperature gradient in the background. The work area at right shows heating, cooling, water, and electrical usage for specific buildings. The movement visualization (right) overlays movement paths with occupancy data from inside the buildings, shown here with paths visible both inside and outside of buildings.
188
+
189
+ ![01963e60-e0e9-74e6-9ce9-e52ff4b3e536_6_151_1642_1499_398_0.jpg](images/01963e60-e0e9-74e6-9ce9-e52ff4b3e536_6_151_1642_1499_398_0.jpg)
190
+
191
+ Figure 9: Four interaction techniques (reveal, assemble, extract, and reorient) that use manipulation of physical models to interactively control the layout and detail of the visualizations.
192
+
193
+ Reorient. Similarly, in a reorient interaction, pieces of the model can be rotated on the tabletop to provide additional input to the system. While this may be impossible in geospatial layouts where pieces appear close together, the rotation of pieces in non-spatial layouts can provide a rich, continuous input mechanism associated directly with a specific building. In our movement visualization, we examine how these rotation interactions could be used to filter the underlying data based on direction of travel (Figure 9d). Viewers can rotate models that they have placed in the work area like dials. These buildings then serve as simple angular selection widgets [25] that filter the main visualization to show only the traffic that passes through that building in a specific range of orientations.
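+
+ A minimal sketch of the angular filter, reusing the (p0, p1, bearing) outdoor segments from the earlier aggregation sketch; the +/- 30 degree tolerance is an assumption for illustration, not a value from our system.
+
+ ```python
+ def angular_filter(segments, theta_degrees, tolerance=30.0):
+     """Keep segments whose travel bearing lies within the tolerance of theta."""
+     def circular_diff(a, b):
+         return min((a - b) % 360, (b - a) % 360)
+     return [s for s in segments if circular_diff(s[2], theta_degrees) <= tolerance]
+ ```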
194
+
195
+ ## 6 DISCUSSION
196
+
197
+ Our ongoing process of design, reflection, consultation, and validation surfaced a variety of implications for future systems that integrate visualizations and architectural models. In our discussion, we offer four takeaways for layering multiple data sets with physical models. While we have discussed the potential for simultaneous worlds in the context of architectural models, we also highlight how the concept may hold promise in other application domains.
198
+
199
+ ### 6.1 Models and Data Granularity
200
+
201
+ Physical models are static objects limited to the scale at which they are fabricated, which in turn limits the scale and granularity of the data visualizations they anchor. This can present challenges if the level of detail of the data and visualization are not compatible.
202
+
203
+ In our energy visualizations we map energy use to the color of the building outline on a building-by-building basis. This suits the granularity of the current data, which is monitored by a single meter in each building. Both the energy managers and sustainability directors preferred this scale because it matched the scale they used to analyze the university systems. They also emphasized the value of seeing energy use data together with a building's facade and volume, since this combination reveals relationships between energy use and size that are not visible in spreadsheet data, with the energy manager noting, "we never see the data this way, it brings up other things to think about."
204
+
205
+ The architects, however, questioned the usefulness of visualizing data for entire buildings, as they felt that seeing information on a room-by-room basis would be more likely to show inefficiencies and other issues. This way, operators who are used to a "normal pattern of usage" for each building could see outliers or anomalies within each building. Additionally, one architect suggested that the system might be used as a control panel in other applications, for example, for airport security and logistics, again suggesting that showing data on a floor-by-floor or room-by-room basis might be more useful for detecting patterns.
206
+
207
+ However, using models as a substrate for more granular data poses several practical challenges. For example, surfacing more granular data on a campus-scale model composed of individual buildings would likely reduce designers' ability to use models as interactive controls, since each building would now visualize multiple data points. Conversely, breaking models into smaller pieces makes them more difficult to manipulate and to track. In these cases, designers may need to explore hybrid solutions that use models to show data at one level of abstraction but support exploration of more granular data using other representations. For example, a tabletop system like ours might show building-level aggregates on the model itself, while displaying floor-by-floor or room-by-room data in the work area next to the model.
208
+
209
+ ### 6.2 Optimizing City-Scale Systems
210
+
211
+ Campus, city, and neighborhood-scale architectural models present opportunities for understanding and evaluating larger meta-systems together with their component pieces. Interactive and modular physical models can help facilitate this interplay-but their physical nature limits the potential for analyses that bridge more distant scales.
212
+
213
+ In our discussions, the campus operations manager appreciated the combined visualizations with multiple types of data at one time, noting that "we can see the sustainability of the campus as a whole, which compliments and expands our understanding of energy use for whole building optimization, which considers energy management of the campus as a whole." Going forward, the goal of the university is to identify excess energy and store it for future use or transfer to other areas nearby when required. For example, if a building such as an ice skating arena is producing excess heat, the operations department might store or transfer the heat to use in other nearby buildings.
214
+
215
+ Integrated visualizations like our climate tool could make it easy to monitor where on campus excess cooling or heating is occurring, and which buildings close by are in need of that type of energy. The facilities manager noted that the geospatial layout made it easy to see the energy loop from the plant to each building and back again. These considerations are especially important when evaluating potential building upgrades, where even incremental reductions in energy use in large, well-designed buildings can translate into substantial overall savings. Like the architects' calls for more granularity, the operations manager considered aspects at a different scale than the one we designed our model for: they needed to keep an eye on the big picture, and were thus more inclined to consider campus design as a system. For architectural models, this suggests exploring ways that multiple pieces of a model might connect physically so they can be treated as one.
216
+
217
+ ### 6.3 Collaboration around Scale Models
218
+
219
+ The use of tabletop tools as collaboration platforms is well-documented [26, 27] and our feedback supports these findings. Moreover, our interviews with stakeholders and the responses to our public demos suggest that situating visualizations using a model provides a strong and engaging entry point and encourages viewers to treat the tabletop as a collaborative tool. Additionally, the administrators we worked with felt that the combination of visualization and model would make it easier for non-technical users to understand the relationship between buildings' form and their energy use. Administrators also highlighted the potential for using the model as a "control center" on which to visualize diverse situational and operational data. They noted that such a display could be used by multiple departments on campus to help develop a better understanding of their operations and thus aid interdepartmental meetings. Similarly, the operations and sustainability managers emphasized how this kind of model could help frame public discussions around proposed buildings and retrofits, allowing stakeholders to more readily appreciate both the physical interplay of buildings and their relative energy footprints.
220
+
221
+ ### 6.4 Using Scale Models for Exploration and Simulation
222
+
223
+ Our prototypes all used detailed models of the current campus to visualize historical and real-time data. However, our discussions with architects highlighted the potential for models to support the exploration of predictive simulations and future scenarios, and we consider this a particularly rich area for future work. In particular, one of the architects surfaced an essential distinction between models that reflect the current state of the world and those that embody alternative possibilities. Currently, most tangible tabletop systems designed for planning or educational purposes [27] represent abstract scenarios using generic or primitive tangible forms. These abstract scenarios can allow participants to model and imagine many different potential campus designs free from the constraints of the current configuration. However, energy managers and other administrators are interested in how the model reflects the campus operationally, as it is in real life.
224
+
225
+ Considering how to support both monitoring and simulation scenarios is an ongoing challenge with numerous trade-offs. For example, representing buildings using generic geometric primitives frees viewers to imagine a variety of possible design alternatives, but may miss opportunities for situating predictive simulations in an accurate architectural context. Going forward, modular, reconfigurable, or shape-changing models have the potential to address these challenges, allowing the same models to transition between abstract and realistic forms as needed. This suggests the next exciting steps for integrating architectural models and tabletop user interfaces might look to recent work in shape-changing interfaces [23]. We encourage future work to understand how we might build models that integrate possibilities for simulating "what might be".
226
+
227
+ ## 7 CONCLUSION
228
+
229
+ We present a design exploration examining how physical architectural models on tabletops can anchor visualizations of multiple complementary datasets. Our explorations detail how physical models offer new potential for supporting observations that integrate contextual and ambient information from multiple "Simultaneous Worlds". Specifically, our work highlights how physical models can help situate and compose multiple visualizations together, while also serving as tangible tokens that allow viewers to manipulate and author new representations. The layering of heterogeneous data streams around a physical model creates new opportunities for situated data analysis, which we are actively exploring in our continuing research. Through both our design explorations and reflections on the development process, we hope to lay the groundwork for even richer integration between visualizations, physical architectural models, and interactions, including ones that extend beyond the context of tabletops. Moreover, we hope this work provides inspiration for other forms of physical data models that support situated and embedded visualization with embodied and fluid interactions.
230
+
231
+ ## REFERENCES
232
+
233
+ [1] K. Allahverdi, H. Djavaherpour, A. Mahdavi-Amiri, and F. Samavati. Landscaper: A modeling system for 3D printing scale models of landscapes. Computer Graphics Forum, 37(3):439-451, 2018. doi: 10.1111/cgf.13432
234
+
235
+ [2] L. Alonso, Y. R. Zhang, A. Grignard, A. Noyman, Y. Sakai, M. ElKatsha, R. Doorley, and K. Larson. Cityscope: A data-driven interactive simulation tool for urban design. Use case Volpe. In A. J. Morales, C. Gershenson, D. Braha, A. A. Minai, and Y. Bar-Yam, eds., Unifying Themes in Complex Systems IX, pp. 253-261. Springer, Cham, 2018. doi: 10.1007/978-3-319-96661-8_27
236
+
237
+ [3] P. Baudisch, T. Becker, and F. Rudeck. Lumino: Tangible blocks for tabletop computers based on glass fiber bundles. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '10, pp. 1165-1174. ACM, New York, NY, USA, 2010. doi: 10.1145/1753326.1753500
238
+
239
+ [4] J. Buur, S. S. Mosleh, and C. Fyhn. Physicalizations of big data in ethnographic context. Ethnographic Praxis in Industry Conference Proceedings, 2018(1):86-103, 2018. doi: 10.1111/1559-8918.2018.01198
240
+
241
+ [5] Z. Chen, H. Qu, and Y. Wu. Immersive urban analytics through exploded views. In Workshop on Immersive Analytics: Exploring Future Visualization and Interaction Technologies for Data Analytics, pp. 1-5. ACM, Niagara Falls, ON, CA, 2017.
242
+
243
+ [6] A. Clark. Being there: Putting brain, body and world together again. MIT Press, Cambridge, MA, US, 1997.
244
+
245
+ [7] P. Dalsgaard and K. Halskov. Tangible 3d tabletops: Combining tangible tabletop interaction and 3d projection. In Proceedings of the 7th Nordic Conference on Human-Computer Interaction: Making Sense Through Design, NordiCHI '12, pp. 109-118. ACM, New York, NY, USA, 2012. doi: 10.1145/2399016.2399033
246
+
247
+ [8] B. Ens, S. Goodwin, A. Prouzeau, F. Anderson, F. Y. Wang, S. Gratzl, Z. Lucarelli, B. Moyle, J. Smiley, and T. Dwyer. Uplift: A tangible and immersive tabletop system for casual collaborative visual analytics. IEEE Transactions on Visualization and Computer Graphics, 27(2):1193-1203, 2020.
248
+
249
+ [9] G. W. Fitzmaurice, H. Ishii, and W. A. S. Buxton. Bricks: Laying the foundations for graspable user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '95, pp. 442-449. ACM Press/Addison-Wesley Publishing Co., New York, NY, USA, 1995. doi: 10.1145/223904.223964
250
+
251
+ [10] S. Follmer, D. Leithinger, A. Olwal, A. Hogge, and H. Ishii. inFORM: Dynamic physical affordances and constraints through shape and object actuation. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, UIST '13, pp. 417-426. ACM, New York, NY, USA, 2013. doi: 10.1145/2501988.2502032
252
+
253
+ [11] P. Galpern, A. Ladle, F. A. Uribe, B. Sandalack, and P. Doyle-Baker. Assessing urban connectivity using volunteered mobile phone GPS locations. Applied Geography, 93:37-46, 2018. doi: 10.1016/j.apgeog.2018.02.009
254
+
255
+ [12] C. Hull and W. Willett. Building with data: Architectural models as inspiration for data physicalization. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, pp. 1217-1228. ACM, New York, NY, USA, 2017. doi: 10.1145/3025453.3025850
256
+
257
+ [13] C. Hummels and J. van Dijk. Seven principles to design for embodied sensemaking. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction, TEI '15, pp. 21-28. ACM, New York, NY, USA, 2015. doi: 10.1145/2677199.2680577
258
+
259
+ [14] H. Ishii and B. Ullmer. Tangible bits: Towards seamless interfaces between people, bits and atoms. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, CHI '97, pp. 234-241. ACM, New York, NY, USA, 1997. doi: 10.1145/258549.258715
260
+
261
+ [15] Y. Jansen, P. Dragicevic, P. Isenberg, J. Alexander, A. Karnik, J. Kildal, S. Subramanian, and K. Hornbæk. Opportunities and challenges for data physicalization. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, pp. 3227-3236. ACM, New York, NY, USA, 2015. doi: 10.1145/2702123.2702180
262
+
263
+ [16] W. Javed and N. Elmqvist. Exploring the design space of composite visualization. In 2012 IEEE Pacific Visualization Symposium, pp. 1-8. IEEE, New York, NY, US, Feb 2012. doi: 10.1109/PacificVis.2012.6183556
264
+
265
+ [17] S. Jordà, G. Geiger, M. Alonso, and M. Kaltenbrunner. The reactable: Exploring the synergy between live music performance and tabletop tangible interfaces. In Proceedings of the 1st International Conference on Tangible and Embedded Interaction, TEI '07, pp. 139-146. ACM, New York, NY, USA, 2007. doi: 10.1145/1226969.1226998
266
+
267
+ [18] V. Maquil, O. Zephir, and E. Ras. Creating metaphors for tangible user interfaces in collaborative urban planning: Questions for designers and developers. In J. Dugdale, C. Masclet, M. A. Grasso, J.-F. Boujut, and P. Hassanaly, eds., From Research to Practice in the Design of Cooperative Systems: Results and Open Challenges, pp. 137-151. Springer London, London, UK, 2012.
268
+
269
+ [19] K. Marriott, F. Schreiber, T. Dwyer, K. Klein, N. H. Riche, T. Itoh, W. Stuerzlinger, and B. H. Thomas. Immersive Analytics, vol. 11190. Springer, Switzerland, 2018.
270
+
271
+ [20] D. Offenhuber. Data by proxy - material traces as autographic visualizations. IEEE Transactions on Visualization and Computer Graphics, 26(1):98-108, 2019. doi: 10.1109/TVCG.2019.2934788
272
+
273
+ [21] D. Offenhuber and O. Telhan. Indexical visualization - the data-less information display. In U. Ekman, J. D. Bolter, L. Diaz, M. Søndergaard, and M. Engberg, eds., Ubiquitous Computing, Complexity and Culture, vol. 288. Routledge, New York, NY, US, 2015. doi: 10.4324/9781315781129
276
+
277
+ [22] B. Piper, C. Ratti, and H. Ishii. Illuminating clay: A 3-d tangible interface for landscape analysis. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '02, pp. 355-362. ACM, New York, NY, USA, 2002. doi: 10.1145/503376.503439
278
+
279
+ [23] M. K. Rasmussen, E. W. Pedersen, M. G. Petersen, and K. Hornbæk. Shape-changing interfaces: A review of the design space and open research questions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12, pp. 735-744. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2207676.2207781
280
+
281
+ [24] H. Ren and E. Hornecker. Comparing understanding and memorization in physicalization and vr visualization. In Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction, TEI '21, pp. 1-7, 2021. doi: 10.1145/3430524.3442446
282
+
283
+ [25] R. Scheepens, C. Hurter, H. Van De Wetering, and J. J. Van Wijk. Visualization, selection, and analysis of traffic flows. IEEE Transactions on Visualization and Computer Graphics, 22(1):379-388, 2016. doi: 10.1109/TVCG.2015.2467112
284
+
285
+ [26] S. D. Scott and S. Carpendale. Theory of tabletop territoriality. In Tabletops - Horizontal Interactive Displays, pp. 375-406. Springer, New York, NY, USA, 2010.
286
+
287
+ [27] O. Shaer and E. Hornecker. Tangible user interfaces: Past, present, and future directions. Foundations and Trends in Human-Computer Interaction, 3(1-2):4-137, 2010. doi: 10.1561/1100000026
288
+
289
+ [28] D. Smit, M. Murer, V. van Rheden, T. Grah, and M. Tscheligi. The evolution of a scale model as an impromptu design tool. In Proceedings of the 2017 Conference on Designing Interactive Systems, DIS '17, pp. 233-245. ACM, New York, NY, USA, 2017. doi: 10.1145/3064663.3064797
290
+
291
+ [29] B. Ullmer and H. Ishii. The metaDESK: Models and prototypes for tangible user interfaces. In Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology, UIST '97, pp. 223-232. ACM, New York, NY, USA, 1997. doi: 10.1145/263407.263551
292
+
293
+ [30] J. Underkoffler and H. Ishii. Urp: A luminous-tangible workbench for urban planning and design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '99, pp. 386-393. ACM, New York, NY, USA, 1999. doi: 10.1145/302979.303114
294
+
295
+ [31] R. van Krevelen and R. Poelman. A Survey of Augmented Reality Technologies, Applications and Limitations. International Journal of Virtual Reality, 9(2):1-20, Nov. 2015.
296
+
297
+ [32] S. White and S. Feiner. Sitelens: Situated visualization techniques for urban site visits. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '09, pp. 1117-1120. ACM, New York, NY, USA, 2009. doi: 10.1145/1518701.1518871
298
+
299
+ [33] W. Willett, Y. Jansen, and P. Dragicevic. Embedded data representations. IEEE Transactions on Visualization and Computer Graphics, 23(1):461-470, Jan 2017. doi: 10.1109/TVCG.2016.2598608
300
+
301
+ [34] C. Williams, X. D. Yang, G. Partridge, J. Millar-Usiskin, A. Major, and P. Irani. TZee: Exploiting the lighting properties of multi-touch tabletops for tangible 3D interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '11, pp. 1363-1372. ACM, New York, NY, USA, 2011. doi: 10.1145/1978942.1979143
302
+
303
+ [35] J. Zimmerman, J. Forlizzi, and S. Evenson. Research through design as a method for interaction design research in HCI. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '07, pp. 493-502. Association for Computing Machinery, New York, NY, USA, 2007. doi: 10.1145/1240624.1240704
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/SIclhxYV6f9/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,239 @@
1
+ § SIMULTANEOUS WORLDS: SUPPORTING FLUID EXPLORATION OF MULTIPLE DATA SETS VIA PHYSICAL MODELS
2
+
3
+ Category: Research
4
+
5
+ <graphics>
6
+
7
+ Figure 1: Examples of three opportunities for integrating visualizations and architectural scale models on tabletops. (Left) Satellite imagery shown situated with a physical model. (Center) Multiple data visualizations composed together using the geometry and position of a model. (Right) Individual buildings from a model are used to manipulate and author visualizations.
8
+
9
+ § ABSTRACT
10
+
11
+ We take the well-established use of physical scale models in architecture and identify new opportunities for using them to interactively visualize and examine multiple streams of geospatial data. Overlaying, comparing, or integrating visualizations of complementary data sets in the same physical space is often challenging given the constraints of various data types and the limited design space of possible visual encodings. Our vision of "simultaneous worlds" uses physical models as a substrate upon which visualizations of multiple data streams can be dynamically and concurrently integrated. To explore the potential of this concept, we created three design explorations that use an illuminated campus model to integrate visualizations about building energy use, climate, and movement paths on a university campus. We use a research through design approach, documenting how our interdisciplinary collaborations with domain experts, students, and architects informed our designs. Based on our observations, we characterize the benefits of models for 1) situating visualizations, 2) composing visualizations, and 3) manipulating and authoring visualizations. Our work highlights the potential of physical models to support embodied exploration of spatial and non-spatial visualizations through fluid interactions.
12
+
13
+ Keywords: Information visualization, interactive surfaces, data physicalization, architectural models
14
+
15
+ Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods
16
+
17
+ § 1 INTRODUCTION
18
+
19
+ Although data sets are often examined in isolation, they are rarely generated that way. Rather, every piece of data represents one small element in a larger picture and captures only one of many perspectives of the places, people, and phenomena it seeks to characterize. Overlaying, comparing, or integrating visualizations of multiple complementary data sets in the same physical space is often challenging [8], given the unique constraints of various data types and the limited design space of possible visual encodings. Moreover, for data sets that reference the physical world, much of the surrounding context remains unrecorded, and can be appreciated only by visualizing the data in-situ, where physical and temporal scales can make observation difficult. For example, it is impossible to simultaneously experience summer and winter climate conditions at the same location. Similarly, in the physical world, it is impossible to observe large scale systems, such as an entire campus or urban area, directly. As a result, it is difficult for viewers to examine many data sets at once, and viewers often miss out on ambient and environmental data that might provide context and support interpretation.
20
+
21
+ Our work proposes the concept of "simultaneous worlds" (Figure 2), which highlights how physical architectural models can provide context for and support transitions between multiple data visualizations. To explore the potential of this concept, we built a tangible tabletop system using scale models of a university campus. Our tabletop system juxtaposes visualizations of operational data such as heating and cooling costs alongside ambient and contextual data sets including environmental conditions, occupancy and movement logs, and historical aerial photos. The interactive table uses rear-projection to overlay visualizations of this data with transparent trackable architectural models.
22
+
23
+ We explored this particular system with several sets of stakeholders. These included campus energy managers (who were interested in contextualizing data about energy use and weather), architects (who were interested in understanding patterns of human movement on campus), and students (for whom these kinds of physical models could increase awareness around important topics like energy use). In our explorations, we wanted to emphasize the broad utility of our tabletop system for use with other data visualizations including human movement and occupancy data.
24
+
25
+ For our first contribution, we examine three avenues via which physical architectural models can support data exploration and showcase the benefits they provide (Figure 1). We explore how architectural models can situate data, improving viewers' ability to identify locations and connect data to them. We then highlight how visualization developers can use models to anchor composite visualizations that combine multiple datasets and visualizations together in the same space. Finally, we show how physical models can support fluid, tangible interactions which allow viewers to explore and reconfigure spatial visualizations. We then illustrate these concepts via two example data analysis tools: one visualizing campus climate and energy use, and another visualizing human movement across the university.
26
+
27
+ Our second contribution is the documentation of our design process using a research through design approach. We conducted this research as an iterative design-oriented exploration of the potential of simultaneous worlds. We collected reflections from a variety of stakeholders, including campus architects and energy managers, who participated in the design of the system. Throughout the process, we gathered reflections, framings, and opportunities, using these qualitative and observational practices to guide our research, resulting in a set of framings and prototypes that more deeply illustrate the potential of architectural models to serve as tangible and context-specific interfaces for data visualizations.
28
+
29
+ <graphics>
30
+
31
+ Figure 2: Multiple geospatial and visualization layers can all be visualized in the context of the same physical architectural model. These layers serve as "Simultaneous Worlds", supporting integrated exploration and reasoning.
32
+
33
+ Our initial findings show that the models provide immediate and familiar symbols that allow the user to quickly understand visual encodings in a variety of different visualizations without annotation or lengthy explanation, and provide additional benefits related to the geometric and spatial characteristics of the model. We conclude with a discussion on additional possible application areas and considerations for applying the concept of simultaneous worlds for visualization researchers and designers of tabletop systems.
34
+
35
+ § 2 RELATED WORK
36
+
37
+ Traditionally, most data visualization tools have focused on creating new visual representations that support the intentional exploration of specific data sets of interest. Yet, in practice, interpreting data and making informed decisions often calls for additional context-which situates the data with respect to locations, events, and phenomena not captured in the data itself. To address this, we explore how physical models can serve as a substrate for data analysis tasks, providing a common set of anchors upon which to display both operational data that drives analyses and ambient data which provides context to them. Our work sits at the intersection of research on physical architectural models, situated data visualizations, and tangible interfaces.
38
+
39
+ § 2.1 ARCHITECTURAL MODELS AND TABLETOPS
40
+
41
+ Digitally-augmented physical and architectural models have a relatively long history in HCI, including examples like the metaDESK [29] and URP [30], which provide some of the earliest demonstrations of the value of tangible computing. A diverse range of subsequent projects have also explored how physical modeling [2, 22], shape-changing displays [10], and augmented architectural models [28] can support physical planning, drive social engagement, and present data specific to urban settings. The classic tabletop literature has established the collaborative advantages of physical tabletop systems, allowing for shared ownership of the territory of the work space, as well as ease of use in navigation, locomotion, and turn-taking [26].
42
+
43
+ Although the current trend in urban analytics focuses on the exploration of 3D digital models in virtual reality, physical models provide an immersive experience of data within the context of a "real-world" environment that doesn't rely on VR equipment. Chandler et al. characterize the benefits of analysing urban data within a 3D model over 2D maps [19, Chapter 11] but also note some of the challenges associated with supporting collaborative discussion in virtual environments. Physical models may provide useful alternatives to these tools in a variety of application areas-including maps for emergency response, real estate development, and neighborhood planning which could leverage the collaborative benefit of tabletop models with site-specific data visualizations.
44
+
45
+ § 2.2 SITUATED VISUALIZATIONS
46
+
47
+ Work on situated visualizations (visualizations displayed in related environments [32]) and embedded visualizations (visualizations deeply connected to specific spaces, objects, and entities [33]) highlight how visualizing data in the physical world can help provide environmental and ambient context like weather and traffic conditions. Mobile and augmented reality visualization tools [31, 32], which overlay data on top of physical referents in a viewer's surroundings represent one popular approach. Viewers in physical spaces can also observe environmental traces like paths, physical wear, and decay-and these traces give a sense of ongoing ambient processes-or create indexical visualizations [21] and Autographic Visualizations [20] that expressly illustrate ambient data in the environment.
48
+
49
+ However, in many cases, the distance, size, or physical inaccessibility of relevant environments can make it difficult or impossible to display data on top of them to support in-situ analysis and decision-making. Moreover, ambient data that could provide context about spaces and phenomena may not be visible to the naked eye and may span larger timescales or geographic extents than a viewer can reasonably observe. In response, we examine how architectural scale models [5] can serve as facsimiles or proxies for real-world environments [33], providing anchors upon which both operational and ambient data can be examined and integrated.
50
+
51
+ § 2.3 TANGIBLES ON TABLETOPS
52
+
53
+ According to Ishii, "the key idea of Tangible User Interfaces (TUIs) is to give physical forms to digital information...to serve as both representation and controls for their digital counterparts" [14]. Many projects embody either representation or control, but not both. Most often, tangibles are used as tools for interaction and control, such as TZee objects [34] and Lumino [3]. Other projects in architecture and urban planning also consider tangibles. The MIT CityScope uses projection onto Lego objects for urban planning and other scenarios [2], while Maquil et al.'s ColorTable [18] uses simple primitive forms both to represent generic road and wall forms and to serve as input devices. The generic forms used in ColorTable, however, do not show important details such as height, context, or real-world scale - three variables identified in immersive analytics [19] as essential for urban design analysis. Within most of the existing tangible tabletop projects, there is a missed opportunity to encode meaning in the material and geometric properties of the object. More recent systems like Ens et al.'s Uplift [8], meanwhile, have mostly focused on physical models as a background for augmented reality visualizations above the tabletop. Ren and Hornecker further support the use of physical models over VR experiences in their 2021 study, which revealed faster, more confident answers and long-term memorability with physical models for data visualization [24]. Our approach is to use site-specific architectural models to display and contain the visualizations of a specific place. Like other projects, we track the models to allow people to use them as interaction handles, and by doing so, control aspects of the displayed visualizations.
54
+
55
+ From a technical perspective, occlusion is a significant problem with top-down projection systems, not only because the arms of the user block the projected images, but also because the tangibles occlude whatever is on the illuminated surface beneath them. Most TUIs use visible markers for detection by a computer vision system, which requires opaque objects and top-mounted projection. Tangible 3D Tabletops by Dalsgaard et al. [7] uses two projectors to project images onto 3D cubes to represent buildings, plus a bottom projector to project visuals below. Using this system, the designer can project architectural details onto the sides of the blocks; however, image quality is limited by the resolution of the projector and the size of the cubes.
56
+
57
+ § 2.4 DATA PHYSICALIZATION
58
+
59
+ Data Physicalization [15] is an emerging research area that studies the use of material and geometric encodings to capture data. While this is a closely related area, we do not consider this work a data physicalization project as our simultaneous worlds prototypes never encode data using the physical form or properties of the model. Instead, the data visualizations remain strictly 2D while the physical models provide context, define the shape of the visualizations, and serve as interactive handles for them.
60
+
61
+ § 3 SIMULTANEOUS WORLDS
62
+
63
+ We introduce the concept of "simultaneous worlds" in which architectural models and data visualizations inhabit the same physical space. Using a research through design framework [35], we document our iterative design process for a 3D interactive campus model. Based on ongoing conversations with energy, building, and operations managers, as well as students and architects over the course of approximately two years, we built and revised two interactive model systems (Figure 5). We also demonstrated the system publicly seven times during its development, including as part of a citywide art and science festival, at department and educational showcase events, and lab demo days. In all cases, our intent was to examine the ability of the model to facilitate an understanding of the data more quickly, and to expand the possibilities of connections between energy use and their own experiences on campus, whether as a student or an administrator. Based on our observations, the paper seeks to highlight interesting potential areas of opportunity for integrating architectural models and visualizations. Through this lens, we illustrate how "simultaneous worlds" offers opportunities for situating spatial and non-spatial datasets and supporting complex reasoning about real world spaces.
64
+
65
+ Our work illustrates the potential for even tighter integration between data visualization and more complex architectural models than URP [30] and CityScope [2], which highlighted the potential for using simple building shapes to serve as a canvas for an interface to data and simulations. In particular, we highlight how translucent architectural models can provide a substrate for compositing visualizations of multiple complementary datasets including climate information, building automation logs, and human movement traces on tabletop displays. While each distinct information model or representation can exist on its own, we demonstrate how the physical geometry of the models can help connect related visualizations in an integrated fashion. We conducted three design explorations that examine the potential for physical architectural models to help situate, compose, and support interaction with data visualizations.
66
+
67
+ § 3.1 TABLETOP IMPLEMENTATION AND SETUP
68
+
69
+ We explored these concepts in the context of a bottom-projected tabletop which we built to accommodate a $26 \times 46$ inch, 1:1700 scale architectural model of a $2.13\,\mathrm{km}^2$ university campus. This model provided a platform on which to visualize a wide variety of readily-available environmental, social, and infrastructure-related datasets.
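+ As a quick sanity check on this scale arithmetic, the conversion works out as follows (a small Python sketch; the gap between the computed coverage and the stated campus area is presumably margin around the campus footprint):

```python
INCH = 0.0254  # meters per inch
SCALE = 1700   # 1:1700 scale factor

table_w_m = 26 * INCH * SCALE  # real-world distance covered by the 26-inch side
table_h_m = 46 * INCH * SCALE  # real-world distance covered by the 46-inch side
print(f"{table_w_m:.0f} m x {table_h_m:.0f} m ~= {table_w_m * table_h_m / 1e6:.2f} km^2")
# -> roughly 1123 m x 1986 m ~= 2.23 km^2 of coverage for the 2.13 km^2 campus
```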
70
+
71
+ § 3.2 ACRYLIC CAMPUS MODEL
72
+
73
+ The physical table consists of a projector, a laminated screen with a base map etched into the surface, and a frame made of 80/20 building materials. Our system (Figure 3-left and middle) uses an acrylic model placed on the illuminated tabletop which displays a variety of different spatial visualizations. We constructed the scale model using a mix of digital fabrication and hand-building techniques. The unique outline of every floor of each of the buildings was laser cut from a 1/8 inch acrylic sheet (which, at this scale, was roughly equivalent to the height of one floor). We then stacked and glued these layers together with a clear adhesive. We also etched the surface of the tabletop to include the footprints of each building along with roads, parking lots, trails, and other important physical elements of campus architecture. The tabletop is bottom-projected, with the visualizations visible through and around the model. Due to the translucent and internally reflective nature of the acrylic, the visualizations displayed on the surface reflect up through the building masses, filling the volumes with color.
74
+
75
+ <graphics>
76
+
77
+ Figure 3: Detail images showing system diagram (left), photo of table (middle), and close-up of building tracker markers (right)
78
+
79
+ <graphics>
80
+
81
+ Figure 4: Scale model with our campus movement visualization.
82
+
83
+ § 3.3 TOUCH SURFACE AND TANGIBLE INTERACTION
84
+
85
+ To track the position of buildings on the tabletop, we developed a custom tracking system using a single Microsoft Kinect V2 and OpenCV 3 on the Unity game engine. The approach is similar to motion tracking systems like the Vicon or OptiTrack. We attach between three and seven small retro-reflective stickers to each of the buildings as tracking markers (Figure 3-right), then illuminate and track them using a Kinect mounted immediately above the tabletop. We use k-means clustering in OpenCV to group the marker positions detected by the Kinect. We then use OpenCV's machine learning tools to train a recognizer to identify buildings based on their total number of markers, the positions of the markers on their perimeters, and the compactness of the cluster. This process estimates the total number of tracked buildings on the table, and outputs positions and IDs for individual recognized buildings. The system broadcasts update events via WebSockets whenever a building is placed on the table, removed from the table, or changes position.
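+ The sketch below illustrates the clustering step in simplified form. It is a minimal Python/OpenCV analogue of the pipeline described above, not our Unity implementation: it assumes the number of buildings k is already known, and it prints JSON events in place of the WebSocket broadcast.

```python
import json
import numpy as np
import cv2

def cluster_markers(points, k):
    """Group 2D marker detections into k building candidates with OpenCV k-means.
    `points` is an (N, 2) float32 array of marker positions from the depth camera."""
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(points, k, None, criteria, 10, cv2.KMEANS_PP_CENTERS)
    return [points[labels.ravel() == i] for i in range(k)], centers

def describe(cluster):
    """Features a recognizer might use: marker count and cluster compactness."""
    center = cluster.mean(axis=0)
    compactness = float(np.mean(np.linalg.norm(cluster - center, axis=1)))
    return {"n_markers": int(len(cluster)), "compactness": compactness}

# A toy frame containing two buildings' worth of reflective markers.
frame = np.float32([[10, 10], [12, 11], [11, 13], [80, 78], [82, 80], [79, 81], [81, 77]])
clusters, centers = cluster_markers(frame, k=2)
for cluster, center in zip(clusters, centers):
    event = {"type": "building_update", "position": center.tolist(), **describe(cluster)}
    print(json.dumps(event))  # the real system broadcasts these over a WebSocket
```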
86
+
87
+ § 3.4 VISUALIZATIONS
88
+
89
+ We implemented two visualization systems for the model. The first (Figure 1-center) is an energy use visualization, developed using Processing, which combines building automation logs and ambient weather data from the university campus collected over a two year period. The second movement visualization (Figure 4), developed using HTML, Javascript, and Mapbox GL, showcases location data from several hundred university students collected between 2013 and 2017. We describe both visualizations in more detail later in the paper.
90
+
91
+ § 4 COLLABORATIONS
92
+
93
+ Throughout the design process of the tabletop model, we systematically consulted with domain experts including campus operations and energy managers at major project milestones (Figure 5). We collected feedback from these experts via periodic rounds of semi-structured interviews as well as informal demo sessions. After the final system was complete, we also invited four architects to reflect on the system, and discussed the impact of using physical architectural models for visualizing campus-specific data. We also demonstrated the system publicly seven times during its development, including as part of a citywide art and science festival, at department and educational showcase events, and during lab demo days.
94
+
95
+ | Design Phase | 1st Prototype (4 months) | 2nd Prototype (12 months) | Reflection (8 months) |
+ | --- | --- | --- | --- |
+ | Knowledge Production | define, discover, construct, synthesize | refine, re-construct, assess | reflect, synthesize |
+ | Collaborators | Energy Manager; Facilities Director; Sustainability Manager | Energy Manager; Operations Manager; Sustainability Director | Energy Manager; 4 Architects; 16 students; 7 public demos |
106
+
107
+ Figure 5: Design phases and knowledge gathered through collaborations in each phase.
108
+
109
+ § 4.1 COLLABORATIONS WITH DOMAIN EXPERTS
110
+
111
+ We consulted repeatedly with domain experts including the university's energy manager, operations managers, and personnel from the office of sustainability. We met with a total of five experts over two years, including multiple iterations with the energy manager and sustainability staff. These stakeholders provided access to initial raw data as well as consultation and feedback on the project, helping us to tailor the visualization design to their requirements.
112
+
113
+ In addition to this ongoing engagement, we held informal debriefing meetings near the end of the development process with members of the energy management team to collect additional feedback. We began each semi-structured interview with a demo of the current features and possible interactions, and collected responses and interactions of the participants through notes and video.
114
+
115
+ Our first interview with the campus energy manager was particularly influential, providing a deeper understanding of the campus's existing methods of energy data analysis and what the managers were looking for in a new visualization system for energy use data. The campus's existing web-based dashboard did not engage users or receive as much traffic as the team had hoped and staff felt the tool was unlikely to raise students' awareness of their energy use on campus. Additionally, the operations team discussed challenges they faced in stakeholder meetings with non-technical university administrators which were often grounded in static reports and spreadsheets. In particular, the energy manager highlighted the challenge of communicating different types of energy data, each with different units, and expressed a desire for visualizations that could communicate multiple variables simultaneously.
116
+
117
+ After the initial prototype was built, meetings with the university's energy manager, facilities director, and members of the office of sustainability also offered particularly fruitful insights. All staff responded positively to our initial environmental and energy visualizations, and provided detailed feedback which we used to refine the design. Throughout the design process, the initial prototypes functioned as "physical hypotheses" to test the feasibility of our concept and provide direction for future iterations.
118
+
119
+ § 4.2 COLLABORATIONS WITH ARCHITECTS
120
+
121
+ Near the end of the project, we also demonstrated the final system to four architects, who provided feedback about the use of site-specific models and data for public engagement. As with our previous engagements, we began with a demo of the system, then followed a semi-structured interview protocol. We tailored the rest of the conversations based on the background and expertise of each architect, and recorded audio of the conversations which we later transcribed.
122
+
123
+ § 4.3 ANALYSIS
124
+
125
+ Throughout the multi-year deployment, we used an ongoing qualitative synthesis approach in which two of the authors regularly reviewed new notes, interview transcripts, and feedback from collaborators. During this process, we maintained and updated a working set of top-level research themes. Over the first two phases, these emergent themes - as well as more specific input from our domain expert collaborators-guided our prototyping efforts and prompted our exploration of the potential for architectural models to support 1) situating, 2) composing, and 3) interacting with geospatial visualizations. In the third phase, we used the results from our interviews with architects and members of the public to refine our higher-level themes as well as identify further opportunities and challenges for integrating visualizations and architectural models. We also used our system as the basis for a small quantitative study, which we describe in section 5.2.
126
+
127
+ § 5 ARCHITECTURAL MODELS FOR DATA VISUALIZATION
128
+
129
+ In the following sections, we describe three unique opportunities for integrating physical architectural models and data visualizations on tabletops and illustrate these benefits via our system implementations. Within each section, we critically reflect on these opportunities using feedback and observations from throughout the design process.
130
+
131
+ § 5.1 SITUATING VISUALIZATIONS
132
+
133
+ The physical characteristics of a scale architectural model can preserve important details about their original referents (including the buildings' size, height, orientation, and layout) that could make it easier to reason about data from them. As such, situating visualizations within and on top of these models can help analysts retain many of the benefits of examining data in the original setting. Moreover, scale models can permit situated analysis and observations from scales and perspectives that are impossible to access in the physical world.
134
+
135
+ Including the geometric details of the building in terms of height, volume, and facade provides valuable information about a particular building, such as window and exit locations, which are vital for many types of urban design and architectural analysis [5]. Similarly, the empty space around the 3D model is also representative of places in the real world such as courtyards, parking lots, and other spaces that are familiar to the viewer through their experience of the campus. The area around a model surfaces different associations about a space and the buildings within it, and sets up relationships of inside/outside, boundaries, and other spatial relationships. Explorations like Allahverdi et al.'s Landscaper [1] and Buur et al.'s noise curves [4] highlight some of the advantages of incorporating physical representations of data with site-specific physical models. Both highlight the value of maps and models which serve as proxies for locations and make it possible to situate real-world data. However, they also emphasize how the lack of depth information in 2D maps can obscure important details that are relevant to analysis.
136
+
137
+ § 5.2 RECOGNIZABILITY OF MAPS AND PHYSICAL MODELS
138
+
139
+ To understand how the presence of a physical model might impact viewers' ability to interpret the campus layout, we conducted a between-subjects study in which we asked participants to use either a map or model to identify campus buildings. We recruited 16 participants (four female / twelve male, ages 21 to 42) half of whom had spent less than one year on the campus and half of whom had spent at least one year or more. Using either a map with outlines of all buildings on the campus (map condition) or the same map projected underneath our physical campus model (model condition), we gave participants two minutes to identify and name as many buildings as possible. We provided participants with paper strips listing the names of all campus buildings and asked them to place the names on the tabletop directly on top of the matching buildings. After two minutes had elapsed, a researcher counted the number of correctly-placed names.
140
+
141
+ <graphics>
142
+
143
+ Figure 6: Number of campus buildings correctly identified by participants using only the map (top) and participants using the physical model (bottom). Participants who had spent less than one year on campus appear in blue, while those with more than one year appear in orange. Error bars show 95% bootstrapped confidence intervals.
144
+
145
+ The results from our model study with 16 students (Figure 6) suggest that participants who had access to the model tended to identify campus buildings more accurately. While some participants fared poorly in both conditions, only one participant in the map condition was able to correctly identify more than 11 buildings. By contrast, five of the eight participants in the model condition were able to identify 14 or more. Anecdotally, individuals in the model condition reported that they were able to rely on the heights of buildings as well as their visual signatures, allowing them to more readily align their mental model of the campus with the representation on the tabletop.
146
+
147
+ These findings suggest that physical models can more easily serve as a stand-in for real-world geography, allowing viewers to understand the locations referenced in visualizations and helping them access their own mental model of those spaces, providing context that could help them interpret data. P6 noted that "the model helped me see what I see everyday", while P16 explained that "without the height I wouldn't have been able to tell which one was MacKimmie Tower" (a tall landmark building on campus). Another participant, P15, had been on campus for more than two years, and said that the model helped with "the odd shaped buildings you're used to seeing; that's the tall one, the shape of the buildings helped to see which was which".
148
+
149
+ While our models capture the relative heights and geometry of campus buildings, they still fail to represent much of the finer-grained detail of the buildings themselves, including construction materials, facades, or surrounding greenery. However, our experiences projecting satellite imagery onto the model (as in Figure 1- left) highlight how additional imagery can align well with simple transparent models, providing texture and detail that can give an even richer sense of the real-world environment and further contextualize data.
150
+
151
+ § 5.3 COMPOSING VISUALIZATIONS
152
+
153
+ Any single analysis often involves data from a variety of sources. Visualization designers typically look for ways to join datasets directly using some shared information (an explicit shared key, dates, etc.) in order to visualize them together as a single view. When this is not possible, designers often generate multiple independent visualizations and display them together, using dashboards and overlays to support visual comparison between them.
154
+
155
+ Spatial and environmental datasets often present a unique challenge, since they frequently use different levels of hierarchical organization which can make it difficult to join datasets directly. Many architecturally-relevant datasets refer to specific point locations (latitude-longitude) or spatio-temporal paths (like the walking trajectories of individuals or vehicles). However, others may refer to regions, buildings, rooms, and other architectural elements with very different scales. This can make it challenging to simultaneously visualize datasets with different scales together (such as building-level energy use and city- or county-level weather data). Moreover, other important pieces of data relevant to the analysis (such as the current price of electricity) may have no spatial component at all.
156
+
157
+ While most of these datasets can be plotted spatially, simply overlaying them one on top of the other quickly reduces their legibility. We illustrate how designers can use the physical geometry of scale architectural models to compose multiple visualizations together and facilitate transitions between them using the shared context of the model. Specifically, we examine how models and their subcomponents can anchor, bound, and define the geometry of visual marks, providing new opportunities for integrating multiple simultaneous views. These approaches allow designers to create composite visualizations [16] that encode more diverse combinations of data, while also creating strong associations between the components of the physical model and the related visualizations, reducing the need for labels and annotations.
158
+
159
+ Anchoring. Using an anchoring approach, the physical positions of an architectural model and its sub-elements define the position of visual marks. Simple examples include positioning visual marks at the centroids of buildings (Figure 7a) or connecting visual marks (or even whole visualizations) to pieces of a model using call-outs or connecting lines. Because anchoring only specifies the position of the visual marks and not their form, it can create a strong visual connection between the visualization and the model while still permitting a wide range of different visual encodings.
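+ Concretely, anchoring a mark at a building's centroid reduces to the standard shoelace computation over the footprint polygon. A minimal Python sketch; the footprint coordinates and the draw_circle call are hypothetical:

```python
def polygon_centroid(vertices):
    """Area-weighted centroid of a simple polygon (shoelace formula)."""
    area = cx = cy = 0.0
    for (x0, y0), (x1, y1) in zip(vertices, vertices[1:] + vertices[:1]):
        cross = x0 * y1 - x1 * y0
        area += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    area *= 0.5
    return cx / (6 * area), cy / (6 * area)

footprint = [(0, 0), (4, 0), (4, 2), (0, 2)]  # hypothetical building outline
x, y = polygon_centroid(footprint)
print(f"draw_circle(x={x}, y={y}, r=data_value)")  # mark anchored at (2.0, 1.0)
```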
160
+
161
+ Bounding. In contrast, a bounding approach uses the shape of a model and/or its sub-elements to separate and contain visualizations. This approach uses the edges and sub-components of the model to divide space to simultaneously show multiple different visualizations both outside (Figure 7b) and inside (Figure 7c). This division of space makes it possible to composite multiple separate visualizations together while creating strong visual associations between visualizations and individual pieces of the model. Bounding can also be used to carve out positive and negative spaces in and around visualizations, creating a stronger sense of alignment between the model and the visualization(s).
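+ The underlying operation is a point-in-polygon test that routes each data point to an inside or outside visualization. A minimal sketch (the footprints and sample points are hypothetical; in the actual system the masks come from the building outlines):

```python
from matplotlib.path import Path

def split_by_footprints(points, footprints):
    """Partition sample points into those inside any building footprint and those
    outside, so each region can carry a different visualization."""
    polys = [Path(fp) for fp in footprints]
    inside, outside = [], []
    for p in points:
        (inside if any(poly.contains_point(p) for poly in polys) else outside).append(p)
    return inside, outside

footprints = [[(0, 0), (2, 0), (2, 2), (0, 2)]]  # one hypothetical building
inside, outside = split_by_footprints([(1, 1), (5, 5), (0.5, 1.5)], footprints)
print(len(inside), len(outside))  # -> 2 1
```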
162
+
163
+ Defining Geometry. Alternatively, designers can also use the shape of the model to define the geometry of visual marks themselves, creating visualizations that extend the model. For example, colored strokes around the outside (Figure 7d) or inside of a model component can encode categorical or quantitative data related to that element. Similarly, designers can use the geometry of model components as the basis for data-driven shadows (Figure 7e) or extrusions that extend beyond the bounds of the model. While systems like URP [30] and MetaDesk [29] have used these kinds of cast shadows to support light and shadow studies in urban environments, we instead use a shadow metaphor to simultaneously visualize multiple abstract data streams around individual buildings.
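+ In its simplest form, a data-driven shadow is an offset copy of the footprint whose displacement encodes a value; a fuller implementation would fill the region swept between the outline and its offset. The names and values below are illustrative:

```python
import math

def data_shadow(footprint, direction_deg, magnitude):
    """Offset a building footprint to form a 'shadow' polygon whose length
    encodes a data value (e.g., that building's electricity use)."""
    dx = magnitude * math.cos(math.radians(direction_deg))
    dy = magnitude * math.sin(math.radians(direction_deg))
    return [(x + dx, y + dy) for (x, y) in footprint]

footprint = [(0, 0), (2, 0), (2, 2), (0, 2)]
# One shadow per data stream, each cast in its own direction.
for angle, value in [(0, 1.5), (90, 0.8), (180, 2.2), (270, 0.3)]:
    print(angle, data_shadow(footprint, angle, value))
```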
164
+
165
+ < g r a p h i c s >
166
+
167
+ Figure 7: Three approaches for using physical architectural models to compose visualizations. Pieces of a model can (a) anchor visual marks, (b, c) bound and mask visual marks, or dictate the geometry and encodings of visual elements like (d) borders and (e) shadows.
168
+
169
+ § 5.3.1 PROTOTYPE VISUALIZATIONS
170
+
171
+ Our two example visualizations each use a combination of these operations to create composite visualizations that showcase multiple datasets in and around the model.
172
+
173
+ Visualizing campus climate and energy use. The first visualization (Figure 8-left) uses a bounding approach to simultaneously visualize building management data and energy use data for individual campus structures with daily climate data. We created this visualization by integrating daily heating and cooling cost data for individual campus buildings with daily minimum and maximum outdoor temperatures covering a 1-year period from 2016 to 2017. By default, we use the interior of individual buildings to visualize daily heating and cooling costs in that structure, which we encode using a red-blue color ramp. Meanwhile, we use the area around the buildings to visualize a temperature gradient for the same day. Viewers can also toggle the visualization to display electricity and water use inside individual buildings and use a time-series plot below the map to scroll through or play back the entire year's worth of data. Seeing the climate data and energy use together might allow for easy anomaly detection. For instance, viewers can quickly detect if a building is showing high cooling levels even during cold weather events, signalling potential mechanical issues or data quality concerns.
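+ One plausible form of such a red-blue ramp is sketched below; the exact colors and normalization in our visualization differ, and the net heating-minus-cooling mapping is an illustrative simplification:

```python
def heating_cooling_color(heating, cooling, max_cost):
    """Map daily heating vs. cooling cost onto a red-blue diverging ramp.
    Returns (r, g, b): red = heating-dominated, blue = cooling-dominated."""
    t = max(-1.0, min(1.0, (heating - cooling) / max_cost))  # -1 .. 1
    if t >= 0:   # interpolate white -> red
        return 255, int(255 * (1 - t)), int(255 * (1 - t))
    return int(255 * (1 + t)), int(255 * (1 + t)), 255  # white -> blue

print(heating_cooling_color(heating=90, cooling=10, max_cost=100))  # mostly red
print(heating_cooling_color(heating=5, cooling=60, max_cost=100))   # mostly blue
```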
174
+
175
+ We also examine the use of anchoring and geometry approaches via a second style of visualization in the work area to the right of the main campus map. Here, viewers can examine individual buildings outside of the geographic constraints of the main map, allowing them to display more datasets simultaneously. Here, we encode buildings' overall energy use via the size of a circle anchored at the building's center. Buildings also cast data-driven shadows showing their heating, cooling, electricity, and water use independently.
176
+
177
+ Visualizing human movement on campus. Our second example (Figure 8-right) uses the model to visualize human movement across the campus. We based this visualization on anonymized smartphone location data collected by Galpern et al. in 2017 [11]. This dataset includes 5,530 unique paths drawn from the location histories of 208 students and provides a snapshot of movement patterns across the university over a 4-year period. Because plotting the entire set of paths results in considerable visual clutter, we use the geometry of the model to aggregate and simplify these paths. By default, we bound the visualization using the outlines of campus buildings-showing individual paths colored by the movement direction in outdoor areas, but use solid colors to encode aggregated occupancy inside each building. Viewers can also manipulate the visualization to access additional data by interacting directly with the building models.
178
+
179
+ § 5.4 MANIPULATING AND AUTHORING VISUALIZATIONS
180
+
181
+ Because of their size and shape, architectural models can also serve as graspable tokens, which viewers can use to interactively control visualizations associated with them. Depending on scale and level of detail, models can also be broken down in a variety of different ways, separating pieces into city sectors, blocks, buildings, or even parts of buildings such as floors or staircases to create additional controls. Viewers can then interact directly with the model, moving and arranging pieces to perform a variety of analytic operations.
182
+
183
+ Broadly, tangible interactions allow people to "grasp & manipulate bits by coupling bits with physical objects" [14] and offer a number of benefits, including making user interfaces "more manipulable by using physical artifacts" [9]. Interacting with physical objects can offer a tactile and embodied way of exploring the relationships in complex representations, providing "scaffolds" or cognitive aids that help people solve problems that would be more difficult using "brain-internal computation" [6]. Moreover, tangible interaction can be a valuable tool for embodied sensemaking [13].
184
+
185
+ The use of tangibles on tabletops has been widely explored in other domains, but presents a particular set of challenges and opportunities for architectural models. On one hand, architectural tabletop models are a core component of architectural design practice, where scale models are still routinely crafted and manipulated by hand and serve as a locus of design exploration [12]. On the other hand, in contrast to other instances of tangibles on tabletops, architectural models as input devices have limited degrees of freedom, constrained by the physical characteristics of the models themselves. (For example, most building models have a natural up and down, and thus are unlikely to support rotation around multiple axes.) Despite these constraints, physical models offer a rich set of possible interactions via which viewers can reconfigure models to gain new information, while simultaneously leveraging the recognizable form and physical properties of the pieces themselves.
186
+
187
+ Based on these insights, we used our two prototypes to examine four specific interaction techniques (Figure 9) that use buildings as physical interaction tokens. These include several techniques in which viewers manipulate models to alter visualizations while preserving the original spatial layout. We also showcase how models can support grouping, reorganizing, and re-configuring visualizations when these spatial constraints are relaxed.
188
+
189
+ Reveal. In a reveal interaction, picking up a piece of the model hides or shows information in the visualization. These simple interactions work well when models are placed in a fixed geospatial layout (like the campus map) and translating or rotating them would disrupt that configuration. In these cases, reveal interactions can trigger queries and filters or change properties of the underlying visualizations that do not impact their layout. For example, our movement visualization introduces a reveal interaction (Figure 9a) in which lifting a building off the tabletop hides the occupancy data for that building and reveals the raw movement paths underneath. This particular interaction builds on the intuition of lifting a physical object to reveal the area or objects beneath it.
190
+
191
+ Assemble. Conversely, in an assemble interaction, repositioning pieces of the model on the map serves as a mechanism for constructing new visualizations that selectively reveal information associated with individual pieces while still retaining a fixed spatial layout. We explore this concept in our movement visualization by allowing viewers to clear the tabletop of all models, then selectively re-add buildings to reveal only the paths that pass through all of them (Figure 9b). These new views make it possible to examine distinct subsets of the data, letting viewers examine specific flow paths and bottlenecks on campus, while reducing clutter both on the tabletop and in the visualization.
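+
+ Conceptually, the assemble interaction is a conjunctive filter over the path set. The Kotlin sketch below captures this logic; the data types and the empty-table behavior are our assumptions, not details of the actual implementation.
+
+ ```kotlin
+ // Each path records the set of buildings it passes through.
+ data class MovementPath(val id: Int, val buildingsVisited: Set<String>)
+
+ // Keep only the paths that pass through every building currently on the table.
+ fun assembleFilter(paths: List<MovementPath>, onTable: Set<String>): List<MovementPath> =
+     if (onTable.isEmpty()) emptyList()  // a cleared tabletop shows no paths (assumed)
+     else paths.filter { it.buildingsVisited.containsAll(onTable) }
+ ```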
192
+
193
+ Extract. In an extract interaction, pieces of the model can be repositioned to create new visualizations that ignore the spatial constraints of the original model, allowing viewers to create dramatically different visual configurations. By extracting models from their geospatial context and placing them into a more flexible space, viewers can surface additional information that might be hidden or occluded in a spatial layout. In both of our example visualizations, we support these interactions by including a work area to the right of the main map. In the movement visualization, we use this extract interaction to display building names and encode information about the number of paths that pass through the building. Meanwhile, in the climate visualization, placing objects into the work area reveals data-driven shadows that simultaneously show that building's heating, cooling, electricity, and water use (Figure 9c). Viewers can also re-position buildings in the work area to create clusters, orderings, and other layouts. Because the models retain the recognizable form of the original building, they remain easy to identify and reason about even when removed from their original geospatial locations. As in prior systems like reacTable [17], tangible models could also be used to dynamically construct new kinds of charts, including network visualizations.
194
+
195
+ <graphics: Figure 8>
196
+
197
+ Figure 8: Screenshots of our visualizations without the physical model. The campus climate visualization (left) shows daily heating (red) and cooling (blue) energy for individual campus buildings, with that day's temperature gradient in the background. The work area at right shows heating, cooling, water, and electrical usage for specific buildings. The movement visualization (right) overlays movement paths with occupancy data from inside the buildings, shown here with paths visible both inside and outside of buildings.
198
+
199
+ <graphics: Figure 9>
200
+
201
+ Figure 9: Four interaction techniques (reveal, assemble, extract, and reorient) that use manipulation of physical models to interactively control the layout and detail of the visualizations.
202
+
203
+ Reorient. Similarly, in a reorient interaction, pieces of the model can be rotated on the tabletop to provide additional input to the system. While this may be impractical in geospatial layouts where pieces sit close together, the rotation of pieces in non-spatial layouts can provide a rich, continuous input mechanism associated directly with a specific building. In our movement visualization, we examine how these rotation interactions could be used to filter the underlying data based on direction of travel (Figure 9d). Viewers can rotate models that they have placed in the work area like dials. These buildings then serve as simple angular selection widgets [25] that filter the main visualization to show only the traffic that passes through that building in a specific range of orientations.
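+
+ The underlying filter can be read as an angular window test on each path's heading through the rotated building. A minimal Kotlin sketch follows; the half-window width and data types are illustrative assumptions.
+
+ ```kotlin
+ import kotlin.math.abs
+
+ // A path segment's heading (in degrees) as it passes through a building.
+ data class Heading(val pathId: Int, val degrees: Float)
+
+ // Keep segments whose heading lies within +/- halfWindow of the token's rotation.
+ fun angularFilter(segments: List<Heading>, tokenDeg: Float, halfWindow: Float = 30f): List<Heading> =
+     segments.filter { s ->
+         // Smallest signed angular difference, normalized into [-180, 180).
+         val diff = ((s.degrees - tokenDeg + 540f) % 360f) - 180f
+         abs(diff) <= halfWindow
+     }
+ ```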
204
+
205
+ § 6 DISCUSSION
206
+
207
+ Our ongoing process of design, reflection, consultation, and validation surfaced a variety of implications for future systems that integrate visualizations and architectural models. In our discussion, we offer four takeaways for layering multiple datasets with physical models. While we have discussed the potential for simultaneous worlds in the context of architectural models, we also highlight how the concept may hold promise in other application domains.
208
+
209
+ § 6.1 MODELS AND DATA GRANULARITY
210
+
211
+ Physical models are static objects limited to the scale at which they are fabricated, which in turn limits the scale and granularity of the data visualizations they anchor. This can present challenges if the level of detail of the data and visualization are not compatible.
212
+
213
+ In our energy visualizations we map energy use to the color of the building outline on a building-by-building basis. This suits the granularity of the current data, which is monitored by a single meter in each building. Both the energy managers and sustainability directors preferred this scale because it matched the scale they used to analyze the university systems. They also emphasized the value of seeing energy use data together with a building's facade and volume, since this combination reveals relationships between energy use and size that are not visible in spreadsheet data, with the energy manager emphasizing, "we never see the data this way, it brings up other things to think about."
214
+
215
+ The architects, however, questioned the usefulness of visualizing data for entire buildings, as they felt that seeing information on a room-by-room basis would be more likely to show inefficiencies and other issues. This way, operators familiar with a "normal pattern of usage" for each building could spot outliers or anomalies within it. Additionally, one architect suggested that the system might be used as a control panel in other applications, for example for airport security and logistics, again suggesting that showing data on a floor-by-floor or room-by-room basis might be more useful for detecting patterns.
216
+
217
+ However, using models as a substrate for more granular data poses several practical challenges. For example, surfacing more granular data on a campus-scale model composed of individual buildings would likely reduce designers' ability to use models as interactive controls, since each building would now visualize multiple data points. At the same time, breaking models into smaller pieces makes them more difficult to manipulate and to track. In these cases, designers may need to explore hybrid solutions that use models to show data at one level of abstraction but support exploration of more granular data using other representations. For example, a tabletop system like ours might show building-level aggregates on the model itself, while displaying floor-by-floor or room-by-room data in the work area next to the model.
218
+
219
+ § 6.2 OPTIMIZING CITY-SCALE SYSTEMS
220
+
221
+ Campus, city, and neighborhood-scale architectural models present opportunities for understanding and evaluating larger meta-systems together with their component pieces. Interactive and modular physical models can help facilitate this interplay, but their physical nature limits the potential for analyses that bridge more distant scales.
222
+
223
+ In our discussions, the campus operations manager appreciated seeing combined visualizations of multiple types of data at one time, noting that "we can see the sustainability of the campus as a whole, which complements and expands our understanding of energy use for whole building optimization, which considers energy management of the campus as a whole." Going forward, the goal of the university is to identify excess energy and store it for future use or transfer it to other areas nearby when required. For example, if a building such as an ice skating arena is producing excess heat, the operations department might store or transfer the heat to use in other nearby buildings.
224
+
225
+ Integrated visualizations like our climate tool could make it easy to monitor where on campus excess cooling or heating is occurring, and which buildings close by are in need of that type of energy. The facilities manager noted that the geospatial layout made it easy to see the energy loop from the plant to each building and back again. These considerations are especially important when evaluating potential building upgrades, where even incremental reductions in energy use in large, well-designed buildings can translate into substantial overall savings. Like the architects' calls for more granularity, the operations manager considered aspects at a different scale than the one we designed our model for: they needed to keep an eye on the big picture and were thus more inclined to consider campus design as a system. In terms of using architectural models, this suggests exploring ways that multiple pieces of a model might connect physically so they can be treated as one.
226
+
227
+ § 6.3 COLLABORATION AROUND SCALE MODELS
228
+
229
+ The use of tabletop tools as collaboration platforms is well-documented [26, 27] and our feedback supports these findings. Moreover, our interviews with stakeholders and the response to our public demos suggest that situating visualizations using a model provides a strong and engaging entry point and encourages viewers to treat the tabletop as a collaborative tool. Additionally, the administrators we worked with felt that the combination of visualization and model would make it easier for non-technical users to understand the relationship between buildings' form and their energy use. Administrators also highlighted the potential for using the model as a "control center" on which to visualize diverse situational and operational data. They noted that such a display could be used by multiple departments on campus to help develop a better understanding of their operations and thus aid interdepartmental meetings. Similarly, the operations and sustainability managers emphasized how this kind of model could help frame public discussions around proposed buildings and retrofits, allowing stakeholders to more readily appreciate both the physical interplay of buildings and their relative energy footprints.
230
+
231
+ § 6.4 USING SCALE MODELS FOR EXPLORATION AND SIMULATION
232
+
233
+ Our prototypes all used detailed models of the current campus to visualize historical and real-time data. However, our discussions with architects highlighted the potential for models to support the exploration of predictive simulations and future scenarios, and we consider this a particularly rich area for future work. In particular, one of the architects surfaced an essential distinction between models that reflect the current state of the world and those that embody alternative possibilities. Currently, most tangible tabletop systems designed for planning or educational purposes [27] represent abstract scenarios using generic or primitive tangible forms. These abstract scenarios can allow participants to model and imagine many different potential campus designs free from the constraints of the current configuration. However, energy managers and other administrators are interested in how the model reflects the campus operationally, as it is in real life.
234
+
235
+ Considering how to support both monitoring and simulation scenarios is an ongoing challenge with numerous trade-offs. For example, representing buildings using generic geometric primitives frees viewers to imagine a variety of possible design alternatives, but may miss opportunities for situating predictive simulations in an accurate architectural context. Going forward, modular, reconfigurable, or shape-changing models have the potential to address these challenges, allowing the same models to transition between abstract and realistic forms as needed. This suggests that the next steps for integrating architectural models and tabletop user interfaces might look to recent work in shape-changing interfaces [23]. We encourage future work exploring how to build models that support simulating "what might be".
236
+
237
+ § 7 CONCLUSION
238
+
239
+ We present a design exploration examining how physical architectural models on tabletops can anchor visualizations of multiple complementary datasets. Our explorations detail how physical models offer new potential for supporting observations that integrate contextual and ambient information from multiple "Simultaneous Worlds". Specifically, our work highlights how physical models can help situate and compose multiple visualizations together, while also serving as tangible tokens that allow viewers to manipulate and author new representations. The layering of heterogeneous data streams around a physical model creates new opportunities for situated data analysis, which we are actively exploring in our continuing research. Through both our design explorations and reflections on the development process, we hope to lay the groundwork for even richer integration between visualizations, physical architectural models, and interactions, including ones that extend beyond the context of tabletops. Moreover, we hope this work provides inspiration for other forms of physical data models that support situated and embedded visualization with embodied and fluid interactions.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/SMxl-K4pG9/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,383 @@
1
+ # FaceUI: Leveraging Front-Facing Camera Input to Access Mid-Air Spatial Interfaces on Smartphones.
2
+
3
+ First Last*
4
+
5
+ Author Affiliation
6
+
7
+ First Last†
8
+
9
+ Author Affiliation
10
+
11
+ First Last‡
12
+
13
+ Author Affiliation
14
+
15
+ ![01963e6b-d02c-7c86-be81-dbbafd042d3b_0_218_428_1362_427_0.jpg](images/01963e6b-d02c-7c86-be81-dbbafd042d3b_0_218_428_1362_427_0.jpg)
16
+
17
+ Figure 1: A FaceUI-based calendar app. A user can access calendar events on a date by moving the phone in mid-air around their face.
18
+
19
+ ## Abstract
20
+
21
+ We present FaceUI, a novel strategy to access mid-air face-centered spatial interfaces with off-the-shelf smartphones. FaceUI uses the smartphone's front-facing camera to track the phone's mid-air position relative to the user's face. This self-contained tracking mechanism opens up new opportunities to enable mid-air interactions on off-the-shelf smartphones. We demonstrate one possibility that leverages the empty mid-air space in front of the user to accommodate virtual windows, which the user can browse by moving the phone through that space. We inform our implementation of FaceUI by first studying essential design factors, such as the comfortable face-to-phone distance range and appropriate viewing angles for browsing mid-air windows and visually accessing their content. After that, we compare users' performance with FaceUI to their performance with a touch-based interface in an analytic task that requires browsing multiple windows. We find that FaceUI offers better performance than the traditional touch-based interface. We conclude with recommendations for the design and use of face-centered mid-air interfaces on smartphones.
22
+
23
+ Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods
24
+
25
+ ## 1 INTRODUCTION
26
+
27
+ When using touch-based input on a smartphone, people typically hold the phone more or less in front of the face. This posture allows easy visual access to screen content. While interacting, keeping the phone in a stationary position is generally preferable, and, accordingly, most smartphone interfaces assume a stable in-front-of-the-face posture. However, many people are also skilled at coping with sub-optimal situations in which the phone is not held still in front of the face, such as texting while walking. In this paper, we explore how to design smartphone interfaces that require the user to deliberately move the phone in the space in front of the face as part of the interaction. We use the high-resolution front-facing camera on a standard smartphone together with machine learning algorithms [10] to track the spatial location of the phone relative to the user's face. This allows us to integrate the large empty space in front of the user into new spatial interactions and user interfaces.
28
+
29
+ Prior research has explored ways to extend a smartphone's input capabilities by shifting the interaction space into the empty in-air space surrounding users' bodies or their mobile devices. For instance, Virtual Shelves [27] allows users to point their hand inside a hemisphere in front of their body to access a set of discrete virtual and invisible items, relying heavily on the users' spatial recall. Similarly, the Imaginary Interface [11] is a mid-air interface in front of the user's body that can be used for pointing and drawing activities. In more recent work, Hasan et al. [19] present the AirPane system and demonstrate how the mid-air space surrounding a mobile device can be used for browsing information in an e-commerce application. These and most other prior projects that leverage around-body or around-device interactions rely on external tracking systems, which is not practical in real-life usage situations. Furthermore, most earlier projects are also limited in that they either do not provide any visual representation of the in-air space (and its interaction objects) at all, or they provide very limited visual information that is typically decoupled from the actual location within the in-air space.
30
+
31
+ In this paper, we present FaceUI, an approach that avoids these shortcomings. FaceUI is a novel strategy that leverages the mid-air space in front of the user. FaceUI uses a smartphone's built-in front-facing camera to detect and track the phone's position relative to the user's face. This self-contained tracking approach allows visual access to the in-air space, since the screen content is updated depending on the phone's in-air location and the virtual content at that location. The concept is visualized in Figure 1, where a user navigates a FaceUI-based calendar.
32
+
33
+ To the best of our knowledge, ways to leverage face-centered in-air spaces to access virtual user interfaces (UIs) with off-the-shelf smartphones have never been explored before. With two user studies, we first investigate how the in-air space can be structured to accommodate virtual UIs used for information exploration on smartphones. We identify the comfortable phone-to-face distance range for accessing virtual UIs in the in-air space and suitable viewing angles for browsing and inspecting content that resides in the in-air space. We use this knowledge to design a FaceUI-based calendar application. In a third user study, we evaluate users' performance in a calendar browsing task, comparing our FaceUI-based calendar with a touch-based calendar interface. Our results show that the FaceUI approach can offer considerable advantages compared to traditional touch-based interfaces. We end our exploration by showcasing further FaceUI-based applications.
34
+
35
+ ---
36
+
37
+ *e-mail: author@email.com
38
+
39
+ † e-mail: author@email.com
40
+
41
+ ‡ e-mail: author@email.com
42
+
43
+ ---
44
+
45
+ Accordingly, our contributions include: 1) FaceUI, a novel face-centered spatial in-air interface approach for off-the-shelf smartphones; 2) an exploration of suitable design parameters for FaceUI-based applications; 3) a performance comparison between a FaceUI application and a standard touch interface in an analytic task; and 4) a showcase of further promising FaceUI-enabled interactive applications that demonstrate the potential of face-centered smartphone interfaces.
46
+
47
+ ## 2 BACKGROUND AND RELATED WORK
48
+
49
+ We review prior work that has explored ways to design spatial interfaces, interaction spaces, and interaction techniques. These earlier projects inspired the design of our face-centered spatial user interface, FaceUI. The prior research most closely aligned with components of FaceUI falls mainly under around-device interaction, on- and around-body interaction, and face-centered input.
50
+
51
+ ### 2.1 Around-Device Interaction
52
+
53
+ There has been substantial prior research exploring the use of the mid-air space around mobile devices. Researchers have demonstrated that this space can be used for novel interactions, such as virtual content browsing and selection [14, 17, 19, 23, 37], map navigation [18, 21], mode switching [21], and typing [30]. For instance, AD-Binning [17] leveraged the empty 2D space around a smartphone to off-load content into that space and browse it there. The authors further showed that the mid-air space could facilitate faster access to items than standard touch input. In similar work, Hasan et al. [19] showed that the 3D in-air space around a device could be used for browsing m-commerce applications. Researchers have also investigated ways to track users' activities around the device with commercial tracking solutions (e.g., Vicon tracking [18, 19]) or with camera- or sensor-based solutions (e.g., depth cameras [7, 24], distance sensors [6, 23]). Though these solutions offer precise motion-capture data, they require the environment, the user, or the device to be instrumented with sensors, which makes mobile devices less practical to use in public spaces.
54
+
55
+ ### 2.2 On- and Around-Body Interaction
56
+
57
+ Prior work investigated ways to use the on- and around-body space for designing novel interactions with devices [3, 5, 9, 27]. For instance, researchers [11-13] explored the use of on-body locations, such as the palm, to access on-screen content. Imaginary Phone [12] used the user's palm as an input surface for the iPhone. In similar work, Gustafson et al. [13] investigated a palm-based imaginary interface for supporting visually impaired users. Imaginary Interfaces [11] allowed users to perform spatial interactions on the empty palm without visual feedback. In addition, the palm has been used to trigger pre-defined functions [26], to perform 3D rotation [24], and as an input space for augmenting keyboards [34]. Similarly, researchers explored the skin as an interactive touch surface [16, 39, 40], commonly using external depth cameras to detect and track hand and finger activities such as tapping and sliding on body parts.
58
+
59
+ Researchers also investigated using the mid-air space around the body as a novel interaction space. For instance, Virtual Shelves [27] demonstrated that the mid-air space in front of users could be used to trigger shortcuts. With a study, the authors showed that users could recall shortcuts by moving their phone into a $7 \times 4$ grid on a circular hemisphere in front of them. Yee et al. [38] designed a solution allowing users to move the mobile phone to different locations around the body and change the on-screen content based on the device's location relative to the body. Ens et al. [9] designed Personal Cockpit, which leverages the around-body space to display virtual windows in head-worn displays. In similar work, Babic et al. [5] explored Gesture Drawer, a one-handed interaction technique allowing users to define imaginary interfaces and interact with them by moving their hand. Researchers have also investigated mid-air spatial interfaces for applications in mixed reality [9, 29, 35], for games [33], and for workspace navigation [22]. For instance, Lubos et al. [29] introduced kinespheres, a mixed-reality-based body-centric spatial interface within arm's reach; they received positive feedback from users on their method compared to traditional head-centered interaction for mixed reality. Yan et al. [35] explored an eyes-free target acquisition technique for mixed reality by placing targets in the around-body space. Way Out [33] is a game scenario where players can navigate through an omni-directional panorama scene by moving the device around the body using the built-in motion sensors in smartphones. In recent work, Kim et al. [22] demonstrated zooming images and maps in and out using the vision-based interface OddEyeCam, which detects and tracks the location of the mobile phone with respect to the user's body using external sensors such as wide-view RGB cameras and narrow-view depth cameras.
60
+
61
+ ### 2.3 Face-Centered Input
62
+
63
+ Prior research investigated using head and face movements as input to design new face-centered interactions on devices. For instance, Zhao et al. [41] used a combination of facial movements, device motion, and touch to design face-centered interaction techniques on smartphones. Kumar et al. [25] leveraged eye gaze to scroll mobile phone content. Yang et al. [36] used a face interpretation engine to enable face-aware applications for smartphones using the phone's front-facing camera and built-in motion sensors. Similarly, Babic et al. [4] designed Simo, which used head movement as input for pointing on a distant large display; instead of using external cameras, they used the smartphone's front-facing camera to detect face orientation. Rustagi et al. [31] explored touchless typing using head gestures detected by the smartphone's front-facing camera and used them to type on an on-screen QWERTY keyboard. We also note that the smartphone's front-facing camera can be used to design new strategies, such as rotating on-screen content on mobile devices by detecting direction changes of objects in the camera view [1, 2, 8].
64
+
65
+ The manifold opportunities of spatial mid-air interfaces, as demonstrated by earlier projects, inspire us to continue on this promising path. However, in contrast to most previous projects, which use external sensors or cameras to identify interaction gestures, we are interested in using a self-contained tracking mechanism to detect in-air movements. Similar to a few face-tracking systems [4, 31, 41], our FaceUI approach also uses the front-facing camera of a smartphone to detect changes in the relative positions of the user's face and the phone. However, FaceUI differs from earlier systems in that it does not rely on any sensor other than the smartphone's front-facing camera. Furthermore, we aim at interactions where the user keeps the head still while moving the phone; earlier approaches [4, 31, 41] require the user to do the opposite, to move the head while holding the phone in a fixed position. In this way, we intend to create the sensation of a hemispherical interaction space that is anchored in front of the user's face but moves along with the user (through the self-contained tracking). In this first exploration of such an interaction hemisphere, we focus on virtual application windows that are located inside the hemisphere. When the user moves the smartphone to a location inside the hemisphere, the content of the virtual window that resides at that location is displayed on the smartphone's screen. When the user re-positions the smartphone inside the hemisphere, the screen displays the content of the virtual window that resides at the new position.
66
+
67
+ ![01963e6b-d02c-7c86-be81-dbbafd042d3b_2_229_151_1346_305_0.jpg](images/01963e6b-d02c-7c86-be81-dbbafd042d3b_2_229_151_1346_305_0.jpg)
68
+
69
+ Figure 2: Study 1 task. Using (a) horizontal and (b) vertical mid-air movements to select invisible in-air items. (c) Task prompt. (d) A participant holding the phone in the neutral start position, straight in front of the face.
70
+
71
+ Next, we describe a few central aspects of the face-detection software and the setup we used in our user studies. After that, we present our three studies in turn.
72
+
73
+ ## 3 FACE DETECTION SOFTWARE AND STUDY SETUP
74
+
75
+ The self-contained tracking software we developed for our FaceUI approach is based on the Face Detection API [10] in Google's ML Kit (machine learning for mobile developers). The API provides a convenient and reliable way to track the position and orientation of a smartphone relative to the user's face when the front-facing camera is used. Among the available face-tracking-related measures, our software relies on yaw data (the smartphone's movements to the left or to the right relative to the detected face), pitch data (the smartphone's up and down movements in the vertical direction relative to the detected face), and distance data (the current distance between the detected face and the lens of the front-facing camera on the smartphone). Our software does not use any roll-related information. The Face Detection API delivers $0^\circ$ for both yaw and pitch when the user holds the phone straight in front of the face.
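+
+ To make the tracking concrete, the sketch below shows a minimal Kotlin version of such a facility. Using ML Kit's head Euler angles as yaw/pitch proxies and estimating distance from the face bounding-box width are our assumptions (the paper does not detail how the distance measure is derived), and the calibration constants are hypothetical.
+
+ ```kotlin
+ import com.google.mlkit.vision.common.InputImage
+ import com.google.mlkit.vision.face.FaceDetection
+ import com.google.mlkit.vision.face.FaceDetectorOptions
+
+ data class FacePose(val yawDeg: Float, val pitchDeg: Float, val distanceCm: Float)
+
+ private const val AVG_FACE_WIDTH_CM = 15f   // assumed average face width
+ private const val FOCAL_LENGTH_PX = 1200f   // hypothetical front-camera focal length in pixels
+
+ private val detector = FaceDetection.getClient(
+     FaceDetectorOptions.Builder()
+         .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_FAST)
+         .build()
+ )
+
+ fun trackPose(image: InputImage, onPose: (FacePose) -> Unit) {
+     detector.process(image)
+         .addOnSuccessListener { faces ->
+             val face = faces.firstOrNull() ?: return@addOnSuccessListener
+             // Pinhole-camera approximation: the face appears smaller the further away it is.
+             val distanceCm = AVG_FACE_WIDTH_CM * FOCAL_LENGTH_PX / face.boundingBox.width()
+             // ML Kit reports 0 degrees for both angles when the face is straight ahead.
+             onPose(FacePose(face.headEulerAngleY, face.headEulerAngleX, distanceCm))
+         }
+ }
+ ```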
76
+
77
+ Restrictions related to COVID-19 prevented us from meeting our study participants face-to-face. Instead, we conducted our studies remotely using teleconferencing software. Accordingly, our participants were required to have a laptop or a desktop computer with a stable Internet connection, a microphone, loudspeakers, and a webcam. In the studies, our participants used their own smartphone to run the study software, which was designed for any phone running Android 4.2 to 11. Our participants received the study software (i.e., the apk file) and all necessary instructions over email, and we guided them through the installation process at the beginning of the study session. The data logged during a study session was automatically transferred from the participant's phone to a Cloud Firestore database when the participant had completed the last study task.
78
+
79
+ We ran all three of our studies remotely, where participants used the study apps on their smartphones in the wild as opposed to a controlled lab environment. All participants sat in front of their webcam while completing the study tasks. In each study, a session lasted approximately 45 minutes, including instructions, practice trials, timed study trials, breaks, and completion of questionnaires. As the study apps were designed for the Android platform, we only recruited participants who possessed an Android smartphone.
80
+
81
+ ## 4 STUDY 1: EXPLORING DIRECTION AND DISTANCE
82
+
83
+ Prior research has reported arm fatigue and "heavy arm" issues related to mid-air interactions [15], and that working with a bent arm in mid-air is more comfortable and less strenuous than working with a stretched arm [20]. Since FaceUI involves mid-air hand movements, arm fatigue is a potential problem. Moreover, with FaceUI, the mid-air movements need to be constrained such that the user's face stays inside the front-facing camera's field of view.
84
+
85
+ With FaceUI, we envision the mid-air interaction space as a semicircular space in front of the user's face. Through a pilot test (with five participants) we found that the face tracking works best when the phone is between 5 and 80 centimetres away from the user's face and the user moves the phone within a longitudinal range of $90^\circ$ (from $-45^\circ$ to the left of the user's face to $45^\circ$ to the right of the user's face) and a latitudinal range of $70^\circ$ (from $-35^\circ$ below the user's nose to $35^\circ$ above the user's nose). Whereas we know that movements inside this space are accurately tracked, we do not know how accurately, quickly, and comfortably people can navigate around in this mid-air space. Accordingly, we want to chart out the suitable dimensions and the granularity of the mid-air interaction space for FaceUI in our first study.
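+
+ These pilot-test bounds translate directly into a validity check on each tracking sample; a minimal Kotlin sketch (parameter names are illustrative):
+
+ ```kotlin
+ // True when a sample lies inside the reliably tracked volume from our pilot test:
+ // 5-80 cm from the face, +/-45 degrees yaw, +/-35 degrees pitch.
+ fun inTrackingVolume(distanceCm: Float, yawDeg: Float, pitchDeg: Float): Boolean =
+     distanceCm in 5f..80f && yawDeg in -45f..45f && pitchDeg in -35f..35f
+ ```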
86
+
87
+ ### 4.1 Study Design and Study Task
88
+
89
+ We oriented the study task and study design of our first study according to previous projects that have explored the dimensions and the granularity of the mid-air space in front of the user, e.g., the Virtual Shelves [27] and AD-Binning [17] projects. We used a simple item selection task where a trial consists of moving the smartphone to a specified position in mid-air to select the virtual item at that position. We investigated horizontal movements and vertical movements when the phone is close or far from the user's face.
90
+
91
+ Figure 2 visualizes the study task and setup. We divided the mid-air space along the horizontal direction into seven equally wide one-dimensional regions, or items, each $12.85^\circ$ 'wide' (Figure 2a). We used five one-dimensional regions, or items, in the vertical direction, each $12.85^\circ$ 'high' (Figure 2b). From a user's perspective, the size of these items in the air in front of the face depends on the distance between the phone and the face: the further away from the face, the larger the item becomes. Accordingly, we decided to also test movements (horizontal and vertical) performed close to the face and far away from the face.
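+
+ This division amounts to binning the tracked yaw angle into item indices. The sketch below shows one plausible Kotlin implementation for the horizontal direction; the clamping and 1-based numbering are our choices, not taken from the study software.
+
+ ```kotlin
+ import kotlin.math.floor
+
+ // Map a yaw angle in [-45, 45) degrees to one of seven 12.85-degree-wide items.
+ // Item 4 (straight ahead) is the start region used in the study.
+ fun horizontalItem(yawDeg: Float): Int {
+     val index = floor((yawDeg + 45f) / 12.85f).toInt() + 1
+     return index.coerceIn(1, 7)
+ }
+ ```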
92
+
93
+ #### 4.1.1 Study design
94
+
95
+ With this, we arrive at two independent factors for our study: 1) movement Direction: horizontal and vertical, and 2) Distance: close and far. Close represents the distance range within which participants commonly and comfortably hold their phone when accessing on-screen content with touch. We regarded any distance beyond that range as far. However, where the comfortable range ends is likely to differ between participants (depending on arm length and preference). Therefore, it is critical to have a user-dependent threshold value rather than using a common value for all participants. We calibrated the individual value for each participant at the beginning of the study session. We asked each participant to indicate the phone-to-face distance at which moving the phone in front of the face started to feel awkward and less comfortable. Once the phone had reached such a location, the study app showed the distance between the face and the phone (in centimetres) on the screen. We asked the participant to move the phone left, right, to the middle, up, and down and share the distance data. We calculated the participant's upper value for the 'close' (i.e., comfortable) distance in the horizontal direction by averaging the left, right, and middle values; for the vertical direction, we averaged the up, down, and middle values.
96
+
97
+ ![01963e6b-d02c-7c86-be81-dbbafd042d3b_3_298_168_1201_488_0.jpg](images/01963e6b-d02c-7c86-be81-dbbafd042d3b_3_298_168_1201_488_0.jpg)
98
+
99
+ Figure 3: Result of Study 1. Mean trial time for close and far distance in (a) the horizontal direction and (b) the vertical direction. (c) Mean error rate for the horizontal and vertical directions at close and far distances. Error bars: 95% CI.
100
+
101
+ We used a within-subjects study design. All participants performed four series of six blocks of trials, one series of blocks for each of the four Distance-Direction combinations: close-horizontal, close-vertical, far-horizontal, and far-vertical. Blocks in the horizontal direction consisted of six trials, one trial for each of the six target items (1, 2, 3, 5, 6, and 7, cf. Figure 2a) in random order. Blocks in the vertical direction consisted of four trials, one trial for each of the four target items (1, 2, 4, and 5, cf. Figure 2b) in random order. Accordingly, each participant performed 120 trials: one block series of 36 horizontal trials at close distance, one block series of 24 vertical trials at close distance, one block series of 36 horizontal trials at far distance, and one block series of 24 vertical trials at far distance. Half of the participants started with the two block series at close distance and then completed the two block series at far distance; the other half used the opposite order. The order of the two direction series within a distance was random.
102
+
103
+ #### 4.1.2 Task procedure
104
+
105
+ To start a trial, the participant moved the phone to the middle region, straight in front of the face. In the horizontal direction, this corresponded to Item 4 in Figure 2a. In the vertical direction, Item 3 in Figure 2b was used as the start region. Once the participant moved the phone inside the start region, the screen turned green and displayed information for the upcoming trial, including the target prompt with the item number to select next, as shown in Figure 2c. A trial started when the participant pressed down the thumb on the screen. If the phone was moved outside the start region before pressing down with the thumb, the screen turned red and showed instructions to move the phone into the start region. A thumb-press in the start region started timing for the trial. A selection was registered and the trial time stopped when the thumb was released after the phone had been moved into one of the items (or regions) outside the start region. Speech output informed the participant whether they had selected the correct item by playing "Correct selection" or "Wrong selection", respectively. Erroneous trials were re-queued at a random position among the unfinished trials within the current block.
106
+
107
+ During a running trial we relied on audio to inform participants about the current position of the phone. The app provided speech output when i) the phone entered a new item, by saying the number of the item; ii) the participant moved the phone to the wrong distance, by playing "Move the phone further away" in far conditions or "Move the phone closer" in close conditions; and iii) the face tracking software lost track of the face, by playing "Face out of camera view". Working with audio guidance was important: in our first study, we wanted to focus on the motoric aspects and movement properties that determine the dimensions of FaceUI's interaction space. We wanted to exclude aspects that relate to how well a user can read screen content while moving the phone in mid-air, such as the size of screen content and the viewing angle and distance. We return to such visual issues in our second study.
108
+
109
+ ### 4.2 Participants
110
+
111
+ We recruited twelve right-handed participants (mean age 27.08 years, s.d. 5.98, 6 male) via on-campus flyers and word-of-mouth. All participants were daily smartphone users.
112
+
113
+ ### 4.3 Results
114
+
115
+ We first report on results regarding participants' comfortable phone-to-face distance (calibrated at the beginning of a study session), which served as the basis for each participant's individual threshold value separating the close distance from the far distance. After that, we report on trial time, error rates, and subjective ratings.
116
+
117
+ #### 4.3.1 Close/far threshold value
118
+
119
+ Across all participants, the average face-to-phone distance where movements started to feel less comfortable was 39.31 cm (s.d. 7.61) for horizontal phone movements and 39.38 cm (s.d. 7.34) for vertical movements. This critical threshold varied considerably between participants: in the horizontal movement direction it ranged between 30 and 61 cm, and in the vertical direction between 28 and 61 cm (only one participant had values greater than 50 cm).
120
+
121
+ #### 4.3.2 Trial time
122
+
123
+ The trial time analyses are based on error-free trials only. Figure 3a shows the mean trial time for each target position at both distances in the horizontal movement direction, and Figure 3b shows the corresponding results for the vertical direction. Mean trial times for close and far (across the two directions) were 2.93 sec and 2.99 sec, respectively. The overall mean trial times (across the two distances) for the horizontal and vertical directions were 3.01 sec and 2.90 sec, respectively. A $2 \times 2$ RM-ANOVA showed that there was no significant difference between the two distances ($F_{1,11} = 0.09$, $p = 0.76$) or between the two directions ($F_{1,11} = 0.29$, $p = 0.59$). A one-way RM-ANOVA (independent factor: block) indicated that participants became faster during the course of the study, with significantly longer trial times in the first block of trials than in the last two blocks ($F_{5,55} = 5.85$, $p < 0.001$). The mean trial time decreased from 3.43 sec in Block 1 to 2.81 and 2.72 sec in Blocks 5 and 6, respectively.
124
+
125
+ ![01963e6b-d02c-7c86-be81-dbbafd042d3b_4_298_149_1191_316_0.jpg](images/01963e6b-d02c-7c86-be81-dbbafd042d3b_4_298_149_1191_316_0.jpg)
126
+
127
+ Figure 4: Study 2 setup. (a) The green screen showing the instruction to move the phone to a $40^\circ$ viewing angle on the horizontal plane. (b) A participant holding the phone downward at a $40^\circ$ viewing angle and (c) upward at a $40^\circ$ viewing angle. (d) Selecting the total number of black dots on the down and up screens at a $40^\circ$ viewing angle.
128
+
129
+ In Figure 3 we also see a clear and expected pattern regarding the different target positions: selecting items at positions close to the start position (Position 4 for horizontal movements and Position 3 for vertical movements, cf. Figure 2) was quicker than selecting items further away, which require longer phone movements. We see this pattern for movements in both directions and at both the close and far distances.
130
+
131
+ #### 4.3.3 Error rate
132
+
133
+ Figure 3c shows the mean error rates for the four distance $\times$ direction combinations. A Friedman test identified a significant difference among the combinations ($\chi^2(3, N = 12) = 8.95$, $p < 0.05$), and post-hoc Wilcoxon tests (Bonferroni-adjusted $\alpha$-level from 0.05 to 0.008) revealed that the close-vertical combination was significantly less error-prone than the close-horizontal combination, with no other pairwise differences.
134
+
135
+ #### 4.3.4 Subjective feedback
136
+
137
+ We asked participants to rate the two directions and the two distances according to their overall preference on a 5-point scale with 1 = bad, 3 = neutral, and 5 = good. We found an unsurprising and strong preference for the close distance, with a mean rating of 4.52 compared to 1.91 for the far distance. Participants were less decided in their opinions regarding the two movement directions: they rated the horizontal movement direction only slightly better than the vertical direction (mean rating 4.23 vs. 3.1).
138
+
139
+ ### 4.4 Summary
140
+
141
+ Results from the subjective feedback indicate that participants had a slight preference for horizontal movements over vertical movements. However, our analyses also revealed no significant difference between the movement directions in regard to trial time. But we see a clear and unsurprising advantage for the close distance over the far distance. Accordingly, for our future FaceUI explorations, we learn that people are sensitive regarding the phone-to-face distance and that FaceUI-based applications should avoid requiring users to use large phone-to-face distances. Consequently, we continue utilizing regions along both the horizontal and vertical directions. However, we observed longer trial times for items located in certain vertical regions (e.g., Item 1) than in others. This warrants further investigation into factors, such as viewing angles, that could influence users' performance when reading screen content while holding the phone in such regions.
142
+
143
+ ## 5 STUDY 2: EXPLORING TARGET REGION AND TARGET ANGLE
144
+
145
+ Application interfaces placed in FaceUI occupy both horizontal and vertical regions of a space that moves along with the user's head. Therefore, to read content located to the right or left in FaceUI, a user needs to keep their head static and move their eyes. Prior research [32] showed that such eye movement can cause eye fatigue, pain, and tiredness. Therefore, in this study, we explored suitable viewing angles at which users can comfortably access on-screen items on smartphones.
146
+
147
+ ### 5.1 Participants
148
+
149
+ We recruited fourteen right-handed participants (mean age 26.78 years, s.d. 6.07, 7 male) via on-campus flyers and word-of-mouth. All participants were daily smartphone users. None of the participants had participated in Study 1.
150
+
151
+ ### 5.2 Factors
152
+
153
+ We considered the following factors in this study.
154
+
155
+ #### 5.2.1 Target Region
156
+
157
+ In this study, we considered placing items in two regions: vertical (up and down) and horizontal (left and right). As in the first study, we kept the middle location reserved as the starting point of a trial.
158
+
159
+ #### 5.2.2 Target Angle
160
+
161
+ We decided to place a set of targets at angles in both the horizontal and vertical regions. Based on a pilot study, we chose to place items at $\pm 20^\circ$, $\pm 30^\circ$, $\pm 40^\circ$, and $\pm 50^\circ$, where positive and negative angles indicate items in the right and left regions, respectively. Results from our pilot study showed that participants were not able to see items located above $+30^\circ$ in the upward direction. Additionally, items placed below $-40^\circ$ in the downward region were not accessible, as the phone gets very close to the body. Therefore, we used $+20^\circ$ and $+30^\circ$ for the up region, and $-20^\circ$, $-30^\circ$, and $-40^\circ$ for the down region.
162
+
163
+ ### 5.3 Procedure and Tasks
164
+
165
+ At the beginning of a trial, the participant was required to move the phone to the middle position (straight in front of the face). As long as the phone was still outside the middle position, the screen remained red and contained instructions to move the phone to the middle position. Once the phone was inside the middle position, the screen turned green and displayed the target prompt for the next target (along with block and trial counts), as seen in Figure 4a. Participants were asked to press on the screen with their thumb and move their phone to the target angle while keeping the thumb on the screen. Tapping on the screen also started a timer. The study application then removed the on-screen instructions, replaced them with an empty black window, and kept it until participants moved the phone to the target angle. We used circle counting tasks in this study, where participants were required to count the number of circles presented in two windows located at a target angle. For instance, if the target angle was $+30^\circ$ in the right region, we placed two windows, one above and one below the plane defined by the user's eye. Participants could only see the windows once they reached the instructed region and angle. They could then move their phone up and down (for the horizontal regions) or left and right (for the vertical regions) to access the windows while keeping the phone at the target angle (Figure 4b-c). The windows contained a random number of non-overlapping black circles between 12 and 16. Participants were required to count the total number of circles seen on both screens. Once they believed they had counted all the circles in both windows, they were asked to lift their thumb off the touchscreen. This action popped up a window containing multiple options for the sum (Figure 4d). Once they selected an answer, the application stopped the trial time, provided voice feedback on whether they were correct, and, if correct, displayed the instruction for the next trial on the screen. If incorrect, the app provided audio feedback and re-queued the trial at a random position among the unfinished trials within the block. Participants then moved the phone back to the middle and continued until all trials were finished. Note that in either case, the app sent trial-related information (e.g., task completion time, correctness) to a database server.
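+
+ As an illustration of the stimulus generation this task implies, the sketch below places 12-16 non-overlapping circles by rejection sampling; the window dimensions and circle radius are hypothetical, since the paper does not report them.
+
+ ```kotlin
+ import kotlin.random.Random
+
+ data class Circle(val x: Float, val y: Float, val r: Float)
+
+ // Place 12-16 equally sized, non-overlapping circles inside a window.
+ fun generateStimulus(width: Float = 1000f, height: Float = 600f, r: Float = 40f): List<Circle> {
+     val n = Random.nextInt(12, 17)  // 12..16 circles, as in the study task
+     val circles = mutableListOf<Circle>()
+     while (circles.size < n) {
+         val c = Circle(
+             Random.nextFloat() * (width - 2 * r) + r,   // keep the circle inside the window
+             Random.nextFloat() * (height - 2 * r) + r,
+             r
+         )
+         // Accept the candidate only if it overlaps no previously placed circle.
+         val overlaps = circles.any { o ->
+             val dx = o.x - c.x
+             val dy = o.y - c.y
+             dx * dx + dy * dy < (o.r + c.r) * (o.r + c.r)
+         }
+         if (!overlaps) circles.add(c)
+     }
+     return circles
+ }
+ ```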
166
+
167
+ ![01963e6b-d02c-7c86-be81-dbbafd042d3b_5_294_156_1202_451_0.jpg](images/01963e6b-d02c-7c86-be81-dbbafd042d3b_5_294_156_1202_451_0.jpg)
168
+
169
+ Figure 5: Result of Study 2. (a) Mean reading time for the left, right, down, and up target regions and (b) for the target angles in each target region. Error bars: 95% CI.
170
+
171
+ Each participant completed six blocks of trials for each Target Region (left, right, up, and down), where one block contained one trial for each of the Target Angles. Therefore, each participant performed 78 error-free trials (24 trials for the left region, 24 for right, 12 for up, and 18 for down). The presentation order of the target regions was randomized between participants, and the angles were presented in random order. Participants were provided with 2 blocks of practice trials. After completing all the trials, we collected the participants' feedback on their preferences regarding Target Region and Target Angle. This study took participants around 45 minutes to complete all the tasks.
172
+
173
+ ### 5.4 Results
174
+
175
+ Rather than analyzing the full trial time, we were interested in the time participants spent counting the circles, excluding the time spent moving the phone to the target position and answering the question. We call this time the Reading time.
176
+
177
+ #### 5.4.1 Reading time
178
+
179
+ We used repeated measures ANOVA and post-hoc pairwise comparisons to analyze reading time. Results showed that Target Region had a significant effect on reading time ($F_{3,39} = 3.24$, $p < 0.05$). Figure 5a shows the mean reading time for all four regions: left (mean 10.40 s), right (mean 11.16 s), down (mean 9.76 s), and up (mean 13.84 s). Post-hoc pairwise comparisons showed that Down was significantly faster than Up for accessing items. No other pairwise difference was found.
180
+
181
+ We also analyzed the reading time for each target angle in each target region. Figure 5b shows the mean reading time for all target angles in each target region. Target angle in the Right region had a significant effect on reading time ($F_{3,39} = 6.71$, $p < 0.001$). Post-hoc pairwise comparisons between target angles showed that targets at the $50^\circ$ angle were significantly slower than targets at $20^\circ$. There were no other statistically significant pairwise differences. We also observed that target angle in the Left region had a significant effect on reading time ($F_{3,39} = 7.03$, $p < 0.001$). As in the right region, targets at $50^\circ$ were significantly slower than targets at $20^\circ$. There were no other statistically significant differences between the angles.
182
+
183
+ As in the left and right regions, target angle in the Up region had a significant effect on reading time ($F_{3,39} = 23.39$, $p < 0.001$): targets at the $30^\circ$ angle were significantly slower than targets at $20^\circ$. Target angle in the Down region also had a significant effect on reading time ($F_{2,26} = 21.12$, $p < 0.001$): targets at the $40^\circ$ angle were significantly slower than targets at $20^\circ$ and $30^\circ$. No other statistically significant differences were observed.
184
+
185
+ #### 5.4.2 Subjective feedback
186
+
187
+ Participants rated each target region using a 5-point Likert scale. They preferred the right target region (mean rating 3.85) most, followed by down (mean rating 3.78) and left (mean rating 3.07). The up target region was rated as the least preferred region for accessing items (mean rating 2.28).
188
+
189
+ ### 5.5 Summary
190
+
191
+ The results show that visually accessing screen content while the phone is positioned in the upper region (up) is slower than when the phone is positioned in any other region (the lower region and the left and right areas) of the in-air space. For the target angles in each region, we see that participants' performance degrades significantly at the highest angle in each region. Accordingly, we suggest avoiding the extreme angles ($50^\circ$ for both right and left, $30^\circ$ for up, and $40^\circ$ for down) when designing an application that uses FaceUI.
192
+
193
+ ![01963e6b-d02c-7c86-be81-dbbafd042d3b_6_300_155_1192_430_0.jpg](images/01963e6b-d02c-7c86-be81-dbbafd042d3b_6_300_155_1192_430_0.jpg)
194
+
195
+ Figure 6: Calendar app interfaces. (a) A trial starts by displaying a query, e.g., "Find the number of online meetings scheduled on July 6". A tap on the start button opens a new window (b) containing calendar dates for a month at the bottom of the screen. With the touch interface, a tap on a date shows the events scheduled on that date (at the top of the screen). A user can further inspect an event by tapping on it. This action triggers a new window (c) displaying details of the event. The user can return to the previous screen by tapping the back button or swiping left. (d) With FaceUI, virtual windows containing event details are placed in front of the user and can be accessed by moving the phone in mid-air. After inspecting the event details, the user can tap a check button to open a popup window containing multiple answers.
196
+
197
+ ## 6 STUDY 3: PERFORMANCE ANALYSIS OF FACEUI WITH AN ANALYTIC TASK
198
+
199
+ The previous two studies explored design factors, such as target angle and target region, that could influence users' performance with FaceUI. In this study, we evaluate a practical usage scenario for FaceUI in which users browse multiple windows to retrieve information. To this end, we designed a calendar app and compared the performance of FaceUI with a traditional touch interface.
200
+
201
+ ### 6.1 Participants
202
+
203
+ We recruited twelve right-handed participants (mean age 25.5, s.d. 5.23, 6 male) via on-campus flyers and word-of-mouth. All participants were daily smartphone users. None of the participants had participated in Study 1 or in Study 2.
204
+
205
+ ### 6.2 Task, Procedure and Design
206
+
207
+ In this study, we used an analytic task in which users were required to review information on a calendar before reaching a decision. A trial starts by displaying the question along with a start button (e.g., "Find the number of online meetings scheduled on July 7"), as seen in Figure 6a. After reading the question, the user taps the start button, which starts the trial timer and opens a new window containing calendar dates for a month (July 2021 in our case) at the bottom of the screen (Figure 6b). Once the user selects a date (e.g., July 7) with touch, the date is highlighted in green and the calendar events for that date are displayed at the top. To decide how many calendar events to add for each date, we briefly surveyed students and faculty members and found that they commonly have 3-5 events (e.g., classes or meetings) per day, excluding weekends. Consequently, we added 3 to 5 events, each represented by an event title and time (e.g., "Department Meeting 15:00-16:00"), for each day except Saturday and Sunday. Once the user taps on an event, the app opens a new window (i.e., a "detailed view") containing detailed information about that event (Figure 6c). The design of the detailed view was inspired by Android's generic calendar applications and contains the event title, event time, persons hosting/attending the event (e.g., Host, Attendees), event type and mode (weekly meeting, online/in-person), and reminder-related information (e.g., reminder type, reminder time). After checking the detailed view, the user can tap the "back" button or swipe left to return to the previous screen and view the other events on that day.
208
+
209
+ With FaceUI, a trial also starts with a screen displaying the query prompt. Once the user taps the start button, a new window opens showing calendar dates at the bottom (Figure 6d). When designing FaceUI, we leveraged the empty mid-air space in front of the user to accommodate virtual windows for frequent browsing. We therefore used FaceUI in conjunction with traditional touch input: touch is used to select an item (e.g., a date from the calendar) and FaceUI is used for browsing content (e.g., the detailed views) by moving the phone in mid-air. Note that since a date has at most 5 events, we decided to place the detailed views only between $+40^\circ$ and $-40^\circ$ in the horizontal direction rather than placing them in a grid. Once the user selects a date, FaceUI shows the details of one event (i.e., its detailed view) on the top half of the screen, and the user can browse the other events on that date by moving the phone horizontally in mid-air (Figure 6d); phone movements along the vertical direction are ignored. After inspecting all the event details, the user taps a check button (with both the touch and FaceUI interfaces) to open a popup window containing multiple answers (Figure 6e). Once the user selects an answer and taps the select button, the app provides audio feedback on the correctness of the answer. If the selected answer is correct, the trial timer stops and the next trial is displayed on the screen. If incorrect, audio feedback for an incorrect selection is provided, and the user must redo the trial immediately.
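+
+ To make this angle-to-window mapping concrete, the sketch below shows one plausible way to turn the tracked horizontal angle into an index for the detailed-view windows. This is a minimal illustration under our own assumptions; the function and parameter names are hypothetical, and the paper does not publish its implementation.
+
+ ```kotlin
+ // Map the phone's tracked horizontal angle (yaw, in degrees) to one of the
+ // detailed-view windows laid out between -40° and +40°.
+ fun eventIndexForYaw(yawDeg: Float, eventCount: Int): Int {
+     val clamped = yawDeg.coerceIn(-40f, 40f)      // movement beyond ±40° is clamped
+     val t = (clamped + 40f) / 80f                 // normalize to 0..1
+     return (t * eventCount).toInt().coerceAtMost(eventCount - 1)
+ }
+ // Vertical movement (pitch) is simply ignored, as in the study app.
+ ```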
210
+
211
+ The study used a $2 \times 3$ within-subjects design with the factors Interface (FaceUI, Touch) and Number of events (3, 4, and 5). For each combination of factors, participants performed 10 repetitions, resulting in 60 error-free timed trials per participant. The order of Interface was counter-balanced across participants, and the order of Number of events was randomized within each Interface. Participants were given practice trials for each combination until they felt comfortable operating the two interfaces. A study session lasted approximately 45 minutes.
212
+
213
+ ### 6.3 Results
214
+
215
+ Measures: Our study task includes three sub-tasks: (i) selecting the correct date from the calendar, (ii) browsing event details (i.e., detailed views) on the selected date, and (iii) selecting the correct answer. We therefore recorded the following times: Browsing start time is the time from when participants read the question and tap the start button to when they select the correct date from the calendar; Browsing time is the time from when participants select the correct date to when they finish browsing the detailed views and tap the check button to open the interface containing the possible answers; and Selection time is the time from when they tap the check button to when they select an answer and press the submit button. In addition, we recorded trials in which participants selected a wrong answer.
216
+
217
+ ![01963e6b-d02c-7c86-be81-dbbafd042d3b_7_302_163_1194_383_0.jpg](images/01963e6b-d02c-7c86-be81-dbbafd042d3b_7_302_163_1194_383_0.jpg)
218
+
219
+ Figure 7: Results of Study 3. (a) Mean browsing start time, (b) browsing time, and (c) selection time. Error bars: 95% CI.
220
+
221
+ #### 6.3.1 Error trials, outliers, and trial time
222
+
223
+ We marked a trial as an error trial if participants answered incorrectly. Participants selected wrong answers in 41 trials (5.38%): 24 with the Touch interface (3.15%) and 17 with FaceUI (2.23%). A Wilcoxon Signed-Rank test showed no difference between the two Interfaces. To analyze the times (browsing start time, browsing time, and selection time), we first removed all erroneous trials and then removed eight outlier trials with a total trial time outside $\pm 3$ SD. Overall, participants were 13% faster with FaceUI than with the touch interface (15.2s with FaceUI vs. 17.2s with touch). We used repeated measures ANOVA and Bonferroni-adjusted post-hoc pairwise comparisons to analyze the times.
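+
+ As a side note for replication, the $\pm 3$ SD outlier filter takes only a few lines. The sketch below is our own minimal interpretation; the paper does not state whether the SD was computed per condition or over all trials.
+
+ ```kotlin
+ import kotlin.math.abs
+ import kotlin.math.sqrt
+
+ // Remove trials whose total time falls outside mean ± 3 SD.
+ fun removeOutliers(trialTimes: List<Double>): List<Double> {
+     val mean = trialTimes.average()
+     val sd = sqrt(trialTimes.map { (it - mean) * (it - mean) }.average())
+     return trialTimes.filter { abs(it - mean) <= 3 * sd }
+ }
+ ```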
224
+
225
+ #### 6.3.2 Browsing start time
226
+
227
+ Figure 7a shows the browsing start time for the two Interfaces and the three Number of events conditions. There was no significant difference in browsing start time between the two Interfaces $(F_{1,11} = 0.04, p = 0.84)$; participants were slightly faster with the Touch interface (2.31s, SE 0.31) than with FaceUI (2.35s, SE 0.30). Similarly, we did not find any significant difference in browsing start time between the Number of events conditions $(F_{2,22} = 0.26, p = 0.78)$: 3, 4, and 5 events took 2.28s (SE 0.30), 2.34s (SE 0.28), and 2.38s (SE 0.31), respectively. There was no significant Interface $\times$ Number of events effect on browsing start time $(F_{2,22} = 0.61, p = 0.55)$.
228
+
229
+ #### 6.3.3 Browsing time
230
+
231
+ Figure 7b displays the browsing time for the two Interfaces and the three Number of events conditions. An RM-ANOVA showed that browsing time differed significantly across Interfaces $(F_{1,11} = 6.52, p < 0.05)$ and Number of events $(F_{2,22} = 39.55, p < 0.001)$. Bonferroni-adjusted post-hoc comparisons showed that FaceUI, at 9.4s (SE 0.56), was significantly faster than the touch interface, at 11.4s (SE 0.77). For Number of events, each pairwise comparison was significant (all p's < 0.001), with more events taking longer than fewer (3 events: 7.95s, SE 0.49; 4 events: 10.24s, SE 0.59; 5 events: 13.09s, SE 0.82). There was no significant Interface $\times$ Number of events effect on browsing time $(F_{2,22} = 1.70, p = 0.21)$.
232
+
233
+ #### 6.3.4 Selection time
234
+
235
+ Figure 7c shows the selection time. We observed no significant differences in selection time between the two Interfaces $(F_{1,11} = 0.92, p = 0.36)$ or the three Number of events conditions $(F_{2,22} = 1.28, p = 0.30)$: 3, 4, and 5 events took 1.61s (SE 0.06), 1.59s (SE 0.05), and 1.66s (SE 0.04), respectively. There was no significant Interface $\times$ Number of events effect on selection time $(F_{2,22} = 0.80, p = 0.46)$.
236
+
237
+ #### 6.3.5 Subjective Feedback
238
+
239
+ Participants had prior experience with touch interfaces and were very comfortable using smartphones. One participant mentioned: "I have been using touch interfaces on smartphones for a long time; I feel comfortable with them". We also observed a bias toward touch interfaces when we asked participants to rate the two interfaces according to their overall preference on a 5-point scale: the touch interface (mean rating 4.5, SD 0.5) was rated higher than FaceUI (mean rating 3.5, SD 1.1). However, participants acknowledged that the concept of FaceUI was entirely new and that they were not familiar with any similar concepts. One participant commented: "It's a new method and I don't have experience with it. However, it seems a potential method for operating smartphones". We believe that once such interfaces become available on commercial smartphones, people will feel comfortable using them for accessing on-screen content.
240
+
241
+ ### 6.4 Summary
242
+
243
+ Our participants had more than 9 years of prior experience with traditional touch interfaces, whereas FaceUI was a new experience for them. Despite this, the results demonstrate that FaceUI offers faster access to information than the traditional touch interface. The browsing start time and the selection time were comparable for both interfaces because participants used the same procedures (i.e., selecting dates or buttons with touch) with both. However, participants were significantly faster at browsing information with FaceUI (see Figure 7). FaceUI enables quick retrieval of spatially located virtual UIs by moving the phone through a large space, whereas touch interfaces require users to switch between windows with frequent taps and swipes, which is known to be costly [17]. Based on the study results, we believe that FaceUI can be a promising input technique to complement conventional touch input, and that this new interaction is particularly well suited to browsing multiple windows.
244
+
245
+ ## 7 DESIGN GUIDELINES
246
+
247
+ We summarize our key findings as the following design guidelines for designers of interfaces similar to FaceUI:
248
+
249
+ ![01963e6b-d02c-7c86-be81-dbbafd042d3b_8_152_151_1496_320_0.jpg](images/01963e6b-d02c-7c86-be81-dbbafd042d3b_8_152_151_1496_320_0.jpg)
250
+
251
+ Figure 8: FaceUI-enabled applications. A user (a-b) browses through images by moving the phone along the horizontal and vertical directions; (c) scrolls through messages to check message history by moving the phone vertically; and (d) navigates a map by moving the phone.
252
+
253
+ ### 7.1 Distance
254
+
255
+ We found that participants preferred moving their hands within 40 cm of the face. Using this space for accessing virtual UIs will also help minimize concerns related to arm fatigue. Thus, we recommend that designers place UIs in the mid-air space within 40 cm of users' faces.
256
+
257
+ ### 7.2 Direction and Region
258
+
259
+ Results indicate that participants preferred the horizontal over the vertical direction for moving the phone to access items. Designers should emphasise placing items in this direction. In addition, participants reported difficulties accessing items in the up region. Thus, limited or no items should be placed in this region.
260
+
261
+ ### 7.3 Viewing Angle
262
+
263
+ Items should be placed within a comfortable viewing angle, as users' performance degrades significantly once targets are placed far outside comfortable viewing angles. Caution is needed especially when placing items at extreme angles, which can cause eyestrain. Results from our study suggest placing items between $-40^\circ$ (left) and $+40^\circ$ (right) in the horizontal direction, and between $+20^\circ$ (up) and $-30^\circ$ (down) in the vertical direction.
264
+
265
+ ### 7.4 Mid-Air Space for Browsing-Intensive Tasks
266
+
267
+ Study 3 showed that mid-air space is more effective for browsing through UIs than the traditional touch interface, primarily because of the minimal switching costs (e.g., small device movements) involved in navigating between UIs with FaceUI. Therefore, we suggest that designers consider placing UIs in mid-air space for browsing-intensive tasks in FaceUI-enabled interfaces.
268
+
269
+ ## 8 FACEUI ENABLED APPLICATIONS
270
+
271
+ We designed the following three applications to demonstrate how face-centered spatial user interfaces can be used on an off-the-shelf smartphone.
272
+
273
+ ### 8.1 Image Browser
274
+
275
+ The FaceUI implementation in Study 3 did not consider a scenario where items are located in a grid. However, many FaceUI-enabled applications could benefit from such an item arrangement. Consequently, we developed an image browsing app that offloads a set of images onto a $5 \times 3$ grid in the mid-air space in front of the user. Users can browse images by moving the device in the horizontal and vertical directions. While browsing the images, users can touch the screen to access further details about an image. Figure 8a-b demonstrates the app scenario, where a user browses the images by moving the phone along the horizontal and vertical directions.
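+
+ A straightforward way to realize such a grid is to quantize the tracked yaw and pitch angles independently. The sketch below is a minimal illustration under our own assumptions: the names are hypothetical, and we reuse the comfortable viewing ranges recommended above as the grid's angular extent.
+
+ ```kotlin
+ // Map tracked angles to a cell in the 5x3 image grid: yaw -> column, pitch -> row.
+ // Extents follow the paper's viewing-angle guidelines (±40° horizontal,
+ // +20°/-30° vertical); the mapping itself is our assumption.
+ fun gridCell(yawDeg: Float, pitchDeg: Float): Pair<Int, Int> {
+     val col = (((yawDeg.coerceIn(-40f, 40f) + 40f) / 80f) * 5).toInt().coerceAtMost(4)
+     val row = (((pitchDeg.coerceIn(-30f, 20f) + 30f) / 50f) * 3).toInt().coerceAtMost(2)
+     return row to col
+ }
+ ```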
276
+
277
+ ### 8.2 Message History
278
+
279
+ Scrolling through items with touch interfaces is cumbersome and time-consuming, especially in one-handed interaction [28]. We developed an application that leverages face-centered UIs to scroll through messages in a messenger app with one hand. In our implementation, the app allows users to scroll through messages by moving the phone vertically between $+20^\circ$ (up) and $-30^\circ$ (down) (Figure 8c). Clutching, triggered by tapping on the screen, can be used to browse an extended message history.
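+
+ The sketch below illustrates one plausible clutching mechanism (all names and the gain value are our own assumptions; the paper does not detail its implementation): a tap re-anchors the current pitch, so the limited angular range can cover an arbitrarily long list.
+
+ ```kotlin
+ // Vertical phone movement scrolls the list; a tap "clutches" by re-anchoring
+ // the current pitch, letting the user cover a long history in several strokes.
+ class PitchScroller(private val pixelsPerDegree: Float = 40f) {
+     private var anchorPitchDeg = 0f
+     private var anchorOffsetPx = 0f
+
+     fun clutch(currentPitchDeg: Float, currentOffsetPx: Float) {  // called on tap
+         anchorPitchDeg = currentPitchDeg
+         anchorOffsetPx = currentOffsetPx
+     }
+
+     fun offsetFor(pitchDeg: Float): Float =
+         anchorOffsetPx + (pitchDeg.coerceIn(-30f, 20f) - anchorPitchDeg) * pixelsPerDegree
+ }
+ ```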
280
+
281
+ ### 8.3 Map Navigation
282
+
283
+ We designed a FaceUI-enabled map application in which users move the phone in mid-air to zoom and pan the map. Phone movements to the left/right/up/down (relative to the face) are mapped to map movements in the same direction (Figure 8d). Map zoom is mapped to the distance from the face: the smaller the distance, the more the map is zoomed in.
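+
+ As a minimal sketch of this mapping (the constants and names are our own assumptions; the paper does not report gain values):
+
+ ```kotlin
+ const val PAN_GAIN_PX_PER_DEG = 20f   // hypothetical pan gain
+ const val REF_DISTANCE_CM = 40f       // hypothetical distance at which zoom == 1.0
+
+ // Pan follows yaw/pitch; zoom grows as the phone approaches the face.
+ fun mapTransform(yawDeg: Float, pitchDeg: Float, distanceCm: Float): Triple<Float, Float, Float> {
+     val panX = yawDeg * PAN_GAIN_PX_PER_DEG
+     val panY = -pitchDeg * PAN_GAIN_PX_PER_DEG
+     val zoom = (REF_DISTANCE_CM / distanceCm.coerceAtLeast(5f)).coerceIn(0.5f, 4f)
+     return Triple(panX, panY, zoom)
+ }
+ ```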
284
+
285
+ ## 9 LIMITATION AND FUTURE WORK
286
+
287
+ FaceUI requires users to move the phone in mid-air, which may cause eye and arm fatigue with prolonged use; further investigation is needed to explore ways to minimize such fatigue. In addition, using FaceUI in public spaces may trigger feelings such as embarrassment or discomfort, since the hand movements may attract bystanders' attention. Thus, it is also important to carry out studies exploring the social acceptance of FaceUI in public and private spaces. We asked participants to keep their head static and move only the phone during the studies; further studies can explore how to combine head and smartphone movements to access spatial UIs. While FaceUI is in use, on-screen information can be viewed by surrounding people, raising privacy concerns. Future studies need to investigate users' and bystanders' privacy concerns and explore approaches such as software- and hardware-based privacy filters to keep the information visible only to the user. We only investigated the performance of FaceUI for accessing windows located in the horizontal direction; further investigations are needed to explore natural delimiters for switching between FaceUI and touch, and the performance of FaceUI-enabled applications where windows are arranged in both the horizontal and vertical directions. We also foresee future research exploring ways to reduce switching between UIs in traditional touch interfaces. Lastly, future work needs to examine the performance of FaceUI in different usage contexts, such as standing, sitting, or walking in public and private spaces.
288
+
289
+ ## 10 CONCLUSION
290
+
291
+ We have presented FaceUI, a novel approach that leverages mid-air space to access face-centered spatial user interfaces. Through two user studies, we first explored factors that influence the design and performance of FaceUI. Based on the results, we designed a FaceUI-based calendar app and compared users' performance with it to their performance with a touch-based calendar interface. Results showed that FaceUI is a promising approach that enables faster access to UIs than traditional touch interfaces.
292
+
293
+ ## REFERENCES
294
+
295
+ [1] What is the Smart Scroll™ feature on my Samsung Galaxy Alpha®? https://www.samsung.com/za/support/mobile-devices/what-is-the-smart-scroll-feature-on-my-samsung-galaxy-alpha/, 2018. [Online; accessed March 26, 2022].
298
+
299
+ [2] What is the Smart Rotation feature on my Samsung Galaxy S5? https://www.samsung.com/za/support/mobile-devices/what-is-the-smart-rotation-feature-on-my-samsung-galaxy-s5/, 2021. [Online; accessed March 26, 2022].
300
+
301
+ [3] D. Ahlström, K. Hasan, and P. Irani. Are you comfortable doing that? acceptance studies of around-device gestures in and for public settings. In Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '14, p. 193-202. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2628363.2628381
302
+
303
+ [4] T. Babic, F. Perteneder, H. Reiterer, and M. Haller. Simo: Interactions with distant displays by smartphones with simultaneous face and world tracking. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI EA '20, p. 1-12. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3334480.3382962
304
+
305
+ [5] T. Babic, H. Reiterer, and M. Haller. Gesturedrawer: One-handed interaction technique for spatial user-defined imaginary interfaces. In Proceedings of the 5th Symposium on Spatial User Interaction, SUI '17, p. 128-137. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3131277.3132185
306
+
307
+ [6] A. Butler, S. Izadi, and S. Hodges. Sidesight: Multi-"touch" interaction around small devices. In Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology, UIST '08, p. 201-204. Association for Computing Machinery, New York, NY, USA, 2008. doi: 10.1145/1449715.1449746
308
+
309
+ [7] X. A. Chen, J. Schwarz, C. Harrison, J. Mankoff, and S. E. Hudson. Air+touch: Interweaving touch & in-air gestures. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, UIST '14, p. 519-525. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2642918.2647392
310
+
311
+ [8] L.-P. Cheng, F.-I. Hsiao, Y.-T. Liu, and M. Y. Chen. IRotate: Automatic Screen Rotation Based on Face Orientation, p. 2203-2210. Association for Computing Machinery, New York, NY, USA, 2012.
312
+
313
+ [9] B. M. Ens, R. Finnegan, and P. P. Irani. The personal cockpit: A spatial interface for effective task switching on head-worn displays. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '14, p. 3171-3180. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2556288.2557058
314
+
315
+ [10] Google. Face Detection — ML Kit. https://developers.google.com/ml-kit/vision/face-detection, 2021. [Online; accessed March 26, 2022].
316
+
317
+ [11] S. Gustafson, D. Bierwirth, and P. Baudisch. Imaginary interfaces: Spatial interaction with empty hands and without visual feedback. In Proceedings of the 23rd Annual ACM Symposium on User Interface Software and Technology, UIST '10, p. 3-12. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10.1145/1866029.1866033
318
+
319
+ [12] S. Gustafson, C. Holz, and P. Baudisch. Imaginary phone: Learning imaginary interfaces by transferring spatial memory from a familiar device. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, UIST '11, p. 283-292. Association for Computing Machinery, New York, NY, USA, 2011. doi: 10.1145/2047196.2047233
320
+
321
+ [13] S. G. Gustafson, B. Rabe, and P. M. Baudisch. Understanding palm-based imaginary interfaces: The role of visual and tactile cues when browsing. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, p. 889-898. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2470654.2466114
322
+
323
+ [14] C. Harrison and S. E. Hudson. Abracadabra: Wireless, high-precision, and unpowered finger input for very small mobile devices. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology, UIST '09, p. 121-124. Association for Computing Machinery, New York, NY, USA, 2009. doi: 10.1145/1622176.1622199
326
+
327
+ [15] C. Harrison, S. Ramamurthy, and S. E. Hudson. On-body interaction: Armed and dangerous. In Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction, TEI '12, p. 69-76. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2148131.2148148
328
+
329
+ [16] C. Harrison, D. Tan, and D. Morris. Skinput: Appropriating the body as an input surface. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '10, p. 453-462. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10.1145/1753326.1753394
330
+
331
+ [17] K. Hasan, D. Ahlström, and P. Irani. Ad-binning: Leveraging around device space for storing, browsing and retrieving mobile device content. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '13, p. 899-908. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2470654.2466115
332
+
333
+ [18] K. Hasan, D. Ahlström, and P. P. Irani. Comparing direct off-screen pointing, peephole, and flick & pinch interaction for map navigation. In Proceedings of the 3rd ACM Symposium on Spatial User Interaction, SUI '15, p. 99-102. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2788940.2788957
334
+
335
+ [19] K. Hasan, D. Ahlström, J. Kim, and P. Irani. AirPanes: Two-Handed Around-Device Interaction for Pane Switching on Smartphones, p. 679-691. Association for Computing Machinery, New York, NY, USA, 2017.
336
+
337
+ [20] J. D. Hincapié-Ramos, X. Guo, P. Moghadasian, and P. Irani. Consumed endurance: A metric to quantify arm fatigue of mid-air interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '14, p. 1063-1072. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2556288.2557130
338
+
339
+ [21] B. Jones, R. Sodhi, D. Forsyth, B. Bailey, and G. Maciocci. Around device interaction for multiscale navigation. In Proceedings of the 14th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '12, p. 83-92. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2371574.2371589
340
+
341
+ [22] D. Kim, K. Park, and G. Lee. OddEyeCam: A Sensing Technique for Body-Centric Peephole Interaction Using WFoV RGB and NFoV Depth Cameras, p. 85-97. Association for Computing Machinery, New York, NY, USA, 2020.
342
+
343
+ [23] S. Kratz and M. Rohs. Hoverflow: Exploring around-device interaction with ir distance sensors. In Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '09. Association for Computing Machinery, New York, NY, USA, 2009. doi: 10.1145/1613858.1613912
344
+
345
+ [24] S. Kratz, M. Rohs, D. Guse, J. Müller, G. Bailly, and M. Nischt. Palmspace: Continuous around-device gestures vs. multitouch for 3D rotation tasks on mobile devices. In Proceedings of the International Working Conference on Advanced Visual Interfaces, AVI '12, p. 181-188. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2254556.2254590
346
+
347
+ [25] M. Kumar and T. Winograd. Gaze-enhanced scrolling techniques. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, UIST '07, p. 213-216. Association for Computing Machinery, New York, NY, USA, 2007. doi: 10.1145/1294211.1294249
348
+
349
+ [26] H. V. Le, T. Kosch, P. Bader, S. Mayer, and N. Henze. PalmTouch: Using the Palm as an Additional Input Modality on Commodity Smart-phones, p. 1-13. Association for Computing Machinery, New York, NY, USA, 2018.
350
+
351
+ [27] F. C. Y. Li, D. Dearman, and K. N. Truong. Virtual shelves: Interactions with orientation aware devices. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology, UIST '09, p. 125-128. Association for Computing Machinery, New York, NY, USA, 2009. doi: 10.1145/1622176.1622200
354
+
355
+ [28] C. Liu, C. Liu, H. Mao, and W. Su. Tilt-scrolling: A comparative study of scrolling techniques for mobile devices. In D.-S. Huang, Z.-K. Huang, and A. Hussain, eds., Intelligent Computing Methodologies, pp. 189-200. Springer International Publishing, Cham, 2019.
358
+
359
+ [29] P. Lubos, G. Bruder, O. Ariza, and F. Steinicke. Touching the sphere: Leveraging joint-centered kinespheres for spatial user interaction. In Proceedings of the 2016 Symposium on Spatial User Interaction, SUI '16, p. 13-22. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2983310.2985753
360
+
361
+ [30] T. Niikura, Y. Hirobe, A. Cassinelli, Y. Watanabe, T. Komuro, and M. Ishikawa. In-air typing interface for mobile devices with vibration feedback. In ACM SIGGRAPH 2010 Emerging Technologies, SIGGRAPH '10. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10.1145/1836821.1836836
362
+
363
+ [31] S. Rustagi, A. Garg, P. R. Anand, R. Kumar, Y. Kumar, and R. R. Shah. Touchless typing using head movement-based gestures. In 2020 IEEE Sixth International Conference on Multimedia Big Data (BigMM), pp. 112-119, 2020. doi: 10.1109/BigMM50055.2020.00025
364
+
365
+ [32] K.-K. Shieh and D.-S. Lee. Preferred viewing distance and screen angle of electronic paper displays. Applied Ergonomics, 38(5):601-608, 2007. doi: 10.1016/j.apergo.2006.06.008
366
+
367
+ [33] S.-Y. Teng, M.-H. Chen, and Y.-T. Lin. Way out: A multi-layer panorama mobile game using around-body interactions. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA '17, p. 230-233. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3027063.3048410
368
+
369
+ [34] C.-Y. Wang, W.-C. Chu, P.-T. Chiu, M.-C. Hsiu, Y.-H. Chiang, and M. Y. Chen. Palmtype: Using palms as keyboards for smart glasses. In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI '15, p. 153-160. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2785830.2785886
370
+
371
+ [35] Y. Yan, C. Yu, X. Ma, S. Huang, H. Iqbal, and Y. Shi. Eyes-Free Target Acquisition in Interaction Space around the Body for Virtual Reality, p. 1-13. Association for Computing Machinery, New York, NY, USA, 2018.
372
+
373
+ [36] X. Yang, C.-W. You, H. Lu, M. Lin, N. D. Lane, and A. T. Campbell. Visage: A face interpretation engine for smartphone applications. In D. Uhler, K. Mehta, and J. L. Wong, eds., Mobile Computing, Applications, and Services, pp. 149-168. Springer Berlin Heidelberg, Berlin, Heidelberg, 2013.
374
+
375
+ [37] X.-D. Yang, K. Hasan, N. Bruce, and P. Irani. Surround-see: Enabling peripheral vision on smartphones during active use. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, UIST '13, p. 291-300. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2501988.2502049
376
+
377
+ [38] K.-P. Yee. Peephole displays: Pen interaction on spatially aware handheld computers. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '03, p. 1-8. Association for Computing Machinery, New York, NY, USA, 2003. doi: 10.1145/642611.642613
378
+
379
+ [39] C. Zhang, A. Bedri, G. Reyes, B. Bercik, O. T. Inan, T. E. Starner, and G. D. Abowd. Tapskin: Recognizing on-skin input for smartwatches. In Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces, ISS '16, p. 13-22. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2992154.2992187
380
+
381
+ [40] Y. Zhang, J. Zhou, G. Laput, and C. Harrison. Skintrack: Using the body as an electrical waveguide for continuous finger tracking on the skin. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, p. 1491-1503. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2858036.2858082
382
+
383
+ [41] J. Zhao, R. Jota, D. J. Wigdor, and R. Balakrishnan. Augmenting mobile phone interaction with face-engaged gestures. CoRR, abs/1610.00214, 2016.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/SMxl-K4pG9/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,287 @@
1
+ § FACEUI: LEVERAGING FRONT-FACING CAMERA INPUT TO ACCESS MID-AIR SPATIAL INTERFACES ON SMARTPHONES.
2
+
3
+ First Last*
4
+
5
+ Author Affiliation
6
+
7
+ First Last†
8
+
9
+ Author Affiliation
10
+
11
+ First Last‡
12
+
13
+ Author Affiliation
14
+
15
16
+
17
+ Figure 1: A FaceUI-based calendar app. A user can access calendar events on a date by moving the phone in mid-air around their face.
18
+
19
+ § ABSTRACT
20
+
21
+ We present FaceUI, a novel strategy to access mid-air face-centered spatial interfaces with off-the-shelf smartphones. FaceUI uses the smartphone's front-facing camera to track the phone's mid-air position relative to the user's face. This self-contained tracking mechanism opens up new opportunities to enable mid-air interactions on off-the-shelf smartphones. We demonstrate one possibility that leverages the empty mid-air space in front of the user to accommodate virtual windows which the user can browse by moving the phone in the space in front of their face. We inform our implementation of FaceUI by first studying essential design factors, such as the comfortable face-to-phone distance range and appropriate viewing angles for browsing mid-air windows and visually accessing their content. After that, we compare users' performance with FaceUI to their performance when using a touch-based interface in an analytic task that requires browsing multiple windows. We find that FaceUI offers better performance than the traditional touch-based interface. We conclude with recommendations for the design and use of face-centered mid-air interfaces on smartphones.
22
+
23
+ Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods
24
+
25
+ § 1 INTRODUCTION
26
+
27
+ When using touch-based input on a smartphone, people typically hold the phone more or less in front of the face. This posture allows for easy visual access to screen content. Clearly, while interacting, having the phone in a stationary position seems preferable, and, accordingly, most smartphone interfaces assume a stable in-front-of-the-face posture. However, many people are also very skilled in sub-optimal situations when the phone is not held still in front of their face, such as texting while walking. In this paper, we explore how to design smartphone interfaces that require the user to deliberately move the phone in the space in front of the face as part of the interaction. We use the high-resolution front-facing camera on a standard smartphone together with machine learning algorithms [10] to track the spatial location of the phone relative to the user's face. This allows us to integrate the large empty space in front of the user into new spatial interactions and user interfaces.
28
+
29
+ Prior research has explored ways to extend a smartphone's input capabilities by shifting the interaction space into the empty in-air space surrounding users' bodies or their mobile devices. For instance, Virtual Shelves [27] allows users to point their hand inside a hemisphere in front of their body to access a set of discrete, virtual, and invisible items, relying heavily on the users' spatial recall. Similarly, the Imaginary Interface [11] is a mid-air interface in front of the user's body that can be used for pointing and drawing activities. In more recent work, Hasan et al. [19] present the AirPane system and demonstrate how the mid-air space surrounding a mobile device can be used for browsing information in an e-commerce application. These and most other prior projects that leverage around-body or around-device interactions rely on external tracking systems, which is not practical in real-life usage situations. Furthermore, most earlier projects are also limited in that they either do not provide any visual representation of the in-air space (and its interaction objects) at all, or they provide very limited visual information that is typically decoupled from the actual location within the in-air space.
30
+
31
+ In this paper, we present FaceUI, an approach that avoids these shortcomings. FaceUI is a novel strategy that leverages the mid-air space in front of the user. It uses a smartphone's built-in front-facing camera to detect and track the phone's position relative to the user's face. This self-contained tracking approach allows visual access to the in-air space, since the screen content is updated depending on the phone's in-air location and the virtual content at that location. The concept is visualized in Figure 1, where a user navigates a FaceUI-based calendar.
32
+
33
+ To the best of our knowledge, ways to leverage face-centered in-air spaces to access virtual user interfaces (UIs) with off-the-shelf smartphones have not been explored before. With two user studies, we first investigate how the in-air space can be structured to accommodate virtual UIs used for information exploration on smartphones. We identify the comfortable phone-to-face distance range for accessing virtual UIs in the in-air space and suitable viewing angles for browsing and inspecting content that resides in the in-air space. We use this knowledge to design a FaceUI-based calendar application. In a third user study, we evaluate users' performance in a calendar browsing task, comparing our FaceUI-based calendar with a touch-based calendar interface. Our results show that the FaceUI approach can offer considerable advantages compared to traditional touch-based interfaces. We end our exploration by showcasing further FaceUI-based applications.
34
+
35
+ *e-mail: author@email.com
36
+
37
+ ${}^{ \dagger }$ e-mail: author@email.com
38
+
39
+ ‡ e-mail: author@email.com
40
+
41
+ Accordingly, our contributions include: 1) FaceUI, a novel face-centered spatial in-air interface approach for off-the-shelf smartphones; 2) an exploration of suitable design parameters for FaceUI-based applications; 3) a performance comparison between a FaceUI application and a standard touch interface in an analytic task; and 4) a showcase of further promising FaceUI-enabled interactive applications that demonstrate the potential of face-centered smartphone interfaces.
42
+
43
+ § 2 BACKGROUND AND RELATED WORK
44
+
45
+ We review prior work that has explored ways to design spatial interfaces, interaction spaces, and interaction techniques. These earlier projects inspired the design of our face-centered spatial user interface, FaceUI. The previous research most closely related to components of FaceUI falls mainly under around-device interaction, on- and around-body interaction, and face-centered input.
46
+
47
+ § 2.1 AROUND-DEVICE INTERACTION
48
+
49
+ There has been substantial prior research exploring the use of the mid-air space around mobile devices. Researchers have demonstrated that this space can be used for novel interactions, such as virtual content browsing and selection [14, 17, 19, 23, 37], map navigation [18, 21], mode switching [21], and typing [30]. For instance, AD-Binning [17] leveraged the empty 2D space around a smartphone to off-load content into the space and browse it there, and further showed that the mid-air space can facilitate faster access to items than standard touch input. In similar work, Hasan et al. [19] showed that the 3D in-air space around a device can be used for browsing m-commerce applications. Researchers have also investigated ways to track users' activities around the device with commercial tracking solutions (e.g., Vicon tracking [18, 19]) or with camera- or sensor-based solutions (e.g., depth cameras [7, 24], distance sensors [6, 23]). Though these solutions offer precise motion capture data, they require the environment, the user, or the device to be instrumented with sensors, which makes mobile devices less practical to use in public spaces.
50
+
51
+ § 2.2 ON- AND AROUND-BODY INTERACTION
52
+
53
+ Prior work investigated ways to use the on- and around-body space for designing novel interactions with devices [3, 5, 9, 27]. For instance, researchers [11-13] explored the use of on-body locations such as the palm to access on-screen content. Imaginary Phone [12] used the user's palm as an input surface for the iPhone. In similar work, Gustafson et al. [13] investigated palm-based imaginary interfaces for supporting visually impaired users. Imaginary Interfaces [11] allowed users to perform spatial interaction on the empty palm without visual feedback. In addition, the palm has been used to trigger pre-defined functions [26], to perform 3D rotation [24], and as an input space for augmenting keyboards [34]. Similarly, researchers explored the skin as an interactive touch surface [16, 39, 40], commonly using external depth cameras to detect and track hand and finger activities such as tapping and sliding on body parts.
54
+
55
+ Researchers have also investigated using the mid-air space around the body as a novel interaction space. For instance, Virtual Shelves [27] demonstrated that the mid-air space in front of users can be used to trigger shortcuts; a study showed that users could recall shortcuts by moving their phone into a $7 \times 4$ grid on a circular hemisphere in front of them. Yee et al. [38] designed a solution allowing users to move a mobile phone to different locations around the body, changing the on-screen content based on the device's location relative to the body. Ens et al. [9] designed the Personal Cockpit, which leverages the around-body space to display virtual windows on head-worn displays. In similar work, Babic et al. [5] explored Gesture Drawer, a one-handed interaction technique allowing users to define imaginary interfaces and interact with them by moving their hand. Researchers have also investigated mid-air spatial interfaces specific to applications in mixed reality [9, 29, 35], games [33], and workspace navigation [22]. For instance, Lubos et al. [29] introduced kinespheres, a mixed-reality-based body-centric spatial interface within arm's reach; they received positive feedback from users on using their method compared to traditional head-centered interaction in mixed reality. Yan et al. [35] explored an eyes-free target acquisition technique for mixed reality by placing targets in the around-body space. Way Out [33] is a game scenario in which players navigate through an omni-directional panorama scene by moving the device around the body using the smartphone's built-in motion sensors. In recent work, Kim et al. [22] demonstrated image and map zooming using the vision-based interface OddEyeCam, which detects and tracks the location of the mobile phone with respect to the user's body using external sensors such as wide-view RGB cameras and narrow-view depth cameras.
56
+
57
+ § 2.3 FACE-CENTERED INPUT
58
+
59
+ Prior research investigated using head and face movements as input to design new face-centered interactions on devices. For instance, Zhao et al. [41] used a combination of facial movements, device motion, and touch to design face-centered interaction techniques on smartphones. Kumar et al. [25] leveraged eye gaze to scroll mobile phone content. Yang et al. [36] used a face interpretation engine, based on the phone's front-facing camera and built-in motion sensors, to enable face-aware applications for smartphones. Similarly, Babic et al. [4] designed Simo, which uses head movement as input for pointing on a distant large display; instead of external cameras, they used the smartphone's front-facing camera to detect face orientation. Rustagi et al. [31] explored touchless typing using head gestures detected by the smartphone's front-facing camera to type on an on-screen QWERTY keyboard. The smartphone's front-facing camera has also been used to design new strategies, such as rotating on-screen content on mobile devices by detecting direction changes of objects in the camera view [1, 2, 8].
60
+
61
+ The manifold opportunities of spatial mid-air interfaces, as demonstrated by earlier projects, inspire us to continue on this promising path. However, in contrast to most previous projects, which use external sensors or cameras to identify interaction gestures, we are interested in using a self-contained tracking mechanism to detect in-air movements. Similar to a few face-tracking systems [4, 31, 41], our FaceUI approach also uses the front-facing camera of a smartphone to detect changes in the relative positions of the user's face and the phone. However, FaceUI differs from earlier systems in that it does not rely on any sensor other than the smartphone's front-facing camera. Furthermore, we aim at interactions where the user keeps the head still while moving the phone; earlier approaches [4, 31, 41] require the opposite, moving the head while holding the phone in a fixed position. In this way, we intend to create the sensation of a hemispherical interaction space that is anchored in front of the user's face but moves along with the user (through the self-contained tracking). In this first exploration of such an interaction hemisphere, we focus on virtual application windows that are located inside the hemisphere. When the user moves the smartphone to a location inside the hemisphere, the content of the virtual window that resides at that location is displayed on the smartphone's screen; when the user re-positions the smartphone, the screen displays the content of the virtual window at the new position.
62
+
63
64
+
65
+ Figure 2: Study 1 task. Using (a) horizontal and (b) vertical mid-air movements to select invisible in-air items. (c) Task prompt. (d) A participant holding the phone in the neutral start position, straight in front of the face.
66
+
67
+ Next, we describe a few central aspects of the face-detection software and the setup we used in our user studies. After that, we present our three studies in turn.
68
+
69
+ § 3 FACE DETECTION SOFTWARE AND STUDY SETUP
70
+
71
+ The self-contained tracking software we developed for our FaceUI approach is based on the Face Detection API [10] in Google's ML Kit (machine learning for mobile developers). The API provides a convenient and reliable way to track the position and orientation of a smartphone relative to the user's face when the front-facing camera is used. Among the available face-tracking measures, our software relies on yaw data (the smartphone's movements to the left or right relative to the detected face), pitch data (the smartphone's up and down movements in the vertical direction relative to the detected face), and distance data (the current distance between the detected face and the lens of the front-facing camera). Our software does not use any roll-related information. The Face Detection API delivers $0^\circ$ for both yaw and pitch when the user holds the phone straight in front of the face.
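+
+ To illustrate how such tracking can be set up, the sketch below shows the basic ML Kit pattern. The yaw and pitch readings come from the API's head Euler angles; ML Kit does not expose a face-to-camera distance directly, so the distance proxy shown here (detected face width in pixels scaled by a per-user calibration constant) is our own assumption and not necessarily the method used in the paper.
+
+ ```kotlin
+ import com.google.mlkit.vision.common.InputImage
+ import com.google.mlkit.vision.face.FaceDetection
+ import com.google.mlkit.vision.face.FaceDetectorOptions
+
+ val detector = FaceDetection.getClient(
+     FaceDetectorOptions.Builder()
+         .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_FAST)
+         .build()
+ )
+
+ // Hypothetical per-user constant relating face width in pixels to distance in cm.
+ const val CALIBRATION = 4000f
+
+ fun trackFrame(image: InputImage) {
+     detector.process(image)
+         .addOnSuccessListener { faces ->
+             val face = faces.firstOrNull() ?: return@addOnSuccessListener
+             val yawDeg = face.headEulerAngleY    // 0° when the face is straight ahead
+             val pitchDeg = face.headEulerAngleX  // 0° when the face is straight ahead
+             val distanceCm = CALIBRATION / face.boundingBox.width()  // assumed proxy
+             // Feed yawDeg, pitchDeg, and distanceCm into the spatial UI logic.
+         }
+ }
+ ```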
72
+
73
+ Restrictions related to COVID-19 prevented us from meeting our study participants face-to-face. Instead, we conducted our studies remotely using teleconferencing software. Accordingly, our participants were required to have a laptop or a desktop computer with a stable Internet connection, a microphone, loudspeakers, and a webcam. In the studies, participants used their own smartphones to run the study software, which was designed for any phone running Android 4.2 to 11. Participants received the study software (i.e., the apk file) and all necessary instructions over email, and we guided them through the installation process at the beginning of the study session. The data logged during a study session was automatically transferred from the participant's phone to a Cloud Firestore database when the participant had completed the last study task.
74
+
75
+ We ran all of our three studies remotely, where participants used the study apps on their smartphones in the wild as opposed to the controlled lab environment. All participants sat in front of their web-cam while completing the study tasks. In each study, a study session lasted approximately 45 minutes, including instructions, practice trials, timed study trials, breaks, and completion of questionnaires. As the study apps were designed for the Android platform, we only recruited participants who possessed an Android smartphone.
76
+
77
+ § 4 STUDY 1: EXPLORING DIRECTION AND DISTANCE
78
+
79
+ Prior research has reported arm fatigue and 'heavy arm' issues related to mid-air interactions [15], and that working with a bent arm in mid-air is more comfortable and less strenuous than working with a stretched arm [20]. Since FaceUI involves mid-air hand movements, arm fatigue is a potential problem. Moreover, with FaceUI, the mid-air movements need to be constrained such that the user's face stays inside the front camera's field of view.
80
+
81
+ With FaceUI, we envision the mid-air interaction space as a semicircular space in front of the user's face. Through a pilot test (with five participants) we found that the face tracking works best when the phone is between 5 and 80 centimetres away from the user's face and the user moves the phone within a longitudinal range of $90^\circ$ (from $-45^\circ$ to the left of the user's face to $+45^\circ$ to the right) and a latitudinal range of $70^\circ$ (from $-35^\circ$ below the user's nose to $+35^\circ$ above it). Whereas we know that movements inside this space are accurately tracked, we do not know how accurately, quickly, and comfortably people can navigate around in this mid-air space. Accordingly, in our first study we chart out the suitable dimensions and granularity of the mid-air interaction space for FaceUI.
82
+
83
+ § 4.1 STUDY DESIGN AND STUDY TASK
84
+
85
+ We oriented the study task and study design of our first study according to previous projects that have explored the dimensions and the granularity of the mid-air space in front of the user, e.g., the Virtual Shelves [27] and AD-Binning [17] projects. We used a simple item selection task where a trial consists of moving the smartphone to a specified position in mid-air to select the virtual item at that position. We investigated horizontal movements and vertical movements when the phone is close or far from the user's face.
86
+
87
+ Figure 2 visualizes the study task and setup. We divided the mid-air space along the horizontal into seven equally wide one-dimensional regions, or items, each $12.85^\circ$ 'wide' (Figure 2a). We used five one-dimensional regions, or items, in the vertical direction, each $12.85^\circ$ 'high' (Figure 2b). From a user's perspective, the size of these items in the air in front of the face depends on the distance between the phone and the face: the further away from the face, the larger an item becomes. Accordingly, we decided to also test movements (horizontal and vertical) performed close to the face and far away from the face.
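+
+ For concreteness, the sketch below shows one way this discretization can be computed from the tracked horizontal angle. It is a minimal sketch under our own assumptions, with the seven items spanning roughly $-45^\circ$ to $+45^\circ$ at $12.85^\circ$ each and item 4 straight ahead.
+
+ ```kotlin
+ // Quantize the tracked yaw angle into one of the seven 12.85°-wide items.
+ fun horizontalItem(yawDeg: Float): Int {
+     val clamped = yawDeg.coerceIn(-45f, 44.9f)     // keep the top edge inside item 7
+     return ((clamped + 45f) / 12.85f).toInt() + 1  // items numbered 1..7
+ }
+ ```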
88
+
89
+ § 4.1.1 STUDY DESIGN
90
+
91
+ With this, we arrive at two independent factors for our study: 1) movement Direction: horizontal and vertical, and 2) Distance: close and far. Close represents the distance range within which participants commonly and comfortably hold their phone when accessing on-screen content with touch; we regarded any distance beyond that range as far. However, where the comfortable range ends is likely to differ between participants (depending on arm length and preference). Therefore, it is critical to use a user-dependent threshold value rather than a common value for all participants. We calibrated the individual value for each participant at the beginning of the study session. We asked the participant to move the phone to the positions where moving it in front of the face started to feel awkward and less comfortable. Once the phone reached these locations, the study app showed the distance between the face and the phone (in centimetres) on the screen. We asked the participant to move the phone left, right, middle, up, and down and to report the displayed distance data. We calculated the participant's upper value for the 'close' (i.e., comfortable) distance in the horizontal direction by averaging the left, right, and middle values; for the vertical direction we averaged the up, down, and middle values.
92
+
93
94
+
95
+ Figure 3: Results of Study 1. Mean trial time for close and far distance in (a) the horizontal direction and (b) the vertical direction. (c) Mean error rate for the horizontal and vertical directions at close and far distances. Error bars: 95% CI.
96
+
97
+ We used a within-subjects study design. All participants performed four series of six blocks of trials, one series of blocks for each of the four Distance-Direction combinations: close-horizontal, close-vertical, far-horizontal, and far-vertical. Blocks in the horizontal direction consisted of six trials, one trial for each of the six target items (1, 2, 3, 5, 6, and 7, cf. Figure 2a) in random order. Blocks in the vertical direction consisted of four trials, one trial for each of the four target items (1, 2, 4, and 5, cf. Figure 2b) in random order. Accordingly, each participant performed 120 trials: one block series of 36 horizontal trials at close distance + one block series of 24 vertical trials at close distance and one block series of 36 horizontal trials at far distance + one block series of 24 vertical trials at far distance. Half of the participants started with the two block series at close distance and then completed the two block series at far distance; the other half used the opposite order. The order of the two direction series within a distance was random.
98
+
99
+ § 4.1.2 TASK PROCEDURE
100
+
101
+ To start a trial, the participant moved the phone to the middle region, straight in front of the face. In the horizontal direction, this corresponded to Item 4 in Figure 2a; in the vertical direction, Item 3 in Figure 2b was used as the start region. Once the participant moved the phone inside the start region, the screen turned green and displayed information for the upcoming trial, including the target prompt with the item number to select next, as shown in Figure 2c. A trial started when the participant pressed down the thumb on the screen. If the phone was moved outside the start region before pressing down with the thumb, the screen turned red and showed instructions to move the phone into the start region. A thumb-press in the start region started the trial timer. A selection was registered, and the trial timer stopped, when the thumb was released after the phone had been moved into one of the items (or regions) outside the start region. Speech output played "Correct selection" or "Wrong selection" to inform the participant whether the correct item had been selected. Erroneous trials were re-queued at a random position among the unfinished trials within the current block.
102
+
103
+ During a running trial, we relied on audio to inform participants about the current position of the phone. The app provided speech output i) when the phone entered a new item, by saying the number of the item; ii) when the participant held the phone at the wrong distance, by playing "Move the phone further away" in far conditions or "Move the phone closer" in close conditions; and iii) when the face tracking software lost track of the face, by playing "Face out of camera view". Working with audio guidance was important: in our first study, we wanted to focus on the motoric aspects and movement properties that determine the dimensions of FaceUI's interaction space. We wanted to exclude aspects that relate to how well a user can read screen content while moving the phone in mid-air, such as the size of screen content and the viewing angle and distance. We return to such visual issues in our second study.
104
+
105
+ § 4.2 PARTICIPANTS
106
+
107
+ We recruited twelve right-handed participants (mean age 27.08 years, s.d. 5.98, 6 male) via on-campus flyers and word-of-mouth. All participants were daily smartphone users.
108
+
109
+ § 4.3 RESULTS
110
+
111
+ We first report results regarding participants' comfortable phone-to-face distance (calibrated at the beginning of a study session), which served as the basis for each participant's individual threshold value separating the close distance from the far distance. After that, we report trial time, error rates, and subjective ratings.
112
+
113
+ § 4.3.1 CLOSE/FAR THRESHOLD VALUE
114
+
115
+ Across all participants, the average face-to-phone distance at which movements started to feel less comfortable was 39.31 cm (s.d. 7.61) for horizontal phone movements and 39.38 cm (s.d. 7.34) for vertical movements. This critical threshold varied considerably between participants: in the horizontal movement direction it ranged between 30 and 61 cm, and in the vertical direction between 28 and 61 cm (only one participant had values greater than 50 cm).
116
+
117
+ § 4.3.2 TRIAL TIME
118
+
119
+ The trial time analyses are based on error-free trials only. Figure 3a shows the mean trial time for each target position at both distances in the horizontal movement direction, and Figure 3b shows the corresponding results for the vertical direction. Mean trial times for close and far (across the two directions) were 2.93 sec and 2.99 sec, respectively. The overall mean trial times (across the two distances) for the horizontal and vertical directions were 3.01 sec and 2.90 sec, respectively. A $2 \times 2$ RM-ANOVA showed that there was no significant difference between the two distances $(F_{1,11} = 0.09, p = 0.76)$ or between the two directions $(F_{1,11} = 0.29, p = 0.59)$. A one-way RM-ANOVA (independent factor Block) indicated that participants became faster during the course of the study, with significantly longer trial times in the first block of trials than in the last two blocks $(F_{5,55} = 5.85, p < 0.001)$. The mean trial time decreased from 3.43 sec in Block 1 to 2.81 and 2.72 sec in Blocks 5 and 6, respectively.
120
+
121
122
+
123
+ Figure 4: Study 2 setup. (a) The green screen showing the instruction to move the phone to a $40^{\circ}$ viewing angle relative to the horizontal plane; (b) a participant holding the phone downward and (c) upward at a $40^{\circ}$ viewing angle; (d) selecting the total number of black dots on the down and up screens at a $40^{\circ}$ viewing angle.
124
+
125
+ In Figure 3 we also see a clear and expected pattern regarding the different target positions: selecting items at positions close to the start position (Position 4 for horizontal movements and Position 3 for vertical movements, cf. Figure 2) was quicker than selecting items farther away, which required longer phone movements. We see this pattern for movements in both directions and at both the close and far distances.
126
+
127
+ § 4.3.3 ERROR RATE
128
+
129
+ Figure 3c shows the mean error rates for the four distance $\times$ direction combinations. A Friedman test identified a significant difference among the combinations ($\chi^2(3, N = 12) = 8.95$, $p < 0.05$), and post-hoc Wilcoxon tests (Bonferroni-adjusted $\alpha$-level from 0.05 to 0.008) revealed that the close-vertical combination was significantly less error prone than the close-horizontal combination; there were no other pairwise differences.
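+
+ The nonparametric analysis above can be reproduced in outline with scipy; this sketch uses synthetic per-participant error rates (our own illustrative data):
+
+ ```python
+ from itertools import combinations
+ import numpy as np
+ from scipy.stats import friedmanchisquare, wilcoxon
+
+ rng = np.random.default_rng(1)
+ conds = ["close-horizontal", "close-vertical",
+          "far-horizontal", "far-vertical"]
+ # one error rate per participant (n=12) and condition
+ errs = {c: rng.uniform(0.0, 0.15, size=12) for c in conds}
+
+ chi2, p = friedmanchisquare(*errs.values())
+ print(f"Friedman: chi2={chi2:.2f}, p={p:.3f}")
+
+ pairs = list(combinations(conds, 2))
+ alpha = 0.05 / len(pairs)            # Bonferroni: 0.05 / 6 ~ 0.008
+ for a, b in pairs:
+     stat, p = wilcoxon(errs[a], errs[b])
+     print(f"{a} vs {b}: p={p:.3f}{' *' if p < alpha else ''}")
+ ```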
130
+
131
+ § 4.3.4 SUBJECTIVE FEEDBACK
132
+
133
+ We asked participants to rate the two directions and the two distances according to their overall preference on a 5-point scale with 1 = bad, 3 = neutral, and 5 = good. We found an unsurprising and strong preference for the close distance (mean rating 4.52) over the far distance (mean rating 1.91). Participants were less decided regarding the two movement directions: they rated the horizontal movement direction only slightly better than the vertical direction (mean rating 4.23 vs. 3.1).
134
+
135
+ § 4.4 SUMMARY
136
+
137
+ Results from the subjective feedback indicate that participants had a slight preference for horizontal movements over vertical movements. However, our analyses also revealed no significant difference between the movement directions with regard to trial time. We do see a clear and unsurprising advantage for the close distance over the far distance. Accordingly, for our future FaceUI explorations, we learn that people are sensitive to the phone-to-face distance and that FaceUI-based applications should avoid requiring users to hold the phone far from the face. Consequently, in our second study we continue to use regions along both the horizontal and vertical directions. However, we observed increased trial times for items located in certain vertical regions (e.g., Item 1) compared to others. This warrants further investigation into factors, such as visual angles, that could influence users' performance when reading screen content while holding the phone in such regions.
138
+
139
+ § 5 STUDY 2: EXPLORING TARGET REGION AND TARGET ANGLE
140
+
141
+ Application interfaces placed with FaceUI are anchored to the user's face and thus move with the head along both the horizontal and vertical regions. Therefore, to read content located to the right or left on the FaceUI, a user needs to keep the head static and move only the eyes. Prior research [32] showed that such eye movements can cause eye fatigue, pain, and tiredness. Therefore, in this study, we explored suitable viewing angles at which users can comfortably access on-screen items on smartphones.
142
+
143
+ § 5.1 PARTICIPANTS
144
+
145
+ We recruited fourteen right-handed participants (mean age 26.78 years, s.d. 6.07, 7 male) via on-campus flyers and word-of-mouth. All participants were daily smartphone users. None of the participants had participated in Study 1.
146
+
147
+ § 5.2 FACTORS
148
+
149
+ We considered the following factors in this study.
150
+
151
+ § 5.2.1 TARGET REGION
152
+
153
+ In this study, we considered placing items in two regions: vertical (up and down) and horizontal (left and right). As in the first study, the middle location was reserved as the starting point of a trial.
154
+
155
+ § 5.2.2 TARGET ANGLE
156
+
157
+ We placed sets of targets at angles in both the horizontal and vertical regions. Based on a pilot study, we chose to place items at $\pm 20^{\circ}$, $\pm 30^{\circ}$, $\pm 40^{\circ}$, and $\pm 50^{\circ}$, where positive angles indicate items in the right (or up) region and negative angles items in the left (or down) region. The pilot study showed that participants were not able to see items located above $+30^{\circ}$ in the up region. Additionally, items placed below $-40^{\circ}$ in the down region were not accessible, as the phone gets very close to the body. Therefore, we used $+20^{\circ}$ and $+30^{\circ}$ for the up region, and $-20^{\circ}$, $-30^{\circ}$, and $-40^{\circ}$ for the down region.
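+
+ For reference, the viewing angle of the phone relative to the face follows from the face-tracking data with basic trigonometry. The sketch below is a minimal Python illustration under our own coordinate conventions, not the study software:
+
+ ```python
+ import math
+
+ def viewing_angles(dx, dy, dz):
+     """dx: right(+)/left(-), dy: up(+)/down(-), dz: distance from the
+     face, all in meters. Returns (horizontal, vertical) angles in
+     degrees, with positive meaning right/up as in this study."""
+     return (math.degrees(math.atan2(dx, dz)),
+             math.degrees(math.atan2(dy, dz)))
+
+ # a phone 40 cm away and 34 cm to the right sits near +40 degrees
+ print(viewing_angles(0.34, 0.0, 0.40))   # ~ (40.4, 0.0)
+ ```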
158
+
159
+ § 5.3 PROCEDURE AND TASKS
160
+
161
+ At the beginning of a trial, the participant was required to move the phone to the middle position (straight in front of the face). As long as the phone was still outside the middle position, the screen remained red and showed instructions to move the phone to the middle position. Once the phone was inside the middle position, the screen turned green and displayed the target prompt for the next target (along with block and trial counts), as seen in Figure 4a. Participants were asked to press on the screen with their thumb and move the phone to the target angle while keeping the thumb on the screen. Tapping on the screen also started a timer. The study application then removed the on-screen instructions, replaced them with an empty black window, and kept the screen black until participants moved the phone to the target angle. We used circle-counting tasks in this study, where participants were required to count the number of circles presented in two windows located at a target angle. For instance, if the target angle was $+30^{\circ}$ in the right region, we placed two more windows above and below the vertical plane defined by the user's eyes. Participants could only see the windows once they reached the instructed region and angle. They could then move the phone up and down (for the horizontal regions) or left and right (for the vertical regions) to access the windows while keeping the phone at the target angle (Figure 4b-c). The windows contained a random number of non-overlapping black circles between 12 and 16. Participants were required to count the total number of circles seen on both screens. Once they believed they had counted all the circles on both windows, they lifted the thumb off the touchscreen. This action popped up a window containing multiple options for the sum (Figure 4d). When the participant selected the correct answer, the application stopped the trial timer, provided voice feedback on correctness, and displayed the instruction for the next trial on the screen. If the answer was incorrect, the app stopped the trial timer, provided audio feedback, and re-queued the trial at a random position among the unfinished trials within the block. Participants then moved the phone back to the middle and continued until all trials were finished. In either case, the app sent trial-related information (e.g., task completion time, correctness) to a database server.
162
+
163
164
+
165
+ Figure 5: Results of Study 2. (a) Mean reading time for the left, right, down, and up target regions and (b) for the target angles in each target region. Error bars: 95% CI.
166
+
167
+ Each participant completed six blocks of trials with each Target Region (left, right, up, and down), where one block contained one trial for each of the Target Angles. Therefore, each participant performed 78 error-free trials (24 trials for the left region, 24 for right, 12 for up, and 18 for down). The presentation order of the target regions was randomized across participants, and the angles were presented in random order. Participants were given two blocks of practice trials. After completing all trials, we collected participants' feedback on their preferences regarding Target Region and Target Angle. A study session took around 45 minutes.
168
+
169
+ § 5.4 RESULTS
170
+
171
+ Rather than analyzing the full trial time, we were interested in the time participants spent counting the circles, excluding the time spent moving the phone to the target position and answering the question. We call this time the Reading time.
172
+
173
+ § 5.4.1 READING TIME
174
+
175
+ We used repeated-measures ANOVA and post-hoc pairwise comparisons to analyze reading time. Results showed that Target Region had a significant effect on reading time ($F_{3,39} = 3.24$, $p < 0.05$). Figure 5a shows the mean reading time for all four regions: left (mean 10.40 s), right (mean 11.16 s), down (mean 9.76 s), and up (mean 13.84 s). Post-hoc pairwise comparisons showed that Down was significantly faster than Up for accessing items. No other pairwise difference was found.
176
+
177
+ We also analyzed the reading time for each target angle in each target region. Figure 5b shows the mean reading time for all target angles in each target region. Target angle in the Right region had a significant effect on reading time ($F_{3,39} = 6.71$, $p < 0.001$). Post-hoc pairwise comparisons showed that targets at the $50^{\circ}$ angle were significantly slower than targets at $20^{\circ}$; there were no other pairwise statistically significant differences. Target angle in the Left region also had a significant effect on reading time ($F_{3,39} = 7.03$, $p < 0.001$). Similar to the right region, targets at $50^{\circ}$ were significantly slower than targets at $20^{\circ}$. There were no other statistically significant differences between the angles.
178
+
179
+ Like the left and right regions, target angle in the Up region had a significant effect on reading time ($F_{3,39} = 23.39$, $p < 0.001$): targets at $30^{\circ}$ were significantly slower than targets at $20^{\circ}$. Target angle in the Down region also had a significant effect on trial time ($F_{2,26} = 21.12$, $p < 0.001$): targets at $40^{\circ}$ were significantly slower than targets at $20^{\circ}$ and $30^{\circ}$. No other statistically significant differences were observed.
180
+
181
+ § 5.4.2 SUBJECTIVE FEEDBACK
182
+
183
+ Participants rated each target region on a 5-point Likert scale. They preferred the right target region most (mean rating 3.85), followed by down (mean rating 3.78) and left (mean rating 3.07). The up target region was rated the least preferred for accessing items (mean rating 2.28).
184
+
185
+ § 5.5 SUMMARY
186
+
187
+ The results show that visually accessing screen content while the phone is positioned in the upper region (up) is slower than when the phone is positioned in any other region (down, left, or right) of the in-air space. For the target angles in each region, we see that participants' performance degrades significantly at the highest angle of each region. Accordingly, we suggest avoiding the extreme angles, $50^{\circ}$ for both right and left, $30^{\circ}$ for up, and $40^{\circ}$ for down, when designing an application that uses FaceUI.
188
+
189
+ < g r a p h i c s >
190
+
191
+ Figure 6: Calendar app interfaces. (a) A trial starts by displaying a query, e.g., "Find the number of online meetings scheduled on July 6". A tap on the start button opens a new window (b) containing the calendar dates for a month at the bottom of the screen. With the touch interface, a tap on a date shows the events scheduled on that date (at the top of the screen). A user can further inspect an event by tapping on it, which triggers a new window (c) displaying details of the event. The user can return to the previous screen by tapping the back button or swiping left. (d) With FaceUI, virtual windows containing event details are placed in front of the user and can be accessed by moving the phone in mid-air. After inspecting the event details, the user can tap a check button to open a popup window containing multiple answers.
192
+
193
+ § 6 STUDY 3: PERFORMANCE ANALYSIS OF FACEUI WITH AN ANALYTIC TASK
194
+
195
+ The previous two studies explored design factors, such as target region and target angle, that can influence users' performance with FaceUI. In this study, we evaluate a practical usage scenario in which users are required to browse multiple windows to retrieve information. To this end, we designed a calendar app and compared the performance of FaceUI with a traditional touch interface.
196
+
197
+ § 6.1 PARTICIPANTS
198
+
199
+ We recruited twelve right-handed participants (mean age 25.5, s.d. 5.23, 6 male) via on-campus flyers and word-of-mouth. All participants were daily smartphone users. None of the participants had participated in Study 1 or Study 2.
200
+
201
+ § 6.2 TASK, PROCEDURE AND DESIGN
202
+
203
+ In this study, we used an analytic task where users were required to review information on a calendar before reaching a decision. A trial starts by displaying a question along with a start button (e.g., "Find the number of online meetings scheduled on July 7"), as seen in Figure 6a. After reading the question, the user taps the start button, which starts the trial timer and opens a new window containing the calendar dates for a month (July 2021 in our case) at the bottom of the screen (Figure 6b). Once the user selects a date (e.g., July 7) with touch, the date is highlighted in green and the calendar events on that date are displayed at the top. To decide how many calendar events to add for each date, we briefly surveyed students and faculty members and found that they commonly have 3-5 events (e.g., classes or meetings) per day, excluding weekends. Consequently, we added 3 to 5 events, each represented with an event title and time (e.g., "Department Meeting 15:00-16:00"), for each day except Saturday and Sunday. Once the user taps on an event, the app opens a new window (the "detailed view") containing detailed information about that event (Figure 6c). The design of the detailed view was inspired by Android's generic calendar applications and contains the event title, event time, the persons hosting/attending the event (e.g., host, attendees), the event type and mode (weekly meeting, online/in-person), and reminder-related information (e.g., reminder type, reminder time). After checking the detailed view, the user can tap the "back" button or swipe left to return to the previous screen and view the other events on that day.
204
+
205
+ With FaceUI, a trial also starts with a screen displaying a query prompt. Once the user taps the start button, a new window opens showing the calendar dates at the bottom (Figure 6d). When designing FaceUI, we leveraged the empty mid-air space in front of the user to accommodate virtual windows for frequent browsing. We therefore used FaceUI in conjunction with traditional touch input, where touch is used to select an item (e.g., a date from the calendar) and FaceUI is used for browsing content (e.g., the detailed views) by moving the phone in mid-air. Note that, as there are at most five events per date, we decided to place the detailed views between $+40^{\circ}$ and $-40^{\circ}$ in the horizontal direction rather than in a grid. Once the user selects a date, FaceUI shows the details of one event (i.e., a detailed view) on the top half of the screen, and the user can browse the other events on that date by moving the phone horizontally in mid-air (Figure 6d). Phone movements along the vertical direction are ignored. After inspecting all the event details, the user taps a check button (in both the touch and FaceUI interfaces) to open a popup window containing multiple answers (Figure 6e). Once the user selects an answer and taps the select button, the app provides audio feedback on the correctness of the answer. If the selected answer is correct, the trial timer stops, and the next trial is displayed on the screen. If incorrect, audio feedback for an incorrect selection is provided, and the user is required to redo the trial immediately.
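+
+ The angle-to-window mapping implied above can be sketched as follows: the detailed views are spread evenly between $-40^{\circ}$ and $+40^{\circ}$, and the phone's horizontal angle selects the nearest view (the binning is our assumption, not the app's exact code):
+
+ ```python
+ def event_index(horizontal_deg, n_events, span_deg=40.0):
+     """Map a horizontal viewing angle to one of n_events detailed views
+     spread evenly between -span_deg and +span_deg."""
+     if n_events == 1:
+         return 0
+     step = 2 * span_deg / (n_events - 1)
+     centers = [-span_deg + i * step for i in range(n_events)]
+     # nearest-center binning; vertical movement is ignored by design
+     return min(range(n_events),
+                key=lambda i: abs(centers[i] - horizontal_deg))
+
+ print([event_index(a, 5) for a in (-45, -20, 0, 15, 41)])  # [0, 1, 2, 3, 4]
+ ```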
206
+
207
+ The study used a $2 \times 3$ within-subjects design with factors Interface (FaceUI, Touch) and Number of events (3, 4, and 5). For each combination of factors, participants performed 10 repetitions, resulting in 60 error-free timed trials per participant. The order of the Interfaces was counter-balanced across participants, and the order of Number of events was randomized within each Interface. Participants performed practice trials for each combination until they felt comfortable operating the two interfaces. A study session lasted approximately 45 minutes.
208
+
209
+ § 6.3 RESULTS
210
+
211
+ Measures: Our study task includes three sub-tasks: (i) selecting the correct date from the calendar, (ii) browsing the event details (i.e., detailed views) on the selected date, and (iii) selecting the correct answer. We therefore recorded the following times: Browsing start time is the time from when participants read the question and click on the start button to when they select the correct date from the calendar; Browsing time is the time from when participants select the correct date to when they finish browsing the detailed views and tap the check button to open the interface containing the four possible answers; and Selection time is the time from when they tap the check button to when they select an answer and press the submit button. In addition, we recorded trials in which participants selected a wrong answer.
212
+
213
214
+
215
+ Figure 7: Results of Study 3. (a) Mean browsing start time, (b) browsing time, and (c) selection time. Error bars: 95% CI.
216
+
217
+ § 6.3.1 ERROR TRIALS, OUTLIERS, AND TRIAL TIME
218
+
219
+ We marked a trial as an error trial if the participant answered incorrectly. Participants selected wrong answers in 41 trials (5.38%): 24 with the Touch interface (3.15%) and 17 with FaceUI (2.23%). A Wilcoxon signed-rank test showed no difference between the two Interfaces. To analyze the times (browsing start time, browsing time, and selection time), we first removed all erroneous trials and then removed eight outlier trials whose total trial time was outside $\pm 3$ SD. Overall, participants were 13% faster with FaceUI than with the touch interface (15.2 s with FaceUI vs. 17.2 s with touch). We used repeated-measures ANOVA and Bonferroni-adjusted post-hoc pairwise comparisons to analyze the times.
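+
+ The outlier filtering described above is a standard $\pm 3$ SD cut on total trial time; a minimal pandas sketch (with synthetic data and illustrative column names) looks like this:
+
+ ```python
+ import numpy as np
+ import pandas as pd
+
+ rng = np.random.default_rng(2)
+ df = pd.DataFrame({"trial_time": rng.normal(16.0, 3.0, 720),
+                    "correct": rng.random(720) > 0.05})
+
+ clean = df[df["correct"]]                       # keep error-free trials
+ mu, sd = clean["trial_time"].mean(), clean["trial_time"].std()
+ kept = clean[(clean["trial_time"] - mu).abs() <= 3 * sd]
+ print(f"removed {len(clean) - len(kept)} outlier trials")
+ ```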
220
+
221
+ § 6.3.2 BROWSING START TIME
222
+
223
+ Figure 7a shows the browsing start time for the two Interfaces and the three levels of Number of events. There was no significant difference in browsing start time between the two Interfaces ($F_{1,11} = 0.04$, $p = 0.84$); the Touch interface (2.31 s, SE 0.31) was only slightly faster than FaceUI (2.35 s, SE 0.30). Similarly, we found no significant difference in browsing start time between the levels of Number of events ($F_{2,22} = 0.26$, $p = 0.78$): 3, 4, and 5 events took 2.28 s (SE 0.30), 2.34 s (SE 0.28), and 2.38 s (SE 0.31), respectively. There was no significant Interface $\times$ Number of events effect on browsing start time ($F_{2,22} = 0.61$, $p = 0.55$).
224
+
225
+ § 6.3.3 BROWSING TIME
226
+
227
+ Figure 7b displays the browsing time for the two Interfaces and the three levels of Number of events. An RM-ANOVA showed that browsing time differed significantly depending on the Interface ($F_{1,11} = 6.52$, $p < 0.05$) and the Number of events ($F_{2,22} = 39.55$, $p < 0.001$). Bonferroni-adjusted post-hoc comparisons showed that FaceUI, at 9.4 s (SE 0.56), was significantly faster than the touch interface at 11.4 s (SE 0.77). For Number of events, each pairwise comparison was significant (all p's < 0.001), with more events taking longer than fewer (3 events: 7.95 s, SE 0.49; 4 events: 10.24 s, SE 0.59; 5 events: 13.09 s, SE 0.82). There was no significant Interface $\times$ Number of events effect on browsing time ($F_{2,22} = 1.70$, $p = 0.21$).
228
+
229
+ § 6.3.4 SELECTION TIME
230
+
231
+ Figure 7c shows the selection time. We observed no significant differences in selection time between the two Interfaces ($F_{1,11} = 0.92$, $p = 0.36$) or the three levels of Number of events ($F_{2,22} = 1.28$, $p = 0.30$): 3, 4, and 5 events took 1.61 s (SE 0.06), 1.59 s (SE 0.05), and 1.66 s (SE 0.04), respectively. There was no significant Interface $\times$ Number of events effect on selection time ($F_{2,22} = 0.80$, $p = 0.46$).
232
+
233
+ § 6.3.5 SUBJECTIVE FEEDBACK
234
+
235
+ Participants had prior experience with touch interfaces and were very comfortable using smartphones. One participant mentioned: "I have been using touch interfaces on smartphones for a long time; I feel comfortable with them". We also observed a bias toward touch interfaces when we asked participants to rate the two interfaces according to their overall preference on a 5-point scale: the touch interface (mean rating 4.5, SD 0.5) was rated higher than FaceUI (mean rating 3.5, SD 1.1). However, participants acknowledged that the concept of FaceUI was entirely new and that they were not familiar with any similar concepts. One participant commented: "It's a new method and I don't have experience with it. However, it seems a potential method for operating smartphones". We believe that once such interfaces become available on commercial smartphones, people will feel comfortable using them to access on-screen content.
236
+
237
+ § 6.4 SUMMARY
238
+
239
+ Our participants had more than 9 years of prior experience with traditional touch interfaces, whereas FaceUI was a new experience for them. Despite this, the results demonstrate that FaceUI offers faster access to information than the traditional touch interface. The browsing start time and the selection time were comparable for both interfaces, as participants used the same procedures (i.e., selecting dates or buttons with touch) in both. However, participants were significantly faster at browsing information with FaceUI (see Figure 7). FaceUI enables quick retrieval of spatially located virtual UIs by moving the phone in a large space, whereas touch interfaces require users to switch between windows with frequent taps and swipes, which is known to be costly [17]. Based on these results, we believe FaceUI is a promising complement to conventional touch input, serving users best when browsing multiple windows.
240
+
241
+ § 7 DESIGN GUIDELINES
242
+
243
+ We summarize our key findings as the following design guidelines for interfaces similar to FaceUI:
244
+
245
246
+
247
+ Figure 8: FaceUI-enabled applications. A user (a-b) browses through images by moving the phone along the horizontal and vertical directions; (c) scrolls through messages to check the message history by moving the phone vertically; and (d) navigates a map by moving the phone.
248
+
249
+ § 7.1 DISTANCE
250
+
251
+ We found that participants preferred moving their hands within 40 cm of the face. Using this space for accessing virtual UIs will also help minimize concerns related to arm fatigue. Thus, we recommend that designers place UIs in the mid-air space within 40 cm of users' faces.
252
+
253
+ § 7.2 DIRECTION AND REGION
254
+
255
+ Results indicate that participants preferred the horizontal over the vertical direction when moving the phone to access items; designers should prioritize placing items along this direction. In addition, participants reported difficulties accessing items in the up region, so limited or no items should be placed there.
256
+
257
+ § 7.3 VIEWING ANGLE
258
+
259
+ Items should be placed within a comfortable viewing angle, as users' performance degrades significantly once targets are placed beyond the comfortable viewing angles. Caution is needed especially when placing items at extreme angles, which could create eyestrain. Our results suggest placing items between $-40^{\circ}$ (left) and $+40^{\circ}$ (right) in the horizontal direction, and between $+20^{\circ}$ (up) and $-30^{\circ}$ (down) in the vertical direction.
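+
+ A designer can enforce this guideline by clamping requested placement angles to the comfortable ranges; the helper below is our own sketch of such a rule, not part of FaceUI:
+
+ ```python
+ # Comfortable placement ranges from our studies (degrees); the clamping
+ # helper itself is an illustrative sketch.
+ COMFORT = {"horizontal": (-40.0, 40.0),   # left(-) .. right(+)
+            "vertical":   (-30.0, 20.0)}   # down(-) .. up(+)
+
+ def clamp_angle(angle_deg, axis):
+     lo, hi = COMFORT[axis]
+     return max(lo, min(hi, angle_deg))
+
+ print(clamp_angle(50.0, "horizontal"))  # 40.0
+ print(clamp_angle(30.0, "vertical"))    # 20.0
+ ```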
260
+
261
+ § 7.4 MID-AIR SPACE FOR BROWSING-INTENSIVE TASKS
262
+
263
+ Study 3 showed that mid-air space is more effective for browsing through UIs than the traditional touch interface, primarily due to the minimal switching costs (e.g., small device movements) involved in navigating between UIs with FaceUI. Therefore, we suggest that designers consider using mid-air space to place UIs for browsing-intensive tasks in FaceUI-enabled interfaces.
264
+
265
+ § 8 FACEUI ENABLED APPLICATIONS
266
+
267
+ We designed the following three applications to demonstrate how face-centered spatial user interfaces can be used on an off-the-shelf smartphone.
268
+
269
+ § 8.1 IMAGE BROWSER
270
+
271
+ The FaceUI implementation in Study 3 did not consider scenarios where items are located in a grid. However, many FaceUI-enabled applications could benefit from such an arrangement style. Consequently, we developed an image-browsing app that offloads a set of images into a $5 \times 3$ grid in the mid-air space in front of the user. Users can browse the images by moving the device along the horizontal and vertical directions and can touch the screen to access further details about an image. Figure 8a-b demonstrates the app scenario, where a user browses the images by moving the phone along the horizontal and vertical directions.
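+
+ One way to realize such a grid is to bin the phone's horizontal and vertical viewing angles into columns and rows; the sketch below reuses the comfortable ranges above (the binning scheme is our assumption):
+
+ ```python
+ def grid_cell(h_deg, v_deg, cols=5, rows=3,
+               h_range=(-40.0, 40.0), v_range=(-30.0, 20.0)):
+     """Return (row, col) of the image under the current viewing angles;
+     the row index grows with the upward angle."""
+     def to_bin(value, lo, hi, n):
+         t = (min(max(value, lo), hi) - lo) / (hi - lo)  # normalize to [0,1]
+         return min(int(t * n), n - 1)
+     return to_bin(v_deg, *v_range, rows), to_bin(h_deg, *h_range, cols)
+
+ print(grid_cell(0.0, 0.0))      # middle row, center column: (1, 2)
+ print(grid_cell(-40.0, 20.0))   # top row, leftmost column: (2, 0)
+ ```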
272
+
273
+ § 8.2 MESSAGE HISTORY
274
+
275
+ Scrolling through items with touch interfaces is cumbersome and time-consuming, especially in one-handed interaction mode [28]. We developed an application that leverages face-centered UIs to scroll through messages in a messenger app with one hand. In our implementation, users scroll through messages by moving the phone vertically between $+20^{\circ}$ up and $-30^{\circ}$ down (Figure 8c). Clutching, by tapping on the screen, can be used to browse an extended message history.
276
+
277
+ § 8.3 MAP NAVIGATION
278
+
279
+ We designed a FaceUI-enabled map application where users move the phone in mid-air to zoom and pan the map. Phone movements to the left/right/up/down (relative to the face) are associated with map movements in the same direction (Figure 8d). The zoom level is mapped to the phone-to-face distance: the smaller the distance, the more zoomed-in the map.
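+
+ The distance-to-zoom mapping can be sketched as a simple linear interpolation; the constants below are illustrative assumptions, not the app's tuned values:
+
+ ```python
+ def map_zoom(distance_cm, near=20.0, far=50.0, z_min=1.0, z_max=4.0):
+     """Closer phone -> higher zoom, linear between near and far."""
+     d = min(max(distance_cm, near), far)
+     t = (far - d) / (far - near)          # 1 at near, 0 at far
+     return z_min + t * (z_max - z_min)
+
+ print(map_zoom(20.0), map_zoom(50.0))     # 4.0 (zoomed in), 1.0 (out)
+ ```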
280
+
281
+ § 9 LIMITATION AND FUTURE WORK
282
+
283
+ FaceUI requires users to move the phone in mid-air, which may cause eye and arm fatigue with prolonged use. Further investigation is needed to explore solutions to minimize such fatigue. In addition, using FaceUI in public spaces may trigger feelings such as embarrassment or discomfort, as the hand movements may attract bystanders' attention. It is therefore also important to carry out studies exploring the social acceptance of FaceUI in public and private spaces. We asked participants to keep the head static and move only the phone during the studies; further studies can explore how to leverage combined head and smartphone movements to access spatial UIs. While using FaceUI, on-screen information can be viewed by surrounding people, triggering privacy concerns. Future studies need to investigate users' and bystanders' privacy concerns and explore approaches such as software- and hardware-based privacy filters that keep the information visible only to the user. We only investigated the performance of FaceUI for accessing windows located along the horizontal direction; further investigations are needed to explore natural delimiters for switching between FaceUI and touch, and the performance of FaceUI-enabled applications where windows are arranged along both the horizontal and vertical directions. We also foresee future research exploring ways to reduce window switching in traditional touch interfaces. Lastly, future work needs to examine the performance of FaceUI in different usage contexts, such as standing, sitting, or walking in public and private spaces.
284
+
285
+ § 10 CONCLUSION
286
+
287
+ We have presented FaceUI, a novel approach that leverages mid-air space to access face-centered spatial user interfaces. Through two user studies, we first explored the factors that influence the design and performance of FaceUI. Based on the results, we designed a FaceUI-based calendar app and compared users' performance with it against a touch-based calendar interface. The results show that FaceUI is a promising approach that enables faster access to UIs than traditional touch interfaces.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/ShGxRxFV6Mq/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,305 @@
1
+ # Structured Shape-Patterns from a Sketch: A Multi-Scale Approach
2
+
3
+ Submission Number: 40
4
+
5
+ ## Abstract
6
+
7
+ Structured 2D patterns formed by the anisotropic distribution of arbitrary shapes are ubiquitous in nature and man-made environments. They may include both bounded and unbounded (extended, fiber-like) shapes. In this work, we address the problem of interactively generating such patterns from a single exemplar sketched by a user. We build our solution on a new data structure, the Support Structure Hierarchy, computed from a multi-resolution analysis of the input exemplar, which encodes the main anisotropy directions at different scales as well as deviations from them. We propose an efficient method based on this structure to synthesize a similar distribution of shapes in an extended 2D domain. The user may also choose to hybridize several input exemplars by combining structural shapes extracted at different scales. As shown through a user study, our multi-scale solution generates structured shape-patterns that perceptually compete with state-of-the-art methods, whether learning-based or not. Moreover, our interactive solution, which does not require any precomputation, matches well the needs of an interactive authoring tool, where the user can not only sketch and extend 2D vector textures but also combine them seamlessly.
8
+
9
+ Index Terms: Computing methodologies-Computer Graphics-Graphics systems and interface-Texturing.
10
+
11
+ ## 1 INTRODUCTION
12
+
13
+ From fibers and cellular organisms at microscopic scales to seaweeds, schools of fish, human queues, and alignments of trees or buildings at larger scales, anisotropic distributions of shapes are ubiquitous in nature and man-made environments. Moreover, such structured shape-patterns have been used extensively in 2D for decorative purposes, from mosaics and wallpapers to the distribution of windows and architectural decorations on building facades. The perceived structure emerges from the anisotropy of these distributions of shapes. In particular, the specific ranges and variances of perceived orientations, both in terms of salient shapes and alignments, convey their unique visual appearance. This work explores the synthesis of such structured 2D shape-patterns from a sketch.
14
+
15
+ Example-based texture synthesis has already been extensively studied. However, existing methods have mostly focused on point distributions. They have achieved statistical accuracy using noise models, continuous representations of discrete distributions such as pair-correlation or probability-density functions, or neighborhood metrics and energy optimization. The few methods tackling anisotropic distributions of shapes have used multiple point samples or proxy geometries to achieve the analysis and synthesis of structured patterns. To the best of our knowledge, none of them have tackled the case of anisotropic shape-patterns that may include both bounded and unbounded (fiber-like) shapes. While the use of deep learning may be a promising alternative, it requires large training databases and long precomputation (learning) times, which has limited its use so far in interactive design scenarios.
16
+
17
+ This work tackles the interactive, sketch-based design of anisotropic distributions of shapes in 2D. Given any sketched pattern (a distribution of simple bounded shapes, and/or fiber-like unbounded shapes), our method efficiently synthesizes a perceptually similar, consistent, and non-repetitive distribution of shapes in an extended 2D domain. Note that the input pattern is fully preserved at the synthesis stage. Indeed, contrary to previous methods, the input becomes the central part of the extended texture, while being seamlessly integrated into its larger surrounding. Our solution increases user control and also seems to improve the perceived similarity of the results. One such result generated by our interactive system is shown in Fig. 1 with its interface.
18
+
19
+ ![01963e67-9d54-7013-80b9-8160868f18e2_0_924_673_724_510_0.jpg](images/01963e67-9d54-7013-80b9-8160868f18e2_0_924_673_724_510_0.jpg)
20
+
21
+ Figure 1: Based on a few perceptual and depiction hypotheses, our method extends an input sketch (bottom left) into a larger vector texture (right). Both bounded (individual fishes) and unbounded shapes (wavy lines) are seamlessly handled. The simple interface (top left) is quick to learn and easy to use.
22
+
23
+ Real-time analysis and synthesis of distributions require an efficient representation encoding both local and global correlations between shapes. Our first insight is to introduce a compact encoding for anisotropic distributions, called the Support Structure Hierarchy, whose individual supporting structures (lead directions of alignments, or line skeletons derived from user strokes) are computed at various scales. This representation leads to a particularly simple and efficient multi-scale analysis of the distributions of orientations in the input sketch. It also enables efficient domain extension.
24
+
25
+ The main challenge at the synthesis stage remains to understand user expectations and the required criteria for perceptual similarity. The (new) case of fiber-like shapes is particularly challenging, because extending fibers that are disjoint in the input exemplar may generate intersections in the extended domain. This could strongly affect our perception of the output as looking different from the input. To support our insights, we formulate a set of perceptual hypotheses to drive our synthesis solution; they were later validated through a user study. In particular, our solution interprets non-intersecting fiber-like strokes as curves that could slightly bend to prevent intersection in the extended domain.
26
+
27
+ Thanks to its efficiency, we integrated our solution in an interactive authoring tool, where users can progressively test and refine their designs. They can generate a wider variety of vector textures by interactively hybridizing features extracted from several input exemplars, e.g., combining shapes from an exemplar with larger-scale alignments from another exemplar.
28
+
29
+ In summary the contributions of our work are threefold, as we introduce:
30
+
31
+ - a fine-to-coarse analysis method that hierarchically clusters user strokes into a Support Structure Hierarchy, based on a new "perceived distance" between line-segments within a domain, depending on both their position and orientation;
32
+
33
+ - a coarse-to-fine synthesis method that extends the pattern around the input exemplar, based on the extracted hierarchy and on a set of perceptual hypotheses validated by a user study;
34
+
35
+ - an interactive authoring tool, enabling both domain extension and hybridization of structured shape-patterns.
36
+
37
+ ## 2 RELATED WORK
38
+
39
+ This work addresses 2D sketch-based synthesis of anisotropic discrete distributions. It is related to example-based synthesis, which aims at generating an output that minimizes some statistical or perceptual distance to the input while avoiding artifacts such as salient repetitiveness. We focus below on distributions of 2D shapes, i.e., vector textures formed by arrangements of discrete 2D shapes, and also discuss recent alternatives based on deep learning. We refer the reader to [5, 18] for more general surveys.
40
+
41
+ Discrete vector textures (or shape-patterns) have been generated by analyzing distributions of the centroids of individual shapes, then applying a two-stage synthesis: first the new centroids, followed by the creation of the associated shapes. The pioneering work of Barla et al. [1] aims at synthesizing stroke patterns. Their method computes a Delaunay triangulation of the centroids to retrieve the connectivity of the input distribution. During synthesis, they rely on a Lloyd relaxation and some perturbation to generate a new set of points from which the shapes are recovered. In the same spirit, Ijiri et al. [7] explore local growth processes before the relaxation process. These two methods are however limited to quasi-uniform distributions and 1-ring neighborhoods, respectively. To manage more general shape distributions, Hurtut et al. [6] define the input distribution as a combination of Gibbs point processes from which they generate a new arrangement using Monte-Carlo chains. However, all of these methods are unable to analyze and synthesize structured inputs. In particular, they cannot handle anisotropic distributions of shapes, such as elongated ones, nor analyze correlations between shapes, orientations, and spatial alignments.
42
+
43
+ Instead of using a single centroid point, Ma et al. [12] characterize each input shape by several sample points. They rely on a neighborhood metric and an energy-optimization process to insert individual shapes in a predefined output domain. Their approach has been extended to dynamic textures [11] and stroke auto-completion [19], and adapted to other texture workflows [2, 8]. While the use of multiple sample points has also been extended to distribution synthesis of arbitrary shapes, these methods only address bounded shapes (as opposed to unbounded ones) and require post-processing to avoid inter-penetrations at the synthesis stage, which precludes their use in real time.
44
+
45
+ Rather than sampling the input shapes, Landes et al. [9] propose to simplify them into proxy geometries. They introduce a spatial-relationship measure that takes into account the space between pairs of shapes and their relative orientations. By extending stochastic models for point distributions [13, 20], their synthesis method successfully maintains distributions of distances and relative orientations of shapes. Although it handles anisotropic distributions, their method does not meet our goals, as it does not offer real-time performance and is limited to distributions of bounded shapes.
46
+
47
+ In contrast, Roveri et al. [15] present the first example-based distribution synthesis method applicable to both bounded and unbounded shapes. Regardless of their dimension, shapes are decomposed into point samples that are encoded in a functional representation. A similarity measure is defined in the associated functional space to quantify the similarity between input and output. Synthesis is achieved in a few minutes through neighborhood matching and energy optimization. Like most other neighborhood-based texture synthesis methods, their method requires input patterns with enough repetitions to avoid bad local minima in the optimization, which would distort the synthesized structures. Moreover, contrary to our method, the use of a fixed neighborhood size prevents their method from capturing repetitive structures at different scales.
48
+
49
+ Deep-learning methods have recently been applied to texture synthesis [4, 10, 14, 16, 17, 21]. They show promising results for capturing, at least partially, the local and global correlations present in an input exemplar. In particular, Fish et al. [4] enable sketch stylization via the transfer of geometric textural details from different images, which is related to our secondary goal of pattern hybridization. However, most of these frameworks are image-based and do not extend well to discrete distributions of vector shapes, which is the scope of our work. The method of Tu et al. [17] is closest to our goal of handling vector shape distributions; it characterizes point patterns via a trained VGG network. Our method, based on a simpler but efficient analysis stage, has the advantage of requiring no time-consuming precomputations (which are inherent to deep-learning techniques), while achieving real-time processing of any newly created input.
50
+
51
+ ## 3 OVERVIEW
52
+
53
+ ### 3.1 Hypotheses on Depiction & Perception
54
+
55
+ Extending a sketched pattern in a perceptually similar way requires making some hypotheses about user depiction and perception of the resulting pattern. Our key hypothesis, common to most sketch-based modeling systems, is that users see their input as a general view of the distribution they want to create. Therefore, the input is supposed to include all the necessary information, in a perceptually representative way. This led us to three design hypotheses:
56
+
57
+ H1: Groupings and alignments are meaningful: All alignments and groupings are intentional.
58
+
59
+ H2: Repetitiveness is explicit: All the shapes that a user wants to see repeated in the output are repeated in the input.
60
+
61
+ H3: Non-overlapping shapes should remain disjoint: Shapes that do not overlap in the input should not overlap in the output.
62
+
63
+ These three hypotheses are used as guidelines for our method at the design stage, and then validated by a user study (see Sect. 6).
64
+
65
+ ### 3.2 Creation and Preprocessing of an Input Sketch
66
+
67
+ During a sketching session, the user successively draws strokes of any color in a square representing our 2D Input Space ($IS$); see Fig. 1, left. Two different pens are provided to denote bounded and unbounded strokes. The former are limited to the dimensions of $IS$, while the latter are interpreted as extending beyond the input domain, either in both directions if both extremities reach the border of $IS$, or in a single direction (if an unbounded stroke does not reach any border of $IS$, we add a segment connecting it to the closest border). The data stored for each stroke are a list of points, a color, a thickness, a type (bounded or unbounded), and a principal direction computed on the fly by Principal Component Analysis (PCA) on the coordinates of all points of the stroke.
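+
+ The principal direction of a stroke is the leading PCA eigenvector of its point coordinates; a minimal numpy sketch (ours, not the paper's code) is:
+
+ ```python
+ import numpy as np
+
+ def principal_direction(points):
+     """points: (n, 2) stroke coordinates -> unit direction vector."""
+     pts = np.asarray(points, dtype=float)
+     centered = pts - pts.mean(axis=0)
+     eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
+     return eigvecs[:, -1]        # eigenvector of the largest eigenvalue
+
+ stroke = [(0, 0), (1, 0.1), (2, -0.1), (3, 0.05)]  # roughly horizontal
+ print(principal_direction(stroke))                 # ~ [1, 0] up to sign
+ ```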
68
+
69
+ The user may sketch the input pattern in any order. As several strokes can be used to represent a shape, we provide an automatic clustering mechanism, presented next, to identify shapes at the beginning of the analysis stage.
70
+
71
+ ![01963e67-9d54-7013-80b9-8160868f18e2_2_296_147_1207_420_0.jpg](images/01963e67-9d54-7013-80b9-8160868f18e2_2_296_147_1207_420_0.jpg)
72
+
73
+ Figure 2: Processing pipeline for the fine-to-coarse analysis of a sketch into a Support Structure Hierarchy.
74
+
75
+ ### 3.3 Processing Pipeline
76
+
77
+ Multi-scale analysis: Based on Hypothesis H1, the analysis stage consists in iteratively extracting a fine-to-coarse hierarchy of support structures (the Support Structure Hierarchy) from the input strokes, according to alignments and multi-scale repetitions in the input (see Fig. 2). We first cluster bounded strokes into shapes composed of one to several strokes, while each unbounded stroke is considered an individual shape (Level 0). Note that colors are not used in the clustering, enabling the use of several different colors in a given shape. Bounded shapes are then simplified either into a central point or a support segment, depending on their degree of anisotropy (Fig. 2c). Central points and support segments are clustered according to both orientation and position to find alignments, and then grouped into fibers (Fig. 2d), forming Level 1 of the Support Structure Hierarchy. Other fibers are directly extracted from the unbounded strokes (Fig. 2c'). To capture repetitions at a larger scale, fibers of similar orientation are clustered into fiber medians (Level 2, Fig. 2f), which are finally grouped into lead directions (Level 3, Fig. 2g). During this hierarchical clustering and simplification process, the input domain $IS$ is progressively partitioned into a hierarchy of ribbons that express the variability in position of each substructure around its parent structure. This partitioning will be used to allow an adequate degree of variability while avoiding unwanted overlaps at the synthesis stage. See Section 4 for details.
78
+
79
+ Synthesis stage: Unlike most existing approaches, our method synthesizes distributions by directly replicating the local and global correlations between the input shapes, encoded by our Support Structure Hierarchy. To avoid exact repetitions, this is done by instantiating each structure from the top to the bottom of the hierarchy while perturbing positions within adequate allowed areas. These areas are computed to prevent overlaps between strokes belonging to the same lead direction, and at low cost, since no further overlap detection is required.
80
+
81
+ Structures at the top of the hierarchy are first extended to the user-selected larger 2D domain, defined as a radial extension of $IS$ by a ratio $k > 1$. The support hierarchy is then traversed top-down, down to the individual strokes. At each level, the repetitive structures are repeated within the larger domain in order to generate the extended structured pattern. This is done in accordance with our design guidelines: a shape that only appears once in the input (such as the vertical seaweed in Fig. 1) will be extended at its extremities in the case of an unbounded stroke, but will not be repeated (consistency with H2). Moreover, at each level of the hierarchy, allowed areas within ribbons are used to guide the synthesis of substructures while preventing unwanted overlaps (consistency with H3). Note that curving some of the supporting structures is necessary to avoid undesired overlaps in the extended domain, as illustrated by the three green waves that do not overlap with the two blue waves in Fig. 1, right. This process, an original step of our solution justified by our perceptual guidelines, is detailed in Section 5.
82
+
83
+ Interactivity and hybridization: Thanks to its real-time performance, our method not only allows users to sketch and extend a given shape-pattern, but also lets them return to the sketching interface to iteratively improve their input. In our authoring system, all identified shapes are recorded in a shapes database, enabling the user to refine the input by adjusting their positions, or to reuse them later for another design. Hierarchical structures extracted from the analysis stages of different inputs can also be combined to create a different design, a process called hybridization (see Section 6).
84
+
85
+ ## 4 FINE-TO-COARSE ANALYSIS
86
+
87
+ ### 4.1 Level 0: From Strokes to Shapes
88
+
89
+ As illustrated in Fig. 2b, b', the bounded and unbounded strokes in the input are analyzed separately, to extract supporting lines that will then be processed in a combined manner.
90
+
91
+ We consider the unbounded strokes as individual unbounded shapes. In contrast, we extract bounded shapes by clustering the input bounded strokes as follows: we compute the oriented bounding box of each bounded stroke and group these boxes according to their pairwise distances. We then associate each resulting bounded shape with a single central point or a support segment, according to an anisotropy threshold. The resulting set of support segments and central points is the first simplification of the input, efficiently encoding the principal directions and approximate positions of the bounded shapes.
92
+
93
+ ### 4.2 Level 1: From Shapes to Fibers and their Ribbons
94
+
95
+ We approximate each unbounded shape with a line, called a fiber, that best matches its principal direction and position. This support line, augmented with a perpendicular thickness to cover the whole shape, is called a ribbon. For bounded shapes, finding such fibers and ribbons requires analyzing anisotropic information such as alignments. We retrieve the support lines of the support segments and cluster them using the Mean Shift algorithm; we then compute a central fiber within each cluster. Central points are first clustered by position, before using Principal Component Analysis to compute their main directions of alignment. Representative fibers are defined from the centroid of each cluster and these principal directions. A thickness is computed for each of these fibers, so that the corresponding ribbon fully covers the shapes associated with the clustered points or segments.
96
+
97
+ ### 4.3 Level 2: From Fibers to Fiber Medians
98
+
99
+ Fibers with similar orientations and close positions are grouped at this stage. Since we focus on anisotropic distributions, we prioritize the orientations of fibers over their positions. We first compute the histogram of fiber orientations to group those that belong to the same anisotropic distribution. We then refine each cluster using a specific perceived distance, which we define as the minimum distance between the fibers' intersection points with the domain contour (see Fig. 3). This distance takes into account both the position and the orientation of the lines: the more parallel and the closer in position two lines are, the smaller the distance. Each resulting sub-cluster is stored as a fiber median, defined as the mean of the clustered fibers' parameters in both orientation and position (see Fig. 2f). We also store the circular standard deviation associated with each fiber median for later use at the synthesis stage.
100
+
101
+ ![01963e67-9d54-7013-80b9-8160868f18e2_3_296_154_427_236_0.jpg](images/01963e67-9d54-7013-80b9-8160868f18e2_3_296_154_427_236_0.jpg)
102
+
103
+ Figure 3: We compute the "perceived distance" between two fibers in a normalized input domain. It is defined as the minimal distance between their intersection points on any of the lines bordering the domain ($X = 0$, $X = 1$, $Y = 0$, $Y = 1$), which is extremely fast to compute (for each fiber, only the 4 values $y_{X=0}, y_{X=1}, x_{Y=0}, x_{Y=1}$ are needed). This distance accounts for both position and orientation, and is defined even if the lines intersect in the domain. Here, $d(L_1, L_2) < d(L_2, L_3)$, which matches our perception.
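+
+ Following the caption of Fig. 3, the perceived distance can be computed from just the intersection coordinates of each fiber with the four border lines. The Python sketch below uses a point-plus-direction line representation of our choosing; degenerate axis-parallel cases are glossed over:
+
+ ```python
+ import math
+
+ def border_hits(p, d):
+     """Line through point p with direction d: coordinates on the four
+     border lines x=0, x=1, y=0, y=1 (math.inf if parallel)."""
+     px, py = p
+     dx, dy = d
+     y_at = lambda x: py + (x - px) / dx * dy if dx else math.inf
+     x_at = lambda y: px + (y - py) / dy * dx if dy else math.inf
+     return [y_at(0.0), y_at(1.0), x_at(0.0), x_at(1.0)]
+
+ def perceived_distance(l1, l2):
+     """Minimum gap between the two fibers' hits on any border line."""
+     return min(abs(a - b) for a, b in zip(border_hits(*l1),
+                                           border_hits(*l2)))
+
+ L1 = ((0.0, 0.2), (1.0, 0.10))   # nearly horizontal
+ L2 = ((0.0, 0.4), (1.0, 0.12))   # nearly parallel and close to L1
+ L3 = ((0.5, 0.0), (0.2, 1.00))   # steep line crossing both
+ print(perceived_distance(L1, L2) < perceived_distance(L2, L3))  # True
+ ```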
104
+
105
+ Similarly to the previous hierarchy level, a thickness parameter is associated with each newly created fiber median in order to define an associated ribbon that fully includes the sub-ribbons of the clustered substructures (see Fig. 4).
106
+
107
+ ### 4.4 Level 3: From Fiber Medians to Lead Directions
108
+
109
+ The top level of aggregation in our hierarchical analysis aims at clustering similarly oriented fiber medians. We use the same clustering process as at the previous level of the hierarchy. Each cluster is represented by a lead direction, defined using the average orientation and position of the clustered elements (see Fig. 2g). As at the previous hierarchy level, ribbons are defined by associating a thickness parameter with each lead direction, so as to include the ribbons around the clustered fiber medians. As a result, the input space $IS$ is divided into nested ribbon-like structures (see Fig. 4).
110
+
111
+ ### 4.5 Computing the allowed displacement areas
112
+
113
+ The last step of the analysis stage consists in computing the available space around each clustered shape or ribbon, within its parent structure in the hierarchy. We call this space the allowed displacement area, since it will be used at the synthesis stage to add random displacements to repeated structures, providing visual diversity while avoiding unwanted overlaps between shapes.
114
+
115
+ Displacement areas for ribbons: Starting at the top of the hierarchy, we recursively decompose each ribbon using splitting lines that are parallel to its main axis and that evenly split the empty space between neighboring, non-overlapping sub-ribbons (defined as the ribbons around the clustered sub-structures). The distance between neighboring sub-ribbons (i.e., the minimal distance between their contents), used to position these lines, is computed while considering a toroidal topology for $IS$. Based on this distance, two lines are evenly generated between the neighboring ribbons, defining the limits of an extended region for each of them, as well as an empty space between them.
116
+
117
+ This decomposition results in a displacement region around each sub-ribbon and a given distance, called the gap, between them. Note that since the parent orientation was used for this decomposition, the sub-ribbons generally have a slightly different orientation. Moreover, they are not necessarily centered in the associated displacement region (see Fig. 4a and b).
118
+
119
+ Finally, the minimum and maximum gap values are stored in the parent structure, together with the set of displacement areas associated with its sub-ribbons.
120
+
121
+ Displacement areas for bounded shapes: These rectangular regions, depicted using dashed lines in Fig. 4d, represent the areas within the fiber ribbon of a bounded shape in which its bounding box will be allowed to move during instantiation. Their two axes (x, y) respectively correspond to the direction of the associated fiber median and its orthogonal direction. The allowed perturbation along $x$ (tangent to the direction) is set to half the distance to the next bounding box, which ensures that overlaps will always be prevented at the synthesis stage. The allowed perturbation along $y$ is set so that the bounding box can cover the whole associated fiber-median ribbon. Again, these computations are done while considering a toroidal topology for the input space $IS$; therefore, the computed displacement areas can expand outside $IS$ (see the orange areas in Fig. 4d), which is less restrictive when extending the pattern to a larger domain.
122
+
123
+ ## 5 SYNTHESIS OF AN EXTENDED SHAPE-PATTERN
124
+
125
+ To enable a seamless exploration of a larger 2D domain by simply zooming out after sketching, our objective is to keep the user-drawn strokes within $IS$ while extending and repeating them in a larger output space $OS$ (defined as an expansion of $IS$ by a ratio $k > 1$). This is done through a coarse-to-fine process in which the elements stored in the Support Structure Hierarchy are extended to $OS$ and repeated if necessary.
126
+
127
+ Extension and repetition of lead ribbons: According to H2 (see Sect. 3.1), lead directions consisting of only one fiber median (such as the vertical lead direction in Fig. 5) should not be repeated. Therefore, we simply extend them, as well as their unbounded child structures, to span the whole $OS$.
128
+
129
+ For the remaining lead directions (corresponding to repeated substructures in the input), we perform the same extension to the whole ${OS}$, but also generate new copies of the structure in the remaining space, through an efficient randomized repetition procedure, as follows. For each lead ribbon, we start from a displacement area with a single neighbour, randomly generate a new gap using values in the recorded range, and generate the next displacement area by randomly cloning one of the existing ones (i.e., using the same width). We apply this technique to progressively fill ${OS}$. The randomness in the gap values between displacement areas for sub-ribbons generates different lead ribbon configurations, and therefore different outputs from the same input (see Fig. 5).
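+
+ A minimal sketch of this filling procedure, under our own simplifications: displacement areas are reduced to 1D intervals across the lead direction, and growth proceeds from both boundary areas (the names and growth order are ours, not the authors').
+
+ ```python
+ import random
+
+ def fill_output_space(existing, gap_range, os_min, os_max, seed=0):
+     """existing: (lo, hi) displacement areas from the input, along the axis
+     orthogonal to the lead direction. New areas are appended on both sides,
+     each with a gap drawn from the recorded range and a width cloned from a
+     randomly chosen existing area, until [os_min, os_max] is covered."""
+     rng = random.Random(seed)
+     areas = sorted(existing)
+     widths = [hi - lo for lo, hi in areas]
+     while areas[-1][1] < os_max:                 # grow to the right
+         lo = areas[-1][1] + rng.uniform(*gap_range)
+         areas.append((lo, lo + rng.choice(widths)))
+     while areas[0][0] > os_min:                  # grow to the left
+         hi = areas[0][0] - rng.uniform(*gap_range)
+         areas.insert(0, (hi - rng.choice(widths), hi))
+     return areas
+
+ print(fill_output_space([(0.4, 0.5), (0.6, 0.75)], (0.05, 0.15), -1.0, 2.0))
+ ```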
130
+
131
+ Fiber-median ribbons will now be generated within the newly extended and repeated lead ribbons, as presented next, at the cost of slightly bending some of them, as well as their child structures, if they happen to overlap when extended to ${OS}$.
132
+
133
+ **Repetition of fiber medians and ribbons.** For each newly generated displacement area, we synthesize its fiber median by first copying the parameter values of the original ribbon. We then use the circular standard deviation of the medians' orientations computed during the analysis stage (Sect. 4.3) to perturb its orientation. We also perturb the position of its centroid to place it in the middle of the displacement area. While the middle part of each generated ribbon is guaranteed to remain within its lead ribbon, this is not necessarily the case when it extends to ${OS}$, as illustrated in Fig. 6 (left). When this occurs, we slightly bend the ribbon and its fiber median (see Fig. 6 (right)) to make it fit entirely inside its allowed displacement area.
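+
+ The orientation perturbation can be sketched as follows; the wrapped-normal sampling model is our assumption, parameterized by the circular standard deviation recorded at the analysis stage.
+
+ ```python
+ import math, random
+
+ def perturbed_orientation(mean_angle, circ_std, seed=1):
+     """Sample a new fiber-median orientation (radians) around mean_angle,
+     using the recorded circular standard deviation. The result is wrapped
+     to [0, pi) since line directions are unoriented."""
+     angle = random.Random(seed).gauss(mean_angle, circ_std)
+     return angle % math.pi
+
+ print(perturbed_orientation(math.pi / 2, 0.05))  # close to pi/2
+ ```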
134
+
135
+ ![01963e67-9d54-7013-80b9-8160868f18e2_4_249_156_1293_366_0.jpg](images/01963e67-9d54-7013-80b9-8160868f18e2_4_249_156_1293_366_0.jpg)
136
+
137
+ Figure 4: Input domain partitioning: (a) lead ribbons, each between a pair of dashed lines; (b) ribbons (solid lines) around the fiber medians (dashed lines); (c) the sub-ribbons (dashed lines) inside the ribbons (solid lines); (d) the displacement areas, delimited by dashed lines.
138
+
139
+ ![01963e67-9d54-7013-80b9-8160868f18e2_4_195_649_631_317_0.jpg](images/01963e67-9d54-7013-80b9-8160868f18e2_4_195_649_631_317_0.jpg)
140
+
141
+ Figure 5: (left) Allowed displacement areas between dashed lines, based on lead directions; (right) Randomized repetition and propagation of lead ribbons.
142
+
143
+ ![01963e67-9d54-7013-80b9-8160868f18e2_4_179_1107_660_321_0.jpg](images/01963e67-9d54-7013-80b9-8160868f18e2_4_179_1107_660_321_0.jpg)
144
+
145
+ Figure 6: Fiber medians and ribbons repetition in ${OS}$ : (left) without any bending; (right) with slight bending.
146
+
147
+ **Avoiding overlaps by bending structures.** Inspired by the physical properties of (real) fibers, we make the following assumption: the thinner the ribbon, the more flexible it may be. This can be formalized through the equation $R = \tau w$, relating the curvature radius $R$ to the ribbon width $w$ and a stiffness parameter $\tau \in \mathbb{R}^{+}$.
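+
+ As a numeric illustration (with a purely illustrative stiffness $\tau = 8$): a ribbon of width 0.02 may bend down to a radius of 0.16, while a ribbon of width 0.1 must keep a radius of at least 0.8.
+
+ ```python
+ def min_bend_radius(width, tau=8.0):
+     """Smallest curvature radius a ribbon tolerates (R = tau * w): thinner
+     ribbons may bend more sharply. tau is an illustrative value."""
+     return tau * width
+
+ print(min_bend_radius(0.02), min_bend_radius(0.10))  # 0.16 0.8
+ ```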
148
+
149
+ In case of overlap, each ribbon will intersect twice with its lead ribbon. For symmetry reasons, we then bend both sides of the ribbon, even if one of the intersection regions is outside the output domain. To preserve continuity between the original borderlines of the ribbon and their curved version, we consider the midpoints (${M}_{1}$ and ${M}_{2}$, in green in Fig. 7) between a projected point and the other intersection point as inflection points. For each of these inflection points (say $M$), the key idea is to find the arc of circle $C$ that passes through $M$ and remains inside the lead ribbon, as illustrated in Fig. 7. The details of this computation are provided in the associated supplementary material.
150
+
151
+ The same bending process is applied to the child sub-ribbons, in order to fit them inside their parent curved ribbon.
152
+
153
+ ![01963e67-9d54-7013-80b9-8160868f18e2_4_1005_657_561_359_0.jpg](images/01963e67-9d54-7013-80b9-8160868f18e2_4_1005_657_561_359_0.jpg)
154
+
155
+ Figure 7: (Top) A ribbon has two intersections (I1 and I2) with its lead ribbon. (Bottom) The ribbon is bent to remain within its lead ribbon.
156
+
157
+ **Shape distribution synthesis.** The final step is to synthesize new shapes within each extended or newly created fiber ribbon.
158
+
159
+ ![01963e67-9d54-7013-80b9-8160868f18e2_4_963_1240_648_343_0.jpg](images/01963e67-9d54-7013-80b9-8160868f18e2_4_963_1240_648_343_0.jpg)
160
+
161
+ Figure 8: Extension of unbounded strokes: (a) curve; (b) arc.
162
+
163
+ **a) Unbounded shapes.** We define four unbounded stroke categories (lines, rays, arcs and curves) that respectively stand for perfectly linear unbounded strokes, half-lines, unbounded strokes with a single curvature extremum in ${IS}$, and unbounded strokes with more than one curvature extremum. We start by extending these unbounded shapes to ${OS}$ along their fiber direction, which is trivial for lines and rays. Arcs are extended through an alternating mirror duplication that leads to a smooth sinusoidal curve. Curves are first cut at their first and last extrema. Then, we alternately duplicate the mirrored version of the curve segment to extend it to ${OS}$, as illustrated in Fig. 8. These extended strokes are stored in the local frame of their corresponding fiber. They will therefore be automatically repeated and curved if needed through the repetition process of their parent structures in the hierarchy. The resulting curved structures are shown for different sizes of ${OS}$ in Fig. 9.
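+
+ The arc case can be sketched as follows; this is our reading of the mirror duplication, assuming the arc's endpoints lie on the fiber axis (y = 0) in its local frame, so that reflecting every other copy across the axis yields a smooth, sinusoid-like curve.
+
+ ```python
+ import numpy as np
+
+ def extend_by_mirroring(segment, copies):
+     """segment: (n, 2) points of an arc in its fiber's local frame, with
+     both endpoints on the fiber axis and x increasing over a span L.
+     Appends `copies` repetitions along +x, reflecting every other one
+     across the axis (alternating mirror duplication)."""
+     seg = np.asarray(segment, dtype=float)
+     L = seg[-1, 0] - seg[0, 0]
+     pieces = [seg]
+     for i in range(1, copies + 1):
+         rep = seg.copy()
+         rep[:, 0] += i * L                 # translate along the fiber
+         if i % 2:
+             rep[:, 1] *= -1.0              # mirrored repetition
+         pieces.append(rep[1:])             # skip the shared junction point
+     return np.vstack(pieces)
+
+ # Half-sine arc extended into two full periods of a sine wave
+ x = np.linspace(0.0, np.pi, 20)
+ arc = np.stack([x, np.sin(x)], axis=1)
+ wave = extend_by_mirroring(arc, 3)
+ assert np.allclose(wave[:, 1], np.sin(wave[:, 0]), atol=1e-9)
+ ```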
164
+
165
+ ![01963e67-9d54-7013-80b9-8160868f18e2_5_216_145_1360_482_0.jpg](images/01963e67-9d54-7013-80b9-8160868f18e2_5_216_145_1360_482_0.jpg)
166
+
167
+ Figure 9: Variation of lateral ratio (k) for unbounded stroke distribution: (a) input; (b) $k = 3$ ; (c) $k = 5$ ; (d) $k = {10}$ . Note that these results have been scaled to fit in the figure.
168
+
169
+ **b) Bounded shapes.** We process bounded shapes by first iteratively repeating their representative support segments or central points along their extended fiber, using the previously computed displacement areas to perturb their positions, and drawing the shapes in the resulting local frames. We then reuse their local positions with respect to their fiber to repeat them within the parent fiber-median ribbons, with randomly modified positions within the authorized displacement areas (see Fig. 10).
170
+
171
+ ![01963e67-9d54-7013-80b9-8160868f18e2_5_182_1034_656_322_0.jpg](images/01963e67-9d54-7013-80b9-8160868f18e2_5_182_1034_656_322_0.jpg)
172
+
173
+ Figure 10: Synthesis outline: (left) input with ribbons between pairs of dashed lines; (right) shape repetition within the extended and synthesized ribbons.
174
+
175
+ **Avoiding residual overlaps:** Given that repetitions in different lead directions are computed independently, lead ribbons with different orientations may naturally intersect. This may lead to perceptual artifacts if these lead directions both contain initially non-overlapping bounded shapes: some undesirable overlaps may occur in the output. We use an AABB tree to partition ${OS}$ and efficiently detect overlaps between the displacement areas of bounded shapes. In such cases, we restrict the corresponding displacement areas. If this strategy fails (not enough space to insert a shape), we do not instantiate it (see Fig. 11 (b) for such a challenging example).
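+
+ For illustration, here is a simple sort-and-sweep stand-in for the AABB-tree query (the tree itself is omitted for brevity; the overlap test is the standard axis-aligned one):
+
+ ```python
+ def overlaps(a, b):
+     """Axis-aligned boxes as (xmin, ymin, xmax, ymax)."""
+     return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]
+
+ def overlapping_pairs(boxes):
+     """Sort-and-sweep along x; returns index pairs of displacement areas
+     that intersect and should therefore be restricted."""
+     order = sorted(range(len(boxes)), key=lambda i: boxes[i][0])
+     pairs = []
+     for k, i in enumerate(order):
+         for j in order[k + 1:]:
+             if boxes[j][0] >= boxes[i][2]:
+                 break                      # no later box can overlap i in x
+             if overlaps(boxes[i], boxes[j]):
+                 pairs.append((i, j))
+     return pairs
+
+ boxes = [(0, 0, 2, 2), (1, 1, 3, 3), (4, 0, 5, 1)]
+ print(overlapping_pairs(boxes))            # [(0, 1)]
+ ```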
176
+
177
+ ## 6 RESULTS AND DISCUSSION
178
+
179
+ ### 6.1 Interactive authoring system
180
+
181
+ Our prototype system is implemented in WebGL. Creating and extending highly structured vector patterns is made easy by our method, as shown in Fig. 11. In addition to the main sketching and texture expansion interface, the user may store and reuse complex shapes, such as the two categories of fishes in Fig. 1. The use of our system for creating a complex sketch, inspired by biology, is illustrated in Fig. 12.
182
+
183
+ ![01963e67-9d54-7013-80b9-8160868f18e2_5_954_756_661_246_0.jpg](images/01963e67-9d54-7013-80b9-8160868f18e2_5_954_756_661_246_0.jpg)
184
+
185
+ Figure 11: Our synthesis method maintains the perceived regularity of structured distributions (known to be hard to handle) in both cases of unbounded and bounded strokes.
186
+
187
+ ![01963e67-9d54-7013-80b9-8160868f18e2_5_961_1166_650_456_0.jpg](images/01963e67-9d54-7013-80b9-8160868f18e2_5_961_1166_650_456_0.jpg)
188
+
189
+ Figure 12: (a) Biological illustration depicting cells that navigate in a distribution of fibers; (b) input sketch inspired by (a); (c) result.
190
+
191
+ In addition, several input shape-patterns can be interactively combined to create a hybrid one, as follows. Thanks to the Support Structure Hierarchy, the user can select the desired level of the hierarchy from two different input shape-patterns, and combine them to create a hybrid result. We rely on the fact that our Support Structure Hierarchy encodes the input data into structures that are defined in the local frames of the ribbons of their parent structures, themselves characterized by a main direction and a width. Therefore, consistent patterns can be generated while the input shapes, fibers, fiber medians or lead directions are exchanged. Such a hybridization is shown in Fig. 13.
192
+
193
+ ![01963e67-9d54-7013-80b9-8160868f18e2_6_188_159_640_415_0.jpg](images/01963e67-9d54-7013-80b9-8160868f18e2_6_188_159_640_415_0.jpg)
194
+
195
+ Figure 13: Hybridization example, where two input shape-patterns (left) are combined to create a new result (right).
196
+
197
+ ### 6.2 Comparison with previous work
198
+
199
+ We compared our results with both distribution-based and deep-learning-based methods for generating vector textures from examples. Since most classical methods are limited to distributions of bounded shapes, we restricted the comparison to this sub-case (see Fig. 14). Since our results seemed closest to those of the best classical method, Landes et al. [9], we selected this method for further comparison in our user study (see below).
200
+
201
+ ![01963e67-9d54-7013-80b9-8160868f18e2_6_182_1009_653_821_0.jpg](images/01963e67-9d54-7013-80b9-8160868f18e2_6_182_1009_653_821_0.jpg)
202
+
203
+ Figure 14: Comparison with distribution-based methods: (a) image input; (b) [1]; (c) [7]; (d) [6]; (e) [12]; (f) [9]; (g) our corresponding sketched input; (h) our result.
204
+
205
+ We also tried our method on examples presented as failure cases in previous papers, such as Fig. 11 (a failure case of [3]) and Fig. 15 (a failure case of [2]). In both cases, our solution was robust and managed to maintain the regularity of the structured input for both bounded and unbounded shapes.
206
+
207
+ ![01963e67-9d54-7013-80b9-8160868f18e2_6_961_158_646_284_0.jpg](images/01963e67-9d54-7013-80b9-8160868f18e2_6_961_158_646_284_0.jpg)
208
+
209
+ Figure 15: Challenging structured distributions: (a) input; (b) sketched representation of the input; (c) result of [2]; (d) ours.
210
+
211
+ Lastly, we compared our results with those of Tu et al. [17], the only deep-learning method tackling point distributions (see Fig. 16). Although our method is interactive and does not require any precomputation stage (in contrast to the hours of training of deep learning methods), the quality of our results looks almost as good (we get more artifacts in the first example, while [17] gets more of them in the second one).
212
+
213
+ ![01963e67-9d54-7013-80b9-8160868f18e2_6_957_823_656_235_0.jpg](images/01963e67-9d54-7013-80b9-8160868f18e2_6_957_823_656_235_0.jpg)
214
+
215
+ ![01963e67-9d54-7013-80b9-8160868f18e2_6_954_1066_659_235_0.jpg](images/01963e67-9d54-7013-80b9-8160868f18e2_6_954_1066_659_235_0.jpg)
216
+
217
+ Figure 16: Comparison with the closest deep learning method [17]: (left) input distribution; (middle) results from Tu et al. [17]; (right) our results.
218
+
219
+ ### 6.3 User study
220
+
221
+ We carefully designed an online user study to validate the perceptual hypotheses presented in Sect. 3.1, as well as the perceived quality of the extended textures we generate (see our supplemental document for screenshots and detailed results).
222
+
223
+ The study was conducted with 35 users, aged from 19 to 61, including 22 men, 9 women, and 4 of unspecified gender. 14 had intermediate or expert experience in digital design, and 9 in traditional design. The study was composed of two parts: an interactive drawing session and a comparison session. In the drawing session, users were asked to manually draw an extended texture from a given input pattern. In the comparison session, users were asked to select the result closest to a given 2D input. Each experiment lasted around ten minutes, most of which was spent in the drawing session.
224
+
225
+ Among the guidelines to validate, ${H1}$ (groupings and alignments are meaningful) was validated by the drawing session, where 97% of the users preserved the grouping of fiber-like shapes and 76% of the users respected the anisotropy directions of bounded shapes in their drawings. ${H2}$ (repetitiveness is explicit) was validated by most users during the comparison session, and was also observed in the users' drawings, such as those of Fig. 17. ${H3}$ (non-overlapping shapes should remain disjoint) was validated as well by the users' drawings: 73% of the drawings remained overlap-free when the input shapes did not overlap.
226
+
227
+ ![01963e67-9d54-7013-80b9-8160868f18e2_7_152_147_721_190_0.jpg](images/01963e67-9d54-7013-80b9-8160868f18e2_7_152_147_721_190_0.jpg)
228
+
229
+ Figure 17: (Left) An example of input for the drawing session; (right) examples of sketches created by different users.
230
+
231
+ As part of the comparison session, users were asked to choose between our extended textures and those generated by Landes et al. [9] (shown in random order, and using the same shape depiction), for the ants and balloons examples of Fig. 14. Respectively ${86}\%$ and ${77}\%$ of users preferred our results. We attribute these unexpectedly good results to the fact that we keep the exact input pattern at the center of the generated texture, while seamlessly extending it sideways.
232
+
233
+ ### 6.4 Performance
234
+
235
+ The following table was computed using the Google Chrome runtime performance tool on an Intel(R) Core(TM) i7-7920HQ CPU at ${3.10}\mathrm{{GHz}}$. The second column gives the number of points in the input example; the last two columns give the analysis and synthesis times in milliseconds. Note that the synthesis has been performed with a ratio of $k = 3$. As can be observed, the overall computation time remains a fraction of a second.
236
+
237
+ | Example | #Points | Analysis (ms) | Synthesis (ms) |
+ | --- | --- | --- | --- |
+ | Fishes (Fig. 1) | 7699 | 73 | 111 |
+ | Biology (Fig. 12) | 3094 | 27 | 86 |
+ | Ants (Fig. 14 top) | 9447 | 134 | 233 |
+ | Balloon (Fig. 14 bottom) | 4034 | 42 | 68 |
+ | Trunks (Fig. 15) | 3164 | 68 | 83 |
238
+
239
+ ### 6.5 Discussion and limitations
240
+
241
+ The specificity of our method compared to previous work is that it does not require any neighborhood matching at the synthesis stage, given that our hierarchical representation already captures correlations. This leads to real-time performance, suitable for our application context.
242
+
243
+ However, since the notion of anisotropy is central to our method, it is unsuitable for synthesizing isotropic distributions: computing meaningful fiber directions then becomes more difficult, which prevents the extraction of a structural hierarchy. This is the main limitation of our framework. In an authoring tool, our solution should therefore be complemented by a previous method handling isotropic distributions.
244
+
245
+ A useful extension would be to give the user the possibility of choosing among different perceptual hypotheses, for instance regarding explicit repetitiveness, which may not always be desired, or to enable overlaps between bounded and unbounded shapes. For instance, in the biology illustration of Fig. 12, the cells depicted in pink/red should remain attached to the underlying fibers, which is not the case in our solution. Indeed, we never cluster bounded strokes with unbounded strokes, even if they overlap. This could easily be added as an option. Generating waving curves rather than having unbounded strokes intersect is a design choice, which could also be disabled by the user if necessary.
246
+
247
+ Fig. 18 presents a failure case for our solution, where bounding boxes around strokes overlap although these strokes should not be grouped. To solve this problem, we could allow the choice between bounding-box-based distances and centroid-based distances as clustering criteria for bounded strokes. This would facilitate the processing of dense distributions.
248
+
249
+ As a last limitation, we do not consider branching curves, which could be another kind of unbounded element in the input texture pattern. Since it processes the input strokes one by one, our current method would not capture the branching: the input would be split into isolated curves, which would probably intersect when repeated in the output. In addition, even if grouping were forced, our current way of representing unbounded elements using a single linear ribbon of given width would fail. Therefore, seamlessly extending patterns that include such branching structures remains an open problem.
250
+
251
+ ![01963e67-9d54-7013-80b9-8160868f18e2_7_1032_534_502_251_0.jpg](images/01963e67-9d54-7013-80b9-8160868f18e2_7_1032_534_502_251_0.jpg)
252
+
253
+ Figure 18: Input example from [12], where our current stroke clustering method fails.
254
+
255
+ ## 7 CONCLUSION
256
+
257
+ Motivated by the interactive design of vector textures, we presented a multi-scale method to efficiently extract anisotropic properties from an input pattern, and seamlessly extend it to a larger $2\mathrm{D}$ domain. While our method runs in real time, the visual quality of our results compares well with that of state-of-the-art vector texture generation methods, including those requiring higher computational time and/or training data to learn from.
258
+
259
+ The new Support Structure Hierarchy we introduced is crucial to our method. Extracted at the analysis stage based on a new perceived distance between the salient anisotropic structures within the input domain, it allows us to capture and reproduce multi-scale structures in an efficient way, while maintaining a good level of visual diversity in the synthesized distribution of shapes. In terms of the interface, our system can be used to quickly design new vector textures by interactively creating new patterns or combining existing ones.
260
+
261
+ **Future work.** While our solution is well suited to most structured shape-patterns, our use of linear ribbon-like shapes to capture multi-scale anisotropy prevents us from handling more complex, branching structures. Addressing this specific case would be an interesting avenue for future work. In addition, a challenging open problem would be to generate a $3\mathrm{D}$ texture from the $2\mathrm{D}$ exemplar interactively sketched by the user. In cases such as biological illustrations, this would enable users to navigate in a 3D structure created from the sketch, leading to a better understanding of the depicted environment.
262
+
263
+ ## REFERENCES
264
+
265
+ [1] P. Barla, S. Breslav, J. Thollot, F. Sillion, and L. Markosian. Stroke Pattern Analysis and Synthesis. Computer Graphics Forum, 25(3):663-671, 2006. doi: 10.1111/j.1467-8659.2006.00986.x
266
+
267
+ [2] T. Davison, F. Samavati, and C. Jacob. Interactive example-palettes for discrete element texture synthesis. Computers & Graphics, 78:23-36, 2019. doi: 10.1016/j.cag.2018.10.016
268
+
269
+ [3] P. Ecormier-Nocca, P. Memari, J. Gain, and M.-P. Cani. Accurate Synthesis of Multi-Class Disk Distributions. Computer Graphics Forum, 38(2):157-168, 2019. doi: 10.1111/cgf.13627
270
+
271
+ [4] N. Fish, L. Perry, A. Bermano, and D. Cohen-Or. Sketchpatch: Sketch stylization via seamless patch-level synthesis. ACM Trans. Graph., 39(6), nov 2020. doi: 10.1145/3414685.3417816
272
+
273
+ [5] L. Gieseke, P. Asente, R. Měch, B. Benes, and M. Fuchs. A survey of control mechanisms for creative pattern generation. In Computer Graphics Forum, vol. 40, pp. 585-609. Wiley Online Library, 2021.
274
+
275
+ [6] T. Hurtut, P.-E. Landes, J. Thollot, Y. Gousseau, R. Drouillhet, and J.-F. Coeurjolly. Appearance-Guided Synthesis of Element Arrangements by Example. In Proc. Symposium on Non-Photorealistic Animation and Rendering, NPAR '09, p. 51-60, 2009. doi: 10.1145/1572614.1572623
276
+
277
+ [7] T. Ijiri, R. Měch, T. Igarashi, and G. Miller. An Example-based Procedural System for Element Arrangement. Computer Graphics Forum, 27(2):429-436, 2008. doi: 10.1111/j.1467-8659.2008.01140.x
278
+
279
+ [8] R. H. Kazi, T. Igarashi, S. Zhao, and R. Davis. Vignette: Interactive Texture Design and Manipulation with Freeform Gestures for Pen-and-Ink Illustration. In Proc. SIGCHI Conference on Human Factors in Computing Systems, CHI '12, p. 1727-1736, 2012. doi: 10.1145/2207676.2208302
280
+
281
+ [9] P.-E. Landes, B. Galerne, and T. Hurtut. A shape-aware model for discrete texture synthesis. Computer Graphics Forum, 32(4):67-76, 2013. doi: 10.1111/cgf.12152
282
+
283
+ [10] T. Leimkühler, G. Singh, K. Myszkowski, H.-P. Seidel, and T. Ritschel. Deep Point Correlation Design. ACM Trans. Graph., 38(6), Nov. 2019. doi: 10.1145/3355089.3356562
284
+
285
+ [11] C. Ma, L.-Y. Wei, S. Lefebvre, and X. Tong. Dynamic Element Textures. ACM Trans. Graph., 32(4), July 2013. doi: 10.1145/2461912.2461921
286
+
287
+ [12] C. Ma, L.-Y. Wei, and X. Tong. Discrete Element Textures. ACM Trans. Graph., 30(4), July 2011. doi: 10.1145/2010324.1964957
288
+
289
+ [13] A. C. Öztireli and M. Gross. Analysis and Synthesis of Point Distributions Based on Pair Correlation. ACM Trans. Graph., 31(6), Nov. 2012. doi: 10.1145/2366145.2366189
290
+
291
+ [14] P. Reddy, P. Guerrero, M. Fisher, W. Li, and N. J. Mitra. Discovering pattern structure using differentiable compositing. ACM Trans. Graph., 39(6), Nov. 2020. doi: 10.1145/3414685.3417830
292
+
293
+ [15] R. Roveri, A. C. Öztireli, S. Martin, B. Solenthaler, and M. Gross. Example Based Repetitive Structure Synthesis. Computer Graphics Forum, 34(5):39-52, 2015. doi: 10.1111/cgf.12695
294
+
295
+ [16] O. Sendik and D. Cohen-Or. Deep Correlations for Texture Synthesis. ACM Trans. Graph., 36(5), July 2017. doi: 10.1145/3015461
296
+
297
+ [17] P. Tu, D. Lischinski, and H. Huang. Point Pattern Synthesis via Irregular Convolution. Computer Graphics Forum, 38(5):109-122, 2019. doi: 10.1111/cgf.13793
298
+
299
+ [18] L.-Y. Wei, S. Lefebvre, V. Kwatra, and G. Turk. State of the Art in Example-based Texture Synthesis. In Eurographics 2009 - State of the Art Reports, 2009. doi: 10.2312/egst.20091063
300
+
301
+ [19] J. Xing, H.-T. Chen, and L.-Y. Wei. Autocomplete Painting Repetitions. ACM Trans. Graph., 33(6), Nov. 2014. doi: 10.1145/2661229.2661247
302
+
303
+ [20] Y. Zhou, H. Huang, L.-Y. Wei, and R. Wang. Point Sampling with General Noise Spectrum. ACM Trans. Graph., 31(4), July 2012. doi: 10.1145/2185520.2185572
304
+
305
+ [21] Y. Zhou, Z. Zhu, X. Bai, D. Lischinski, D. Cohen-Or, and H. Huang. Non-Stationary Texture Synthesis by Adversarial Expansion. ACM Trans. Graph., 37(4), July 2018. doi: 10.1145/3197517.3201285
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/ShGxRxFV6Mq/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,280 @@
1
+ § STRUCTURED SHAPE-PATTERNS FROM A SKETCH: A MULTI-SCALE APPROACH
2
+
3
+ Submission Number: 40
4
+
5
+ § ABSTRACT
6
+
7
+ Structured 2D patterns formed by the anisotropic distribution of arbitrary shapes are ubiquitous in nature and man-made environments. They may include both bounded and unbounded (extended fiber-like) shapes. In this work, we address the problem of interactively generating such patterns from a single exemplar, sketched by a user. We build our solution on a new data structure, the Support Structure Hierarchy, computed from a multi-resolution analysis of the input exemplar, that encodes the main anisotropy directions at different scales as well as deviations from them. We propose an efficient method based on this structure to synthesize a similar distribution of shapes in an extended 2D domain. The user may also choose to hybridize several input exemplars, by combining structural shapes extracted at different scales. As shown through a user study, our multi-scale solution generates structured shape-patterns that perceptually compete with state-of-the-art methods, whether learning-based or not. Moreover, our interactive solution, which does not require any precomputation, matches the needs of an interactive authoring tool, where the user can not only sketch and extend 2D vector textures but also combine them seamlessly.
8
+
9
+ Index Terms: Computing methodologies-Computer Graphics-Graphics systems and interface-Texturing.
10
+
11
+ § 1 INTRODUCTION
12
+
13
+ From fibers and cellular organisms at microscopic scales to seaweeds, schools of fishes, human queues, and alignments of trees or buildings at a larger scale, anisotropic distributions of shapes are ubiquitous in nature and man-made environments. Moreover, such structured shape-patterns have been used extensively in 2D for decorative purposes, from mosaics and wallpapers to the distribution of windows and architectural decorations on building facades. The perceived structure emerges from the anisotropy of these distributions of shapes. In particular, the specific ranges and variances of perceived orientations, both in terms of salient shapes and alignments, convey their unique visual appearance. This work explores the synthesis of such structured 2D shape-patterns from a sketch.
14
+
15
+ Example-based texture synthesis has already been extensively studied. However, existing methods have mostly focused on point distributions. They have achieved statistical accuracy using noise models, continuous representations of discrete distributions such as pair-correlation or probability-density functions, or neighborhood metrics and energy optimization. The few methods tackling anisotropic distributions of shapes have used multiple point samples or proxy geometries to achieve the analysis and synthesis of structured patterns. To the best of our knowledge, none of them have tackled the case of anisotropic shape-patterns that may include both bounded and unbounded (fiber-like) shapes. While the use of deep learning may be a promising alternative, it requires large training databases and long precomputation (learning) times, which has limited its use so far in interactive design scenarios.
16
+
17
+ This work tackles the interactive, sketch-based design of anisotropic distributions of shapes in 2D. Given any sketched pattern (a distribution of simple bounded shapes, and/or fiber-like unbounded shapes), our method efficiently synthesizes a perceptually similar, consistent, and non-repetitive distribution of shapes in an extended 2D domain. Note that the input pattern is fully preserved at the synthesis stage. Indeed, contrary to previous methods, the input becomes the central part of the extended texture, while being seamlessly integrated into its larger surrounding. Our solution increases user control and also seems to improve the perceived similarity of the results. One such result generated by our interactive system is shown in Fig. 1 with its interface.
18
+
19
+ < g r a p h i c s >
20
+
21
+ Figure 1: Based on a few perceptual and depiction hypotheses, our method extends an input sketch (bottom left) into a larger vector texture (right). Both bounded (individual fishes) and unbounded shapes (wavy lines) are seamlessly handled. The simple interface (top left) is quick to learn and easy to use.
22
+
23
+ Real-time analysis and synthesis of distributions require an efficient representation, encoding both local and global correlations between shapes. Our first insight is to introduce a compact encoding for anisotropic distributions, called the Support Structure Hierarchy, where individual supporting structures are lead directions of alignments or line skeletons computed from user strokes, all computed at various scales. This representation leads to a particularly simple and efficient multi-scale analysis of the distributions of orientations in the input sketch. It also enables efficient domain extension.
24
+
25
+ The main challenge at the synthesis stage remains to understand user expectations and the required criteria for perceptual similarity. The (new) case of fiber-like shapes is particularly challenging, because extending fibers that are disjoint in the input exemplar may generate intersections in the extended domain. This could strongly affect our perception of the output as looking different from the input. To support our insights, we formulate a set of perceptual hypotheses to drive our synthesis solution; they were later validated through a user study. In particular, our solution interprets non-intersecting fiber-like strokes as curves that could slightly bend to prevent intersection in the extended domain.
26
+
27
+ Thanks to its efficiency, we integrated our solution in an interactive authoring tool, where users can progressively test and refine their designs. They can generate a wider variety of vector textures by interactively hybridizing features extracted from several input exemplars, e.g., combining shapes from an exemplar with larger-scale alignments from another exemplar.
28
+
29
+ In summary, the contributions of our work are threefold, as we introduce:
30
+
31
+ * a fine-to-coarse analysis method that hierarchically clusters user strokes into a Support Structure Hierarchy, based on a new "perceived distance" between line-segments within a domain, depending on both their position and orientation;
32
+
33
+ * a coarse-to-fine synthesis method that extends the pattern around the input exemplar, based on the extracted hierarchy and on a set of perceptual hypotheses validated by a user study;
34
+
35
+ * an interactive authoring tool, enabling both domain extension and hybridization of structured shape-patterns.
36
+
37
+ § 2 RELATED WORK
38
+
39
+ This work addresses 2D sketch-based synthesis of anisotropic discrete distributions. It is related to example-based synthesis that aims at generating an output that minimizes some statistical or perceptual distance with the input, while avoiding artifacts such as salient repetitiveness. We focus below on distributions of 2D shapes, i.e., vector textures formed by arrangements of discrete 2D shapes, and also discuss recent alternatives based on deep learning. We refer the reader to [5, 18] for more general surveys.
40
+
41
+ Discrete vector textures (or shape-patterns) have been generated by analyzing distributions of the centroids of individual shapes, and then applying a two-stage synthesis: new centroids first, followed by the creation of the associated shapes. The pioneering work of Barla et al. [1] aims at synthesizing stroke patterns. Their method computes a Delaunay triangulation from the centroids to retrieve the connectivity of the input distribution. During synthesis, they rely on a Lloyd relaxation and some perturbation to generate a new set of points from which the shapes are recovered. In the same mindset, Ijiri et al. [7] explore local growth processes before the relaxation process. These two methods are, however, limited to quasi-uniform distributions and to 1-ring neighborhoods, respectively. To manage more general shape distributions, Hurtut et al. [6] define the input distribution as a combination of Gibbs point processes, from which they generate a new arrangement using Monte-Carlo chains. However, all of these methods are unable to analyze and synthesize structured inputs. In particular, they cannot tackle anisotropic distributions of shapes such as elongated ones, nor analyze correlations between shapes, orientations, and spatial alignments.
42
+
43
+ Instead of using a single centroid point, Ma et al. [12] characterize each input shape by several sample points. They rely on a neighborhood metric and an energy optimization process to insert individual shapes in a predefined output domain. Their approach has been extended to dynamic textures [11] and stroke auto-completion [19], and adapted to other texture workflows [2, 8]. While the use of multiple sample points has also been extended to distribution synthesis of arbitrary shapes, these methods only address bounded shapes (as opposed to unbounded shapes) and require post-processing to avoid inter-penetrations at the synthesis stage, which precludes their use in real time.
44
+
45
+ Rather than sampling the input shapes, Landes et al. [9] propose to simplify them into proxy geometries. They introduce a spatial relationship measure that takes into account space between pairs of shapes and their relative orientations. By extending the stochastic models to point distributions [13, 20], their synthesis method successfully maintains distributions of distances and relative orientations of shapes. Although it handles anisotropic distributions, their method does not meet our goals, as it does not offer real-time performance and is limited to distributions of bounded shapes.
46
+
47
+ In contrast, Roveri et al. [15] present the first example-based distribution synthesis method applicable to both bounded and unbounded shapes. Regardless of their dimension, shapes are decomposed into point samples that are encoded in a functional representation. A similarity measure is defined in the associated functional space to quantify similarity between input and output. Synthesis is achieved in a few minutes through neighborhood matching and energy optimization. Like most other neighborhood-based texture synthesis methods, theirs requires input patterns with enough repetitions to avoid bad local minima in the optimization, which would distort the synthesized structures. Moreover, contrary to our method, the use of a fixed neighborhood size prevents their method from capturing repetitive structures at different scales.
48
+
49
+ Deep learning methods have recently been applied to texture synthesis [4, 10, 14, 16, 17, 21]. They show promising results for capturing, at least partially, local and global correlations present in an input exemplar. In particular, Fish et al. [4] enable sketch stylization via the transfer of geometric textural details from different images; this is related to our secondary goal of pattern hybridization. However, most of these frameworks are image-based and do not extend well to the discrete distributions of vector shapes, which is the scope of our work. The method of Tu et al. [17] is closest to our goal of handling vector shape distributions; it characterizes point patterns via a trained VGG network. Our method, based on a simpler but efficient analysis stage, has the advantage of requiring no time-consuming precomputations (which are inherent to deep-learning techniques), while achieving real-time processing of any newly-created input.
50
+
51
+ § 3 OVERVIEW
52
+
53
+ § 3.1 HYPOTHESES ON DEPICTION & PERCEPTION
54
+
55
+ Extending a sketched pattern in a perceptually similar way requires making some hypotheses about user depiction and perception of the resulting pattern. Our key hypothesis, common to most sketch-based modeling systems, is that users see their input as a general view of the distribution they want to create. Therefore, the input is supposed to include all the necessary information, in a perceptually representative way. This led us to three design hypotheses:
56
+
57
+ ${H1}$ : Groupings and alignments are meaningful: All alignments and groupings are intentional.
58
+
59
+ ${H2}$ : Repetitiveness is explicit: All the shapes that a user wants to see repeated in the output, are repeated in the input.
60
+
61
+ ${H3}$ : Non-overlapping shapes should remain disjoint: Shapes that do not overlap in the input should not overlap in the output.
62
+
63
+ These three hypotheses are used as guidelines for our method at the design stage, and then validated by a user study (see Sect. 6).
64
+
65
+ § 3.2 CREATION AND PREPROCESSING OF AN INPUT SKETCH
66
+
67
+ During a sketching session, the user successively draws strokes of any color in a square representing our 2D Input Space (IS). See Fig. 1, left. Two different pens are provided to denote bounded and unbounded strokes. The former are limited to the dimensions of ${IS}$, while the latter are interpreted as extending beyond the input domain, either in both directions if both extremities reach the border of ${IS}$, or in a single direction (in case an unbounded stroke does not reach any border of ${IS}$, we add a segment to connect it to the closest border). The data stored for each stroke are a list of points, a color, a thickness, a type (bounded or unbounded), and a principal direction computed on the fly from a Principal Component Analysis (PCA) of the coordinates of all points of the stroke.
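+
+ As an illustration (not the prototype's WebGL code), the principal direction of a stroke can be obtained from the eigenvectors of the covariance of its points:
+
+ ```python
+ import numpy as np
+
+ def principal_direction(points: np.ndarray) -> np.ndarray:
+     """Unit vector along the main axis of an (n, 2) array of stroke points."""
+     centered = points - points.mean(axis=0)
+     cov = centered.T @ centered              # 2x2 covariance (up to a 1/n factor)
+     eigvals, eigvecs = np.linalg.eigh(cov)   # symmetric matrix, ascending order
+     return eigvecs[:, -1]                    # eigenvector of the largest eigenvalue
+
+ # Example: a noisy diagonal stroke
+ pts = np.array([[0.0, 0.0], [1.0, 1.1], [2.0, 1.9], [3.0, 3.05]])
+ print(principal_direction(pts))              # roughly +/-[0.70, 0.71]
+ ```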
68
+
69
+ The user may sketch the input pattern in any order. As several strokes can be used to represent a shape, we provide an automatic clustering mechanism, presented next, to identify shapes at the beginning of the analysis stage.
70
+
71
+ < g r a p h i c s >
72
+
73
+ Figure 2: Processing pipeline for the fine-to-coarse analysis of a sketch into a Support Structure Hierarchy.
74
+
75
+ § 3.3 PROCESSING PIPELINE
76
+
77
+ Multi-scale analysis: Based on Hypothesis ${H1}$, the analysis stage consists in iteratively extracting a fine-to-coarse hierarchy of support structures (the Support Structure Hierarchy) from the input strokes, according to alignments and multi-scale repetitions in the input (see Fig. 2). We first cluster bounded strokes into shapes composed of one to several strokes, while each unbounded stroke is considered an individual shape (Level 0). Note that colors are not used in the clustering, enabling the use of several different colors in a given shape. Bounded shapes are then simplified either into a central point or a support segment depending on their degree of anisotropy (Fig. 2c). Central points and support segments are clustered according to both orientation and position to find alignments, and then grouped into fibers (Fig. 2d), forming the Level 1 of the Support Structure Hierarchy. Other fibers are directly extracted from the unbounded strokes (Fig. 2c'). To capture repetitions at a larger scale, fibers of similar orientation are clustered into fiber medians (Level 2, Fig. 2f), which are finally grouped into lead directions (Level 3, Fig. 2g). During this hierarchical clustering and simplification process, the input domain ${IS}$ is progressively partitioned into a hierarchy of ribbons that express the variability of position of each substructure around its parent structure. This partitioning will be used to allow an adequate degree of variability while avoiding unwanted overlaps at the synthesis stage. See Section 4 for details.
78
+
79
+ Synthesis stage: Unlike most existing approaches, our method to synthesize distributions consists in directly replicating the local and global correlations between the input shapes, encoded by our Support Structure Hierarchy. To avoid exact repetitions, this is done by instantiating each structure from top to bottom of the hierarchy while perturbing their positions within adequate allowed areas. These areas are computed so as to prevent overlaps between strokes belonging to the same lead direction, and at a low cost, since no further overlap detection will be required.
80
+
81
+ Structures at the top of the hierarchy are first extended to the user-selected larger $2\mathrm{D}$ domain, defined as a radial extension of ratio $k > 1$ of ${IS}$. The support hierarchy is then traversed top-down to the individual strokes. At each level, the repetitive structures are repeated within the larger domain, in order to generate the extended structured pattern. This is done in accordance with our design guidelines: a shape that only appears once in the input (such as the vertical seaweed in Fig. 1) will be extended at its extremities in case of an unbounded stroke, but will not be repeated (consistency with ${H2}$). Moreover, at each level of the hierarchy, allowed areas within ribbons are used to guide the synthesis of substructures while preventing unwanted overlaps (consistency with ${H3}$). Note that curving some of the supporting structures is necessary to avoid undesired overlaps in the extended domain, as illustrated by the three green waves that do not overlap with the two blue waves in Fig. 1, right. This process, an original step of our solution justified by our perceptual guidelines, will be detailed in Section 5.
82
+
83
+ Interactivity and hybridization. Thanks to its real-time performance, our method not only allows users to sketch and extend a given shape-pattern, but also to return to the sketching interface to iteratively improve their input. In our authoring system, all identified shapes are recorded in a shapes database, enabling the user to refine the input by adjusting their position, or to reuse them later for another design. Hierarchical structures extracted from the analysis stages of different inputs can also be combined to create a different design, a process called hybridization (see Section 6).
84
+
85
+ § 4 FINE-TO-COARSE ANALYSIS
86
+
87
+ § 4.1 LEVEL 0: FROM STROKES TO SHAPES
88
+
89
+ As illustrated in Fig. 2b, b', the bounded and unbounded strokes in the input are analyzed separately, to extract supporting lines that will then be processed in a combined manner.
90
+
91
+ We consider the unbounded strokes as individual unbounded shapes. In contrast, we extract bounded shapes by clustering the input bounded strokes as follows: we compute the oriented bounding box of each bounded stroke and group these boxes according to their pair-wise distances. We then associate the resulting bounded shapes to a single central point or support segment, according to an anisotropy threshold. The resulting set of support segments and central points is the first simplification of the input, efficiently encoding the principal directions and approximate positions of the bounded shapes.
92
+
93
+ § 4.2 LEVEL 1: FROM SHAPES TO FIBERS AND THEIR RIBBONS
94
+
95
+ We approximate each unbounded shape with a line, called fiber, that best matches its principal direction and position. This support line, augmented with a perpendicular thickness to cover the whole shape, is called a ribbon. For bounded shapes, finding such fibers and ribbons requires analyzing anisotropic information such as alignments. We retrieve the support lines of the support segments and cluster them using the Mean Shift algorithm. We then compute a central fiber within each cluster. Central points are first clustered by position, before using Principal Component Analysis to compute their main directions of alignment. Representative fibers are defined from the centroid of each cluster and these principal directions. Thicknesses are computed for each of these fibers, so that the corresponding ribbon fully covers the shapes associated with the clustered points or segments.
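+
+ A minimal sketch of this clustering step, assuming scikit-learn's MeanShift and representing each support line by an (angle mod pi, offset) feature vector; this feature choice is ours, for illustration, and note that clustering raw angles ignores the wrap-around at pi, which a real implementation would need to handle.
+
+ ```python
+ import numpy as np
+ from sklearn.cluster import MeanShift
+
+ # Each support line as (angle mod pi, offset); values are illustrative.
+ lines = np.array([
+     [0.10, 0.20], [0.12, 0.30], [0.11, 0.25],   # one near-horizontal family
+     [1.55, 0.80], [1.57, 0.85],                 # one near-vertical family
+ ])
+ ms = MeanShift(bandwidth=0.3).fit(lines)
+ print(ms.labels_)            # e.g. [0 0 0 1 1], two families of lines
+ print(ms.cluster_centers_)   # one representative (angle, offset) per cluster
+ ```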
96
+
97
+ § 4.3 LEVEL 2: FROM FIBERS TO FIBER MEDIANS
98
+
99
+ Fibers with similar orientations and close positions are grouped at this stage. Since we focus on anisotropic distributions, we prioritize the orientations of fibers over their positions. We first compute the histogram of fiber orientations to group those that belong to the same anisotropic distribution. We then refine each cluster using a specific perceived distance, which we define as the minimum distance between the fibers' intersection points with the domain contour (see Fig. 3). This distance takes into account both the position and the orientation of the lines: the more parallel and the closer in position two lines are, the smaller the distance. Each resulting sub-cluster is stored as a fiber median, defined as the mean, in both orientation and position, of the clustered fibers' parameters (see Fig. 2f). We also store the circular standard deviation associated with each fiber median for later use at the synthesis stage.
100
+
101
+ < g r a p h i c s >
102
+
103
+ Figure 3: We compute the "perceived distance" between two fibers in a normalized input domain. It is defined as the minimal distance between their intersection points on any of the lines bordering the domain $\left( {X = 0,X = 1,Y = 0,Y = 1}\right)$ , which is extremely fast to compute (for each fiber, only the 4 values ${y}_{X = 0},{y}_{X = 1},{x}_{Y = 0},{x}_{Y = 1}$ are needed). This distance accounts for both position and orientation, and is defined even if the lines intersect in the domain. Here, $d\left( {{L1},{L2}}\right) < d\left( {{L2},{L3}}\right)$ , which matches our perception.
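+
+ Following the caption's definition, the perceived distance can be computed from the four border intersections of each fiber. The sketch below is ours, not the authors' code; it skips borders that a fiber is parallel to.
+
+ ```python
+ import numpy as np
+
+ def border_hits(p, d):
+     """Intersections of the line p + t*d with the border lines X=0, X=1,
+     Y=0, Y=1 of the normalized domain; np.inf where the line is parallel."""
+     px, py = p; dx, dy = d
+     y_at = lambda x: py + (x - px) * dy / dx if abs(dx) > 1e-12 else np.inf
+     x_at = lambda y: px + (y - py) * dx / dy if abs(dy) > 1e-12 else np.inf
+     return np.array([y_at(0.0), y_at(1.0), x_at(0.0), x_at(1.0)])
+
+ def perceived_distance(line_a, line_b):
+     """Min distance between the two fibers' hits on any shared border line."""
+     ha, hb = border_hits(*line_a), border_hits(*line_b)
+     ok = np.isfinite(ha) & np.isfinite(hb)
+     return np.min(np.abs(ha[ok] - hb[ok]))
+
+ # Two near-parallel horizontal lines are "closer" than a horizontal/diagonal pair
+ L1 = (np.array([0.0, 0.2]), np.array([1.0, 0.0]))
+ L2 = (np.array([0.0, 0.3]), np.array([1.0, 0.05]))
+ L3 = (np.array([0.0, 0.0]), np.array([1.0, 1.0]))
+ print(perceived_distance(L1, L2) < perceived_distance(L2, L3))  # True
+ ```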
104
+
105
+ Similarly to the previous hierarchy level, a thickness parameter is associated with each newly created fiber median in order to define an associated ribbon that fully includes the sub-ribbons of the clustered substructures (see Fig. 4).
106
+
107
+ § 4.4 LEVEL 3: FROM FIBER MEDIANS TO LEAD DIRECTIONS
108
+
109
+ The top level of aggregation in our hierarchical analysis aims at clustering similarly oriented fiber medians. We use the same clustering process as at the previous level of the hierarchy. Each cluster is represented by a lead direction, defined using the average of the clustered elements in orientation and position (see Fig. 2g). As at the previous hierarchy level, ribbons are defined by associating a thickness parameter to each lead direction, so as to include the ribbons around the clustered fiber medians. As a result, the input space ${IS}$ is divided into nested ribbon-like structures (see Fig. 4).
110
+
111
+ § 4.5 COMPUTING THE ALLOWED DISPLACEMENT AREAS
112
+
113
+ The last step of the analysis stage consists in computing the available space around each clustered shape or ribbon, within its parent structure in the hierarchy. We call this space the allowed displacement area, since it will be used at the synthesis stage to add random displacements to repeated structures, providing visual diversity while avoiding unwanted overlaps between shapes.
114
+
115
+ Displacement areas for ribbons. Starting at the top of the hierarchy, we recursively decompose each ribbon, using splitting lines that are parallel to its main axis, and evenly split the empty space between neighboring, non-overlapping sub-ribbons (defined as ribbons around one of the clustered sub-structures). The distance between neighboring sub-ribbons (i.e., the minimal distance between their contents), used to position these lines, is computed while considering a toroidal topology for ${IS}$. Based on this distance, two lines are evenly placed between the neighboring ribbons, to define the limits of the extended region of each of them, as well as an empty space between them.
116
+
117
+ This decomposition results in a displacement region around each sub-ribbon, and a given distance, called the gap, between them. Note that since the parent orientation was used for this decomposition, the sub-ribbons generally have a slightly different orientation. Moreover, they are not necessarily centered in the associated displacement region (see Fig. 4a and 4b).
118
+
119
+ Finally, the minimum and maximum gap values are stored in the parent structure, together with the set of displacement areas associated with the sub-ribbons.
120
+
121
+ Displacement areas for bounded shapes. These rectangular regions, depicted using dashed lines in Fig. 4d, represent the areas within the fiber ribbon of a bounded shape in which its bounding box will be allowed to move during instantiation. Their two axes (x, y) respectively correspond to the direction of the associated fiber median and its orthogonal direction. The allowed perturbation along $x$ (tangent to the direction) is set to half of the distance to the next bounding box of a bounded shape, which ensures that overlaps will always be prevented at the synthesis stage. The allowed perturbation along $y$ is set so that the bounding box can cover the whole associated fiber-median ribbon. Again, these computations are done while considering a toroidal topology for the input space ${IS}$. Therefore, the computed displacement areas can expand outside ${IS}$ (see the orange areas in Fig. 4d), which is less restrictive when extending the pattern to a larger domain.
122
+
123
+ § 5 SYNTHESIS OF AN EXTENDED SHAPE-PATTERN
124
+
125
+ To enable a seamless exploration of a larger 2D domain by simply zooming out after sketching, our objective is to keep the user-drawn strokes within ${IS}$ while extending and repeating them in a larger output space ${OS}$ (defined as an expansion of ${IS}$ by a ratio $k > 1$ ). This is done through a coarse-to-fine process in which the elements stored in the Support Structure Hierarchy are extended to ${OS}$ and repeated if necessary.
126
+
127
+ Extension and repetition of lead ribbons: According to ${H2}$ (see Sect. 3.1), lead directions consisting of only one fiber median (such as the vertical lead direction in Fig. 5) should not be repeated. Therefore, we simply extend them, as well as their unbounded child structures, to span the whole ${OS}$.
128
+
129
+ For the remaining lead directions (corresponding to repeated substructures in the input), we perform the same extension to the whole ${OS}$, but also generate new copies of the structure in the remaining space, through an efficient randomized repetition procedure, as follows. For each lead ribbon, we start from a displacement area with a single neighbour, randomly generate a new gap using values in the recorded range, and generate the next displacement area by randomly cloning one of the existing ones (i.e., using the same width). We apply this technique to progressively fill ${OS}$. The randomness in the gap values between displacement areas for sub-ribbons generates different lead ribbon configurations, and therefore different outputs from the same input (see Fig. 5).
130
+
131
+ Fiber-median ribbons will now be generated within the newly extended and repeated lead ribbons, as presented next, at the cost of slightly bending some of them, as well as their child structures, if they happen to overlap when extended to ${OS}$.
132
+
133
+ Repetition of fiber medians and ribbons. For each newly generated displacement area, we synthesize its fiber median by first copying the parameter values of the original ribbon. We then use the circular standard deviation of the medians' orientations computed during the analysis stage (Sect. 4.3) to perturb its orientation. We also perturb the position of its centroid to place it in the middle of the displacement area. While the middle part of each generated ribbon is guaranteed to remain within its lead ribbon, this is not necessarily the case when it extends to ${OS}$, as illustrated in Fig. 6 (left). When this occurs, we slightly bend the ribbon and its fiber median (see Fig. 6 (right)) to make it fit entirely inside its allowed displacement area.
134
+
135
+ < g r a p h i c s >
136
+
137
+ Figure 4: Input domain partitioning: (a) lead ribbons, each between a pair of dashed lines; (b) ribbons (solid lines) around the fiber medians (dashed lines); (c) the sub-ribbons (dashed lines) inside the ribbons (solid lines); (d) the displacement areas, delimited by dashed lines.
138
+
139
+ < g r a p h i c s >
140
+
141
+ Figure 5: (left) Allowed displacement areas between dashed lines, based on lead directions; (right) Randomized repetition and propagation of lead ribbons.
142
+
143
+ < g r a p h i c s >
144
+
145
+ Figure 6: Fiber medians and ribbons repetition in ${OS}$ : (left) without any bending; (right) with slight bending.
146
+
147
+ Avoiding overlaps by bending structures. Inspired by the physical properties of (real) fibers, we make the following assumption: the thinner the ribbon, the more flexible it may be. This can be formalized through the equation $R = \tau w$, relating the curvature radius $R$ to the ribbon width $w$ and a stiffness parameter $\tau \in \mathbb{R}^{+}$.
148
+
149
+ In case of overlap, each ribbon will intersect twice with its lead ribbon. For symmetry reasons, we then bend both sides of the ribbon, even if one of the intersection regions is outside the output domain. To preserve continuity between the original borderlines of the ribbon and their curved version, we consider the midpoints (${M}_{1}$ and ${M}_{2}$, in green in Fig. 7) between a projected point and the other intersection point as inflection points. For each of these inflection points (say $M$), the key idea is to find the arc of circle $C$ that passes through $M$ and remains inside the lead ribbon, as illustrated in Fig. 7. The details of this computation are provided in the associated supplementary material.
150
+
151
+ The same bending process is applied to the child sub-ribbons, in order to fit them inside their parent curved ribbon.
152
+
153
+ < g r a p h i c s >
154
+
155
+ Figure 7: (Top) A ribbon has two intersections (I1 and I2) with its lead ribbon. (Bottom) The ribbon is bent to remain within its lead ribbon.
156
+
157
+ Shape distribution synthesis. The final step is to synthesize new shapes within each extended or newly created fiber ribbon.
158
+
159
+ < g r a p h i c s >
160
+
161
+ Figure 8: Extension of unbounded strokes: (a) curve; (b) arc.
162
+
163
+ a) Unbounded shapes. We define four unbounded stroke categories (lines, rays, arcs and curves) that respectively stand for perfectly linear unbounded strokes, half-lines, unbounded strokes with a single curvature extremum in ${IS}$, and unbounded strokes with more than one curvature extremum. We start by extending these unbounded shapes to ${OS}$ along their fiber direction, which is trivial for lines and rays. Arcs are extended through an alternating mirror duplication that leads to a smooth sinusoidal curve. Curves are first cut at their first and last extrema. Then, we alternately duplicate the mirrored version of the curve segment to extend it to ${OS}$, as illustrated in Fig. 8. These extended strokes are stored in the local frame of their corresponding fiber. They will therefore be automatically repeated and curved if needed through the repetition process of their parent structures in the hierarchy. The resulting curved structures are shown for different sizes of ${OS}$ in Fig. 9.
164
+
165
+ < g r a p h i c s >
166
+
167
+ Figure 9: Variation of lateral ratio (k) for unbounded stroke distribution: (a) input; (b) $k = 3$ ; (c) $k = 5$ ; (d) $k = {10}$ . Note that these results have been scaled to fit in the figure.
168
+
169
+ b) Bounded shapes. We process bounded shapes by first iteratively repeating their representative support segments or central points along their extended fiber, using the previously computed displacement areas to perturb their positions, and drawing the shapes in the resulting local frames. We then reuse their local positions with respect to their fiber to repeat them within the parent fiber-median ribbons, with randomly modified positions within the authorized displacement areas (see Fig. 10).
170
+
171
+ < g r a p h i c s >
172
+
173
+ Figure 10: Synthesis outline: (left) input with ribbons between pairs of dashed lines; (right) shape repetition within the extended and synthesized ribbons.
174
+
175
+ Avoiding residual overlaps: Given that repetitions in different lead directions are computed independently, lead ribbons with different orientations may naturally intersect. This may lead to perceptual artifacts if these lead directions both contain initially non-overlapping bounded shapes: some undesirable overlaps may occur in the output. We use an AABB tree to partition ${OS}$ and efficiently detect overlaps between the displacement areas of bounded shapes. In such cases, we restrict the corresponding displacement areas. If this strategy fails (not enough space to insert a shape), we do not instantiate it (see Fig. 11 (b) for such a challenging example).
176
+
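+ The overlap test itself reduces to axis-aligned box intersection; here is a minimal brute-force sketch (the AABB tree mentioned above only accelerates this same query):
+
+ ```python
+ from dataclasses import dataclass
+
+ @dataclass
+ class AABB:
+     """Axis-aligned bounding box of a shape's displacement area."""
+     xmin: float
+     ymin: float
+     xmax: float
+     ymax: float
+
+     def overlaps(self, other: "AABB") -> bool:
+         # Two boxes overlap iff their extents overlap on both axes.
+         return (self.xmin < other.xmax and other.xmin < self.xmax and
+                 self.ymin < other.ymax and other.ymin < self.ymax)
+
+ def conflicting_pairs(boxes):
+     """Return all pairs of displacement areas that overlap and must
+     therefore be restricted (or their shapes left uninstantiated)."""
+     return [(i, j) for i in range(len(boxes))
+             for j in range(i + 1, len(boxes))
+             if boxes[i].overlaps(boxes[j])]
+ ```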
177
+ § 6 RESULTS AND DISCUSSION
178
+
179
+ § 6.1 INTERACTIVE AUTHORING SYSTEM
180
+
181
+ Our prototype system is implemented in WebGL. Creating and extending highly structured vector patterns is made easy by our method, as shown in Fig. 11. In addition to the main sketching and texture expansion interface, the user may store and reuse complex shapes, such as the two categories of fish in Fig. 1. The use of our system for creating a complex sketch, inspired by biology, is illustrated in Fig. 12.
182
+
183
184
+
185
+ Figure 11: Our synthesis method maintains the perceived regularity of structured distributions (known to be hard to handle) in both cases of unbounded and bounded strokes.
186
+
187
188
+
189
+ Figure 12: (a) Biological illustration depicting cells that navigate in a distribution of fibers; (b) input sketch inspired by (a); (c) result.
190
+
191
+ In addition, several input shape-patterns can be interactively combined to create a hybrid one: thanks to the Support Structure Hierarchy, the user can select the desired level of the hierarchy from two different input shape-patterns and combine them to create a hybrid result. We rely on the fact that our Support Structure Hierarchy encodes the input data into structures defined in the local frames of their upper-structure ribbons, themselves characterized by a main direction and a width. Therefore, consistent patterns can be generated while the input shapes, fibers, fiber-medians or lead directions are changed. Such a hybridization is shown in Fig. 13.
192
+
193
194
+
195
+ Figure 13: Hybridization example, where two input shape-patterns (left) are combined to create a new result (right).
196
+
197
+ § 6.2 COMPARISON WITH PREVIOUS WORK
198
+
199
+ We compared our results with both distribution-based and deep-learning-based methods for generating vector textures from examples. Since most classical methods are limited to distributions of bounded shapes, we restricted the comparison to this sub-case (see Fig. 14). Because our results seemed close to those of the best classical method, Landes et al. [9], we selected this method for further comparison in our user study (see below).
200
+
201
202
+
203
+ Figure 14: Comparison with distribution-based methods: (a) image input; (b) [1]; (c) [7]; (d) [6]; (e) [12]; (f) [9]; (g) our corresponding sketched input; (h) our result.
204
+
205
+ We also tried our method on examples presented as failure cases in previous papers, such as Fig. 11 (a failure case of [3]) and Fig. 15 (a failure case of [2]). In both cases, our solution was robust and managed to maintain the regularity of the structured input for both bounded and unbounded shapes.
206
+
207
208
+
209
+ Figure 15: Challenging structured distributions: (a) input; (b) sketched representation of the input; (c) result of [2]; (d) ours.
210
+
211
+ Lastly, we compared our results with those of Tu et al. [17], the only deep-learning method tackling point distributions (see Fig. 16). Although our method is interactive and does not require any precomputation stage (in contrast to the hours of training of deep-learning methods), the quality of our results looks almost as good (we get more artifacts in the first example, while [17] gets more of them in the second one).
212
+
213
214
+
215
216
+
217
+ Figure 16: Comparison with the closest deep learning method [17]: (left) input distribution; (middle) results from Tu et al. [17]; (right) our results.
218
+
219
+ § 6.3 USER STUDY
220
+
221
+ We carefully designed an online user study to validate the perceptual hypotheses presented in Sect. 3.1, as well as the perceived quality of the extended textures we generate (see our supplemental document for screenshots and detailed results).
222
+
223
+ The study was conducted with 35 users, aged 19 to 61, including 22 males, 9 females and 4 of unspecified gender. 14 had intermediate or expert experience in digital design and 9 in traditional design. It was composed of two parts: an interactive drawing session and a comparison session. In the drawing session, users were asked to manually draw an extended texture from a given input pattern. In the comparison session, users were asked to select the closest result for a given 2D input. Each experiment lasted around ten minutes, most of it spent in the drawing session.
224
+
225
+ Among the guidelines to validate, H1 (groupings and alignments are meaningful) was validated by the drawing session, where 97% of the users preserved the grouping of fiber-like shapes and 76% respected the anisotropy directions of bounded shapes in their drawings. H2 (repetitiveness is explicit) was validated by most users during the comparison session, and was also observed in the users' drawings, such as those of Fig. 17. H3 (non-overlapping shapes should remain disjoint) was validated as well by the users' drawings, with 73% of drawings remaining overlap-free when the input was.
226
+
227
228
+
229
+ Figure 17: (Left) An example of input for the drawing session; (Right) Examples of sketches created by different users.
230
+
231
+ As part of the comparison session, users were asked to choose between our extended textures and those generated by Landes et al. [9] (shown in random order and using the same shape depiction), for the ants and the balloons examples of Fig. 14. Respectively 86% and 77% of users preferred our results. We attribute these unexpectedly good results to the fact that we keep the exact input pattern at the center of the generated texture, while seamlessly extending it sideways.
232
+
233
+ § 6.4 PERFORMANCE
234
+
235
+ The following table was computed using the Google Chrome runtime performance tool on an Intel(R) Core(TM) i7-7920HQ CPU at 3.10 GHz. The second column gives the number of points in the input example, followed by the times in milliseconds of the analysis and of the synthesis, respectively. Note that the synthesis has been performed with a ratio of $k = 3$. As can be observed, the overall computation time remains well under a second.
236
+
237
+ <table><tr><td>Example</td><td>#Points</td><td>Analysis</td><td>Synthesis</td></tr><tr><td>Fishes (Fig. 1)</td><td>7699</td><td>73ms</td><td>111ms</td></tr><tr><td>Biology (Fig. 12)</td><td>3094</td><td>27ms</td><td>86ms</td></tr><tr><td>Ants (Fig. 14 top)</td><td>9447</td><td>134ms</td><td>233ms</td></tr><tr><td>Balloon (Fig. 14 bottom)</td><td>4034</td><td>42ms</td><td>68ms</td></tr><tr><td>Trunks (Fig. 15)</td><td>3164</td><td>68ms</td><td>83ms</td></tr></table>
257
+
258
+ § 6.5 DISCUSSION AND LIMITATIONS
259
+
260
+ The specificity of our method compared to previous work is that it does not require any neighborhood matching at the synthesis stage, since our hierarchical representation already captures correlations. This leads to real-time performance, suitable for our application context.
261
+
262
+ However, since the notion of anisotropy is central to our method, it is unsuitable for synthesizing isotropic distributions: computing meaningful fiber directions then becomes difficult, which prevents the extraction of a structural hierarchy. This is the main limitation of our framework. In an authoring tool, our solution should therefore be complemented by a previous method handling isotropic distributions.
263
+
264
+ A useful extension would be to let the user choose among different perceptual hypotheses, for instance regarding explicit repetitiveness, which may not always be desired, or to enable overlaps between bounded and unbounded shapes. For instance, in the biology illustration of Fig. 12, the cells depicted in pink/red should remain attached to the underlying fibers, which is not the case in our solution; indeed, we never cluster bounded strokes with unbounded strokes, even if they overlap. This could easily be added as an option. Generating waving curves rather than letting unbounded strokes intersect is a design choice, which could also be disabled by the user if necessary.
265
+
266
+ Fig. 18 presents a failure case for our solution, where bounding boxes around strokes overlap although these strokes should not be grouped. To solve this problem, we could allow a choice between bounding-box-based and centroid-based distances as clustering criteria for bounded strokes. This would facilitate the processing of any dense distribution.
267
+
268
+ As a last limitation, we do not consider branching curves, which could be another kind of unbounded element in the input texture pattern. Since it processes the input strokes one by one, our current method would not capture the branching: the input would be split into isolated curves, which would probably intersect when repeated in the output. In addition, even if grouping were forced, our current way of representing unbounded elements using a single linear ribbon of given width would fail. Therefore, seamlessly extending patterns that include such branching structures remains an open problem.
269
+
270
271
+
272
+ Figure 18: Input example from [12], where our current stroke clustering method fails.
273
+
274
+ § 7 CONCLUSION
275
+
276
+ Motivated by the interactive design of vector textures, we presented a multi-scale method to efficiently extract anisotropic properties from an input pattern and seamlessly extend it to a larger 2D domain. While our method runs in real time, the visual quality of its results compares well with that of state-of-the-art vector texture generation methods, including those requiring higher computational time and/or training data to learn from.
277
+
278
+ The new Support Structure Hierarchy we introduced is crucial to our method. Extracted at the analysis stage based on a new perceived distance between the salient anisotropic structures within the input domain, it allows us to capture and reproduce multi-scale structures efficiently, while maintaining a good level of visual diversity in the synthesized distribution of shapes. In terms of the interface, our system can be used to quickly design new vector textures by interactively creating new patterns or combining existing ones.
279
+
280
+ Future work While our solution is well suited to most structured shape-patterns, our use of linear ribbon-like shapes to capture multi-scale anisotropy prevents us from handling more complex, branching structures. Addressing this specific case would be an interesting avenue for future work. In addition, a challenging open problem would be to generate a 3D texture from the 2D exemplar interactively sketched by the user. In cases such as biological illustrations, this would enable users to navigate in a 3D structure created from the sketch, leading to a better understanding of the depicted environment.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/Uh8fD3uPiv6/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,379 @@
1
+ # Task-based Evaluation of 3D Radial Layouts for Centrality Visualization
2
+
3
+ ## Abstract
4
+
5
+ In this paper we propose improvements to 3D radial layouts that make it possible to visualize centrality measures of nodes in a graph. Our improvements mainly relate to edge drawing and to the evaluation of the 3D radial layouts. First, we project the edges onto the visualization surfaces in order to reduce node overlap. Second, we propose a human-centered evaluation in order to compare the efficiency score, the time to complete tasks and the number of clicks of the 3D radial layouts to those of the 2D radial layouts. The results showed that even if the overall improvements in terms of time or errors are not statistically significant between the various visualization surfaces, the participants felt more comfortable with the 3D layouts, and the user experience in data visualization can therefore be improved.
6
+
7
+ Index Terms: Human-centered computing-3D graph visualization-Centrality visualization-Layout evaluation
8
+
9
+ ## 1 INTRODUCTION
10
+
11
+ Centrality measures are topological measures that describe the importance of the nodes in a graph. A lot of work has been carried out on this topic for network analysis, in order to answer the question "Which are the most important nodes in a graph?" [16, 17]. Other works in graph drawing chose to visually reveal these properties in order to facilitate their exploratory analysis [2, 18]. For example, in graph analytics, some works are interested in understanding and describing the interaction structure by analyzing the topology of the graph [6, 21]. Some others are interested in identifying and characterizing the nodes that are particularly important [27] and how their neighbors are connected to each other [29].
12
+
13
+ However, visualizing these measures in 2D can be difficult when the graph is large in terms of number of nodes and edges. Indeed, there would be a lot of node and edge overlap and many edge crossings, which are less of a problem in 3D than in 2D [26]. Kobina et al. [13] therefore proposed new 3D methods based on the 2D radial layouts that highlight the centrality of the nodes by optimizing their spatial distribution. Nevertheless, in 3D some edges can hide others depending on the position of the observer or on the 3D layout, as can be seen in the methods of Kobina et al. [13], which use straight edges.
14
+
15
+ So, we first propose improvements to the 3D radial layouts by projecting the edges onto the visualization surfaces in order to reduce node overlap. The purpose of our improvements is to provide a better overall view of a complex and large graph than the existing 3D radial techniques, and to reduce the time spent exploring and analyzing such a graph. We then propose a task-based evaluation using a well-known centrality measure in order to compare the efficiency score, the time to complete tasks and the number of clicks of the 3D radial layouts to those of the 2D radial layouts. The evaluation tasks relate to the central nodes, to the peripheral nodes and to the dense areas of a graph. The purpose of our evaluation is to show that the 3D radial methods can be better for exploring and analyzing graphs regardless of the focus of interest, compared to the 2D radial layouts.
16
+
17
+ This paper is structured as follows: in section 2 we recall some notions about centrality measures in graphs. We review related work on centrality visualization in section 3. Then we present our improvements in section 4 and the human-centered evaluation of these improvements in section 5. In section 6 we present the evaluation results, while in section 7 we discuss the various results. In section 8 we present our conclusion, and in section 9 our future work.
18
+
19
+ ## 2 CENTRALITY MEASURES IN GRAPHS
20
+
21
+ In graph analytics, centrality measures [22] characterize the topological position of the nodes in a graph. In other words, centrality measures make it possible to identify important nodes in the graph and further provide relevant analytical information about the graph and its nodes.
22
+
23
+ The importance of a node in a graph can be characterized by centrality measures or by the clustering coefficient [10], which corresponds to a high density of triangles. Some centrality measures, such as degree centrality, can be computed using local information about the node: the degree centrality quantifies the number of neighbors of a node. Betweenness centrality and closeness centrality [8, 9] use global information about the graph. The betweenness centrality is based on the frequency at which a node lies between pairs of other nodes on their shortest paths; in other words, it measures how often a node acts as a bridge between other nodes. The closeness centrality is the inverse of the sum of distances to all other nodes of the graph.
24
+
25
+ The clustering coefficient measures to what extent the neighbors of a node are connected to each other. If the neighbors of node $i$ are all connected to each other, then node $i$ has a high clustering coefficient.
26
+
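+ As a concrete illustration (our addition, not part of the original paper), all of these measures can be computed with the networkx library:
+
+ ```python
+ import networkx as nx
+
+ G = nx.karate_club_graph()  # small social graph, also used for training in Sect. 5.3
+
+ degree      = nx.degree_centrality(G)       # local: normalized neighbor count
+ betweenness = nx.betweenness_centrality(G)  # global: bridge frequency on shortest paths
+ closeness   = nx.closeness_centrality(G)    # global: inverse of summed distances
+ clustering  = nx.clustering(G)              # neighbor inter-connectivity per node
+
+ most_central = max(betweenness, key=betweenness.get)
+ print(most_central, betweenness[most_central])
+ ```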
27
+ ## 3 CENTRALITY VISUALIZATION
28
+
29
+ Many works in graph drawing made it possible to convey relational information such as centrality measures and clustering coefficient. So, Brandes et al. [1] and Brandes and Pich [2] proposed radial layouts that make it possible to highlight the betweenness and the closeness centralities of the nodes in a graph. In these methods, each node is constrained to lie on a circle according to its centrality value. Thus, nodes with a high centrality value are close to the center and those of low value are on the periphery.
30
+
31
+ Dwyer et al. [5] also proposed 3D parallel coordinates, orbit-based and hierarchy-based methods to simultaneously compare five centrality measures (degree, eccentricity, eigenvector, closeness, betweenness). The difference between these three methods is how centrality values are mapped to the node position. So, for 3D parallel coordinates nodes are placed on vertical lines; for orbit-based nodes are placed on concentric circles and for hierarchy-based nodes are placed on horizontal lines. On the other hand, Raj and Whitaker [18] proposed an anisotropic radial layout that makes it possible to highlight the betweenness centrality of the nodes in a graph. In this method, they proposed to use closed curves instead of concentric circles, arguing that the use of closed curves offers more flexibility to preserve the graph structure, compared to previous radial methods.
32
+
33
+ However, with this approach it would be difficult to visually identify nodes that have the same centrality value, compared to the radial layouts. The methods proposed by Dwyer et al. make it possible to compare many centrality measures, but it would be difficult to identify the central nodes, compared to that of Brandes and Pich. On the other hand, 2D methods suffer from a lack of display space when one needs to display a graph that is large in terms of number of nodes and edges.
34
+
35
+ So, Kobina et al. [13] proposed 3D extensions of the radial layouts of Brandes and Pich [2] in order to better handle the visualization of complex and large graphs (see Fig. 1). Their methods consist in projecting 2D graph representations onto 3D surfaces. These methods reduce node and edge overlap and improve the perception of node connectivity. However, some nodes and edges are less visible depending on the projection surface and on the edge drawing method. Indeed, the use of straight edges causes some edges to lie inside the half-sphere and others to cross it. Furthermore, most of the edges lie on the surface for the conical projection and outside the surface for the projection on the torus portion. Some nodes and edges are therefore less visible.
36
+
37
+ ## 4 IMPROVEMENT OF THE 3D RADIAL LAYOUTS
38
+
39
+ In order to reduce node and edge overlap in the methods proposed by Kobina et al. [13], we project the edges onto the visualization surfaces.
40
+
41
+ Let $e$ be an edge to be projected onto a visualization surface, connecting nodes $j$ and $k$, and let $P_i$ denote the points sampled along $e$:
42
+
43
+ $P_i = P_j + (P_k - P_j)\,t$, where $P_j$ and $P_k$ are respectively the positions of nodes $j$ and $k$, and $t = \frac{i}{n - 1}$, where $n$ is the number of control points of the edge $e$.
44
+
45
+ ### 4.1 Edge projection onto the cone
46
+
47
+ In this section, we describe the various steps that are relevant to the proposed method of projecting edges onto the cone:
48
+
49
+ - Compute the azimuth angle $\theta$ of the point to be projected, measured in the $xz$ plane: $\theta = \frac{180}{\pi}\operatorname{atan2}\left( z_{P_i}, x_{P_i} \right)$
50
+
51
+ - Rotate by $\theta$ about $y$ axis. Let $R$ be the rotation result:
52
+
53
+ $$
54
+ R = \left\lbrack \begin{matrix} \cos \theta & 0 & - \sin \theta \\ 0 & 1 & 0 \\ \sin \theta & 0 & \cos \theta \end{matrix}\right\rbrack \cdot \left\lbrack \begin{array}{l} x \\ y \\ z \end{array}\right\rbrack
55
+ $$
56
+
57
+ - Compute the projected point $\operatorname{Proj} = \frac{{x}_{{P}_{i}}{x}_{R} + {y}_{{P}_{i}}{y}_{R} + {z}_{{P}_{i}}{z}_{R}}{\parallel R\parallel } \cdot R$
58
+
59
+ - Compute the altitude ${y}_{\text{Proj }} = 1 - \sqrt{{x}_{\text{Proj }}^{2} + {z}_{\text{Proj }}^{2}}$
60
+
61
+ ### 4.2 Edge projection onto the half-sphere
62
+
63
+ Here we describe the projection method of the edges onto the half-sphere:
64
+
65
+ - Compute the projected point $\operatorname{Proj} = \frac{{P}_{i}}{\begin{Vmatrix}{P}_{i}\end{Vmatrix}}$
66
+
67
+ - Compute the altitude ${y}_{\text{Proj }} = \sqrt{1 - \left( {{x}_{\text{Proj }}^{2} + {z}_{\text{Proj }}^{2}}\right) }$
68
+
69
+ ### 4.3 Edge projection onto the torus portion
70
+
71
+ In this section, we describe the projection method of the edges onto the torus portion in four steps:
72
+
73
+ - Compute the azimuth angle $\theta$ of $P_i$, the point to be projected, measured in the $xz$ plane: $\theta = \frac{180}{\pi}\operatorname{atan2}\left( z_{P_i}, x_{P_i} \right)$
74
+
75
+ - Rotate by $\theta$ about $y$ axis. Let $R$ be the rotation result:
76
+
77
+ $$
78
+ R = \left\lbrack \begin{matrix} \cos \theta & 0 & - \sin \theta \\ 0 & 1 & 0 \\ \sin \theta & 0 & \cos \theta \end{matrix}\right\rbrack \cdot \left\lbrack \begin{array}{l} x \\ y \\ z \end{array}\right\rbrack
79
+ $$
80
+
81
+ - Compute the projected point $\operatorname{Proj} = \frac{{P}_{i}}{\begin{Vmatrix}{P}_{i}\end{Vmatrix}} + R$
82
+
83
+ - Compute the altitude of the point:
84
+
85
+ $$
86
+ y_{\text{Proj}} = 1 - \sqrt{1 - {(r - 1)}^2}, \text{ with } r = \sqrt{x_{\text{Proj}}^2 + z_{\text{Proj}}^2}.
87
+ $$
88
+
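+ To make these steps concrete, here is a minimal Python sketch (our illustration, not the authors' WebGL code). The half-sphere projection follows the formulas above directly; for the cone and the torus portion we use a simplified azimuth-preserving variant that keeps each sample's radial distance $r$ and only recomputes the altitude, instead of projecting onto the rotated direction $R$:
+
+ ```python
+ import numpy as np
+
+ def sample_edge(p_j, p_k, n=32):
+     """Control points P_i = P_j + (P_k - P_j) * t, with t = i/(n-1)."""
+     t = np.linspace(0.0, 1.0, n)[:, None]
+     return p_j + (p_k - p_j) * t
+
+ def project_half_sphere(points):
+     """Push each point radially onto the unit sphere, then recompute
+     the altitude as y = sqrt(1 - (x^2 + z^2))."""
+     proj = points / np.linalg.norm(points, axis=1, keepdims=True)
+     proj[:, 1] = np.sqrt(np.clip(1.0 - proj[:, 0]**2 - proj[:, 2]**2, 0.0, None))
+     return proj
+
+ def project_cone(points):
+     """Simplified variant: keep (x, z) and set y = 1 - r (unit cone)."""
+     proj = points.copy()
+     proj[:, 1] = 1.0 - np.hypot(proj[:, 0], proj[:, 2])
+     return proj
+
+ def project_torus_portion(points):
+     """Simplified variant: keep (x, z), set y = 1 - sqrt(1 - (r - 1)^2)."""
+     proj = points.copy()
+     r = np.hypot(proj[:, 0], proj[:, 2])
+     proj[:, 1] = 1.0 - np.sqrt(np.clip(1.0 - (r - 1.0)**2, 0.0, None))
+     return proj
+
+ # Straight edge between two nodes already lying on the half-sphere:
+ p_j = np.array([0.6, 0.8, 0.0])
+ p_k = np.array([0.0, 0.6, 0.8])
+ curve = project_half_sphere(sample_edge(p_j, p_k))
+ ```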
89
+ Fig. 2 illustrates the result of our projected edges, compared to that of straight edges used in the proposed methods of Kobina et al. [13] (Fig. 1).
90
+
91
+ Thus, by projecting the edges onto the visualization surfaces, we improved the readability of the graph. Furthermore, there are no edges that cross the visualization surface.
92
+
93
+ ## 5 EVALUATION
94
+
95
+ We conducted a human-centered evaluation through a series of tasks performed on generated graphs in order to compare the efficiency score, the time to complete a task and the number of clicks of the 3D layouts with projected edges (Fig. 2) to those of the 2D radial layouts. We use these three metrics to determine whether one kind of visualization is better or worse than the others.
96
+
97
+ ### 5.1 Tasks
98
+
99
+ Kobina et al. [13] suggested that the projections of the uniform 2D representation highlight either the center, the periphery, or moderately both the center and the periphery. So we chose the three following tasks, related to the central nodes, to the peripheral nodes and to the dense areas of a graph:
100
+
101
+ - Task 1 (related to the central nodes). The participants were asked to find the node that has the greatest degree among the most central node's neighbors.
102
+
103
+ - Task 2 (related to the peripheral nodes). The participants were asked to find a least central node that has at least 2 neighbors.
104
+
105
+ - Task 3 (related to the dense areas of a graph). The participants were asked to find a node of degree at least 3 and that has the highest clustering coefficient except 100%.
106
+
107
+ ### 5.2 Hypothesis
108
+
109
+ Based on the proposed methods of Kobina et al. [13], we make the following hypotheses:
110
+
111
+ H1. The 2D that emphasizes the periphery is the worst of the visualization surfaces when one is interested in the central nodes.
112
+
113
+ H2. The 2D that emphasizes the center is the worst of the visualization surfaces when tasks are related to the periphery.
114
+
115
+ H3. The combination of the peripheral emphasis and the different 3D projections highlights not only the peripheral nodes as the 2D peripheral emphasis, but also improves the visibility of the center.
116
+
117
+ H4. The combination of the central emphasis and the different 3D projections highlights not only the central nodes as the 2D central emphasis, but also improves the visibility of the periphery.
118
+
119
+ H5. One spends less time in exploring and analyzing graphs on the 3D surfaces than on the 2D.
120
+
121
+ H6. There are fewer clicks on the $3\mathrm{D}$ surfaces than on the $2\mathrm{D}$ representations.
122
+
123
+ H7. 3D surfaces are better suited for exploring the dense areas of a graph than $2\mathrm{D}$ representations.
124
+
125
+ ### 5.3 Experimental protocol and measures
126
+
127
+ We conducted an experimental study using a WebGL version of our graph visualization system because of the Covid-19 pandemic. Here is the link to our experiment for a given configuration: https://anonymnam.github.io/radialvig3dxp. Each participant could therefore perform the experiment remotely on their own laptop. Kobina et al. [13] suggested that combining the uniform 2D representation with the different projections makes it possible to additionally obtain an emphasis on the center or on the periphery. So, in this study, our goal is to show that these 3D methods can be better for exploring and analyzing graphs regardless of the focus of interest (the central or peripheral nodes, the dense areas), compared to the 2D representations. Indeed, since Kobina et al. [13] optimized the spatial distribution of nodes and we improved the edge drawing by projecting edges onto the surfaces, there could be less time spent in exploration, fewer clicks and more accurate responses to the different tasks, because the perception of node connectivity is improved. Moreover, we want to analyze the usability of 3D for exploring and analyzing graphs. Finally, we want to identify the best layout that could be used to visualize graphs.
128
+
129
+ ![01963e6f-3f43-747c-80a2-0693e22610ed_2_253_145_1280_438_0.jpg](images/01963e6f-3f43-747c-80a2-0693e22610ed_2_253_145_1280_438_0.jpg)
130
+
131
+ Figure 1: Betweenness centrality: uniform 3D radial visualization (419 nodes and 695 edges). The spherical projection spreads out more the peripheral nodes than the central nodes while the projection on the torus portion spreads out more the central nodes than the peripheral nodes. The conical projection evenly distributes nodes. Images are from [13].
132
+
133
+ ![01963e6f-3f43-747c-80a2-0693e22610ed_2_263_728_1270_416_0.jpg](images/01963e6f-3f43-747c-80a2-0693e22610ed_2_263_728_1270_416_0.jpg)
134
+
135
+ Figure 2: Betweenness centrality: uniform 3D radial visualization (419 nodes and 695 edges). Edges are projected onto the visualization surfaces, compared to straight edges observed in the proposed methods of Kobina et al. [13](Fig. 1).
136
+
137
+ For our experiment, we chose to use the betweenness centrality because it has interesting uses and because, regardless of the centrality measure, the purpose of the evaluation remains the same; it is therefore enough to assess the interest of the proposed methods. We first generated, using the Stochastic Block Model algorithm [11, 15, 25], 6 different graphs (250 nodes and 855 edges) with equivalent topological characteristics (Fig. 3, Fig. 4), since it is difficult to find in databases several graphs of the same size with equivalent topological characteristics (density, clustering coefficient).
138
+
139
+ The Stochastic Block Model is a probabilistic model based on community structure in graphs. This model partitions the nodes into blocks of arbitrary sizes, and places edges between pairs of nodes independently, with a probability that depends on the blocks [24]. Thus, the structure of each community in the graph varies enough to avoid a learning effect.
140
+
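+ A minimal sketch of this generation step with networkx follows (hypothetical block sizes and probabilities, not the paper's exact settings):
+
+ ```python
+ import networkx as nx
+
+ sizes = [50, 50, 50, 50, 50]      # five communities, 250 nodes total
+ p_in, p_out = 0.10, 0.01          # intra- vs. inter-block edge probability
+ probs = [[p_in if i == j else p_out for j in range(5)] for i in range(5)]
+
+ # Six graphs with equivalent topological characteristics.
+ graphs = [nx.stochastic_block_model(sizes, probs, seed=s) for s in range(6)]
+ for G in graphs:
+     print(G.number_of_edges(), nx.density(G), nx.average_clustering(G))
+ ```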
141
+ We then built 24 configurations with the various representation surfaces so that each surface and each graph appears at least once in first position, using something similar to the concept of the Latin square [7, 19]. A Latin square is an $n \times n$ array filled with $n$ different symbols in such a way that each symbol occurs exactly once in each row and exactly once in each column. For our configurations, we respected a distribution order between 2D and 3D surfaces so that the running order of a 2D representation corresponds to that of the equivalent 3D surface. For example, if a configuration starts with the 2D surfaces and the first surface is the one that emphasizes the center, then the first 3D surface will be the torus portion, since it is the surface that most highlights the center. In this way, we make sure that each configuration is tested as many times before as after each of the other configurations.
142
+
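+ The plain cyclic construction below illustrates the Latin square property (our simplified version; the actual configurations additionally balance the 2D/3D running order as described above):
+
+ ```python
+ def latin_square(n):
+     """n x n Latin square built by cyclic row shifts: every symbol
+     occurs exactly once in each row and once in each column."""
+     return [[(i + j) % n for j in range(n)] for i in range(n)]
+
+ for row in latin_square(6):  # 6 surfaces -> 6 presentation orders
+     print(row)
+ ```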
143
+ During the experiment, and for each task and surface, we measure an efficiency score, the time spent to complete the task and the number of clicks needed to find an optimal response. As the experiment is done remotely, each participant's performance is automatically saved when they validate their response. Below is how we compute the efficiency score of the participants.
144
+
145
+ Task 1. Find the node that has the greatest degree among the most central node's neighbors.
146
+
147
+ $$
148
+ \text{score}_i = \begin{cases} 100 \cdot \left( \deg_i / \deg_{ideal} \right), & \text{if } d(ctr, i) = 1 \\ 0, & \text{otherwise} \end{cases} \tag{1}
149
+ $$
150
+
151
+ where $\deg_i$ is the degree of the selected node $i$, $\deg_{ideal}$ is the greatest degree among the central node's neighbors, and $d(ctr, i)$ is the shortest-path distance between the central node and node $i$. Thus, node $i$ must be directly connected to the central node, i.e. $d(ctr, i)$ must be equal to 1.
152
+
153
+ ![01963e6f-3f43-747c-80a2-0693e22610ed_3_155_153_712_438_0.jpg](images/01963e6f-3f43-747c-80a2-0693e22610ed_3_155_153_712_438_0.jpg)
154
+
155
+ Figure 3: Comparison of generated graphs: all graphs have the same density, but a different clustering coefficient. The clustering coefficient is high if the number of the closed triplets in a graph is important.
156
+
157
+ ![01963e6f-3f43-747c-80a2-0693e22610ed_3_154_735_710_438_0.jpg](images/01963e6f-3f43-747c-80a2-0693e22610ed_3_154_735_710_438_0.jpg)
158
+
159
+ Figure 4: Comparison of generated graphs: all graphs have the same diameter, but different number of triangles. As for the clustering coefficient, the number of triangles is high if the number of the closed triplets in a graph is important.
160
+
161
+ Task 2. Find a least central node that has at least 2 neighbors.
162
+
163
+ $$
164
+ \text{score}_i = \begin{cases} 100 \cdot (1 - c_i)/(1 - c_{ideal}), & \text{if } c_{ideal} \neq 1 \\ 0, & \text{otherwise} \end{cases} \tag{2}
165
+ $$
166
+
167
+ where $c_i$ and $c_{ideal}$ are respectively the centrality value of the selected node $i$ and that of the ideal node. Furthermore, the score is 0 if the degree of the selected node is less than 2: since it is easy to check that the degree of the selected node is at least 2, the score is set to 0 whenever this condition is not met. Otherwise, the score varies from 0 at the center to 100 for a node of degree at least 2 that lies furthest on the periphery.
168
+
169
+ Task 3. Find a node of degree at least 3 and that has the highest clustering coefficient except 100%.
170
+
171
+ $$
172
+ \text{score}_i = \begin{cases} 100 \cdot (ccf_i - ccf_{worst})/d, & \text{if } d > 0 \\ 0, & \text{otherwise} \end{cases} \tag{3}
173
+ $$
174
+
175
+ where $d = ccf_{ideal} - ccf_{worst}$, and $ccf_i$, $ccf_{worst}$ and $ccf_{ideal}$ are respectively the clustering coefficient of the selected node $i$, the worst clustering coefficient and the highest clustering coefficient except 100%. So, the score is 0 if the degree of the selected node is less than 3 or if the clustering coefficient of the selected node is 100%. Otherwise, we compute the score using equation 3.
176
+
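+ A minimal sketch of the three scoring rules (equations 1-3) is given below; the graph-derived quantities (degrees, distances, centralities, clustering coefficients) are assumed to be precomputed:
+
+ ```python
+ def score_task1(deg_i, deg_ideal, dist_to_center):
+     """Eq. 1: credit scales with degree, among the center's direct neighbors only."""
+     return 100.0 * deg_i / deg_ideal if dist_to_center == 1 else 0.0
+
+ def score_task2(c_i, c_ideal, deg_i):
+     """Eq. 2: reward peripherality, only for nodes with at least 2 neighbors."""
+     if deg_i < 2 or c_ideal == 1:
+         return 0.0
+     return 100.0 * (1.0 - c_i) / (1.0 - c_ideal)
+
+ def score_task3(ccf_i, ccf_worst, ccf_ideal, deg_i):
+     """Eq. 3: reward high clustering below 100%, for nodes of degree >= 3."""
+     d = ccf_ideal - ccf_worst
+     if deg_i < 3 or ccf_i == 1.0 or d <= 0:
+         return 0.0
+     return 100.0 * (ccf_i - ccf_worst) / d
+ ```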
177
+ At the end of the experiment, each participant completes questionnaires related to the usability of the system and the user experience. Since our experiment is done remotely, we organized a video conference with each participant in order to supervise the experiment's process. The experiment consists of a training phase and an evaluation phase. Before starting the training phase, each participant is instructed about the experiment procedure, its environment, and the navigation and interaction techniques. For example, when the mouse hovers over a node, a tooltip shows its clustering coefficient value and its degree. The participant is also given the essential notions about graphs in order to ensure that they have the knowledge required for the experiment. In the training phase, the participant is asked to perform the above tasks on a small graph (the karate club graph [28]) and on each surface. Once familiar with the system, they move on to the evaluation phase, but with the generated graphs. When the participant is ready to start the training or the evaluation, they click a start button to see the first task to complete, and the next task is automatically displayed after validating the previous task's response.
178
+
179
+ ### 5.4 Participants
180
+
181
+ For this project, we needed a number of participants that was a multiple of 24, so that each of the 24 configurations mentioned above would be encountered the same number of times. So, there were 24 participants (9 female, 15 male), recruited among our colleagues in the laboratory and among students: 50% are between 18 and 25 years old, 37.5% between 25 and 35, and 12.5% over 35. Moreover, most participants had no experience in data analysis and data visualization, but some of them had gaming experience.
182
+
183
+ ## 6 RESULTS
184
+
185
+ ### 6.1 User performance
186
+
187
+ We present here the main results from the analysis of the data collected during our experiment, through nonparametric tests using the Kruskal-Wallis method [14] and post-hoc tests using Dunn's method [4, 20]. We used nonparametric tests since none of the samples comes from a normal distribution (normality tests were done using the Shapiro-Wilk method [23]). As a reminder, the variables analyzed are the efficiency score, the time and the number of clicks for each task and each surface.
188
+
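+ A minimal sketch of this analysis pipeline with SciPy and the third-party scikit-posthocs package follows (our choice of tooling and of p-value adjustment, not necessarily the authors'):
+
+ ```python
+ from scipy import stats
+ import scikit_posthocs as sp  # pip install scikit-posthocs
+
+ # Hypothetical efficiency scores grouped by visualization surface.
+ scores = {
+     "2D central":    [80, 95, 100, 60, 90],
+     "2D peripheral": [10,  0,  30, 25, 15],
+     "Cone":          [90, 100, 85, 95, 88],
+ }
+
+ for name, sample in scores.items():
+     _, p = stats.shapiro(sample)            # normality check per sample
+     print(f"{name}: Shapiro-Wilk p = {p:.3f}")
+
+ h, p = stats.kruskal(*scores.values())      # nonparametric omnibus test
+ print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")
+ if p < 0.05:                                # post-hoc pairwise comparisons
+     print(sp.posthoc_dunn(list(scores.values()), p_adjust="bonferroni"))
+ ```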
189
+ #### 6.1.1 Task 1: Find the Node that has the Greatest Degree among the most Central Node's Neighbors
190
+
191
+ Efficiency score. After an exploratory data analysis using box plots (Fig. 5), the nonparametric test showed that there is a statistically significant difference between the visualization surfaces that cannot be due to chance ($F$-statistic $= 31.46$, $p = 0.000 < 0.05$). So, we rejected the null hypothesis that the efficiency score is the same for all the visualization surfaces when one is interested in the central nodes. The result of the multiple pairwise comparison (Table 1) showed that the 2D layout that emphasizes the periphery differs in median from all other surfaces.
192
+
193
+ From the statistical test results, we validate hypothesis H1 that the 2D representation that emphasizes the periphery is worse for performing tasks related to the central nodes, compared to all other surfaces. Moreover, we validate hypothesis H3 that the 3D projections of the peripheral emphasis not only give the same benefit on the periphery, but also improve the visibility of the center.
194
+
195
+ Time. Considering the results of Fig. 6, we could say that the participants spent more time on the 2D layout that emphasizes the periphery, compared to all other visualization surfaces. However, we failed to reject the hypothesis of equality of medians ($F$-statistic $= 5.990$, $p = 0.307 > 0.05$). From our exploratory analysis results,
196
+
197
+ Table 1: Task 1: Efficiency score: P-values of the multiple pairwise comparison using Dunn's method (significant p-values starred: *$p < 0.05$, **$p < 0.01$, ***$p \leq 0.001$).
198
+
199
+ <table><tr><td/><td>2D central</td><td>2D peripheral</td><td>2D uniform</td><td>Cone</td><td>Half sphere</td><td>Torus</td></tr><tr><td>2D central</td><td>1</td><td>0.00002***</td><td>1</td><td>1</td><td>1</td><td>1</td></tr><tr><td>2D peripheral</td><td>0.00002***</td><td>1</td><td>0.0008***</td><td>0.00013***</td><td>0.00067***</td><td>0.00072***</td></tr><tr><td>2D uniform</td><td>1</td><td>0.0008***</td><td>1</td><td>1</td><td>1</td><td>1</td></tr><tr><td>Cone</td><td>1</td><td>0.00013***</td><td>1</td><td>1</td><td>1</td><td>1</td></tr><tr><td>Half sphere</td><td>1</td><td>0.00067***</td><td>1</td><td>1</td><td>1</td><td>1</td></tr><tr><td>Torus</td><td>1</td><td>0.0007***</td><td>1</td><td>1</td><td>1</td><td>1</td></tr></table>
200
+
201
+ ![01963e6f-3f43-747c-80a2-0693e22610ed_4_156_519_710_387_0.jpg](images/01963e6f-3f43-747c-80a2-0693e22610ed_4_156_519_710_387_0.jpg)
202
+
203
+ Figure 5: Task 1: Efficiency score: Descriptive representation (mean in red dashes, median in purple). The low mean of the 2D that emphasizes the periphery shows that the participants had a low efficiency score on this surface, compared to all other surfaces.
204
+
205
+ we cannot validate hypothesis H1, so we cannot prove that the 2D that emphasizes the periphery is worse for performing a task related to the central node, compared to all other visualization surfaces. Moreover, we reject hypothesis H5 that the participants spend less time on the 3D surfaces, compared to the 2D surfaces.
206
+
207
+ ![01963e6f-3f43-747c-80a2-0693e22610ed_4_158_1299_706_384_0.jpg](images/01963e6f-3f43-747c-80a2-0693e22610ed_4_158_1299_706_384_0.jpg)
208
+
209
+ Figure 6: Task 1: Time: Descriptive representation (mean in red dashes, median in purple). Here, we could say that the participants spent more time on the 2D that emphasizes the periphery, compared to all other visualization surfaces.
210
+
211
+ Number of clicks. From the results of Fig. 7, we validate hypothesis H6 that the participants clicked less on the 3D surfaces, compared to the 2D representations. Furthermore, the nonparametric test result showed that there is a statistically significant difference between the visualization surfaces, since the $F$-statistic is 12.554 and the corresponding p-value is $0.028 < 0.05$. So we conclude that the type of surface leads to statistically significant differences in the number of clicks. A multiple pairwise comparison (Table 2) confirmed our exploratory analysis that the 2D that emphasizes the periphery is different from the other surfaces.
214
+
215
+ ![01963e6f-3f43-747c-80a2-0693e22610ed_4_932_585_707_385_0.jpg](images/01963e6f-3f43-747c-80a2-0693e22610ed_4_932_585_707_385_0.jpg)
216
+
217
+ Figure 7: Task 1: Number of clicks: Descriptive representation (mean in red dashes, median in purple). We could suppose that the participants clicked less on the 3D surfaces, compared to the 2D.
218
+
219
+ Unlike the score analysis, there is a statistically significant difference between the $2\mathrm{D}$ that emphasizes the periphery and two $3\mathrm{D}$ surfaces (the half-sphere and the torus portion).
220
+
221
+ Ultimately, the 3D surfaces are well suited for carrying out tasks related to the central nodes, because Fig. 7 and Table 2 show that our hypothesis H6 is validated for the number of clicks. Moreover, we validated hypotheses H1 and H3 for the efficiency score. However, we cannot prove that our hypotheses H1 and H5 hold with respect to the time to complete the task.
222
+
223
+ #### 6.1.2 Task 2: Find a least Central Node that has at least 2 Neighbors
224
+
225
+ Efficiency score. From an exploratory analysis (Fig. 8), we validate hypothesis H2 that the 2D representation that emphasizes the center is worse when a task is related to the peripheral nodes, compared to all other visualization surfaces, since the participants did not obtain good scores on the 2D that emphasizes the center. Moreover, there is a statistically significant difference between the 2D that emphasizes the center and all the other surfaces (see Table 3), since the $F$-statistic is 40.31 and the corresponding p-value is $0.000 < 0.05$. We also validate hypothesis H4 that the 3D projections of the central emphasis make it possible not only to get the same visual effect on the center, but also to improve the visibility of the periphery.
226
+
227
+ Time. As far as the time analysis is concerned, we could say that the participants spent less time on the 3D surfaces and on the uniform 2D, compared to the 2D surfaces that emphasize the center and the periphery (Fig. 9). However, we failed to reject the hypothesis of equality of medians ($F$-statistic $= 1.65$, $p = 0.90 > 0.05$). So, we reject hypothesis H5 that the participants spent less time on the 3D surfaces.
228
+
229
+ Number of clicks. Fig. 10 shows high values of medians and means for the 2D that emphasizes the periphery and for the cone, compared to all other surfaces. This could suggest that the participants clicked more on the 2D that emphasizes the periphery and on the cone. On the other hand, the nonparametric test failed to reject the hypothesis of median equality ($F$-statistic $= 2.93$, $p = 0.71 > 0.05$). So, as
230
+
231
+ Table 2: Task 1: Number of clicks: P-values of the multiple pairwise comparison using Dunn's method (significant p-values starred: *$p < 0.05$, **$p < 0.01$, ***$p \leq 0.001$).
232
+
233
+ <table><tr><td/><td>2D central</td><td>2D peripheral</td><td>2D uniform</td><td>Cone</td><td>Half sphere</td><td>Torus</td></tr><tr><td>2D central</td><td>1</td><td>0.679</td><td>1</td><td>1</td><td>1</td><td>1</td></tr><tr><td>2D peripheral</td><td>0.679</td><td>1</td><td>0.517</td><td>0.13</td><td>0.0435*</td><td>0.0377*</td></tr><tr><td>2D uniform</td><td>1</td><td>0.517</td><td>1</td><td>1</td><td>1</td><td>1</td></tr><tr><td>Cone</td><td>1</td><td>0.13</td><td>1</td><td>1</td><td>1</td><td>1</td></tr><tr><td>Half sphere</td><td>1</td><td>0.0435*</td><td>1</td><td>1</td><td>1</td><td>1</td></tr><tr><td>Torus</td><td>1</td><td>0.0377*</td><td>1</td><td>1</td><td>1</td><td>1</td></tr></table>
234
+
235
+ Table 3: Task 2: Efficiency score: P-values of the multiple pairwise comparison using Dunn's method (significant p-values starred: *$p < 0.05$, **$p < 0.01$, ***$p \leq 0.001$).
236
+
237
+ <table><tr><td/><td>2D central</td><td>2D peripheral</td><td>2D uniform</td><td>Cone</td><td>Half sphere</td><td>Torus</td></tr><tr><td>2D central</td><td>1</td><td>0.00000***</td><td>0.002**</td><td>0.00003***</td><td>0.00000***</td><td>0.001***</td></tr><tr><td>2D peripheral</td><td>0.00000***</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td></tr><tr><td>2D uniform</td><td>0.002**</td><td>1</td><td>1</td><td>1</td><td>0.752</td><td>1</td></tr><tr><td>Cone</td><td>0.00003***</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td></tr><tr><td>Half sphere</td><td>0.00000***</td><td>1</td><td>0.752</td><td>1</td><td>1</td><td>0.986</td></tr><tr><td>Torus</td><td>0.001***</td><td>1</td><td>1</td><td>1</td><td>0.986</td><td>1</td></tr></table>
238
+
239
+ ![01963e6f-3f43-747c-80a2-0693e22610ed_5_157_861_709_384_0.jpg](images/01963e6f-3f43-747c-80a2-0693e22610ed_5_157_861_709_384_0.jpg)
240
+
241
+ Figure 8: Task 2: Efficiency score: Descriptive representation (mean in red dashes, median in purple). We could say that the participants did not have good scores on the 2D that emphasizes the center, compared to all other surfaces.
242
+
243
+ ![01963e6f-3f43-747c-80a2-0693e22610ed_5_158_1435_706_386_0.jpg](images/01963e6f-3f43-747c-80a2-0693e22610ed_5_158_1435_706_386_0.jpg)
244
+
245
+ Figure 9: Task 2: Time: Descriptive representation (mean in red dashes, median in purple). We could say that the participants spent less time on the 3D surfaces and the uniform 2D.
246
+
247
+ for the time analysis, the observed difference in medians could suggest that the 2D that emphasizes the periphery and the cone are worse when one is interested in the peripheral nodes. So, we reject hypothesis H6 that there are fewer clicks on the 3D surfaces.
248
+
249
+ ![01963e6f-3f43-747c-80a2-0693e22610ed_5_934_1024_704_383_0.jpg](images/01963e6f-3f43-747c-80a2-0693e22610ed_5_934_1024_704_383_0.jpg)
250
+
251
+ Figure 10: Task 2: Number of clicks: Descriptive representation (mean in red dashes, median in purple). It could suggest that the participants clicked more on the 2D that emphasizes the periphery and on the cone.
252
+
253
+ Based on the various analyses of task 2, the analysis of the efficiency score makes it possible to validate hypotheses H2, that the 2D that emphasizes the center is the worst of the visualization surfaces when tasks are related to the peripheral nodes, and H4, that our 3D projections make it possible not only to get the same benefit on the center, but also to improve the visibility of the periphery. Furthermore, Table 3 shows that the half-sphere and the cone are well suited when one is interested in the peripheral nodes. However, the analyses of time and number of clicks show that the observed differences in medians could suggest that the 2D that emphasizes the periphery is worse than the other surfaces and that the 3D surfaces are better, but we cannot prove that hypotheses H5 and H6 could be validated.
254
+
255
+ #### 6.1.3 Task 3: Find a Node of Degree at least 3 and that has the Highest Clustering Coefficient except 100%
256
+
257
+ Efficiency score. From the exploratory analysis results (Fig. 11), we could suppose that the participants got good scores on the 2D that emphasizes the center. On the other hand, we failed to reject the null hypothesis that the efficiency score is the same for all the visualization surfaces, since the test statistic is 6.0 and the corresponding p-value is $0.31 > 0.05$. So, the difference in medians could lead us to say that the 2D that emphasizes the center is better for exploring the dense areas of the graph, compared to all other surfaces, and that our hypothesis H7 (3D surfaces are better suited for exploring the dense areas of a graph) should be rejected, but the statistical analysis failed to demonstrate it.
258
+
259
+ ![01963e6f-3f43-747c-80a2-0693e22610ed_6_158_458_706_386_0.jpg](images/01963e6f-3f43-747c-80a2-0693e22610ed_6_158_458_706_386_0.jpg)
260
+
261
+ Figure 11: Task 3: Efficiency score: Descriptive representation (mean in red dashes, median in purple). We could suppose that the participants got good efficiency scores on the 2D that emphasizes the center.
262
+
263
+ Time. As for the score analysis, Fig. 12 shows the result of an exploratory data analysis that could lead one to think that the participants spent less time on the uniform 2D, compared to all other surfaces. However, the median values are not significantly different, since the nonparametric test did not reject the hypothesis of median equality ($F$-statistic $= 1.04$, $p = 0.96 > 0.05$). So, we reject hypothesis H5 that the participants spend less time on the 3D surfaces, and hypothesis H7 that the 3D surfaces are better than the 2D surfaces for exploring the dense areas of a graph.
264
+
265
+ ![01963e6f-3f43-747c-80a2-0693e22610ed_6_158_1326_703_380_0.jpg](images/01963e6f-3f43-747c-80a2-0693e22610ed_6_158_1326_703_380_0.jpg)
266
+
267
+ Figure 12: Task 3: Time: Descriptive representation (mean in red dashes, median in purple). We could suppose that participant spent less time on the uniform 2D, compared to other surfaces.
268
+
269
+ Number of clicks. Fig. 13 shows that the median value of the torus portion is smaller than the median values of the other visualization surfaces. So we could say that the participants clicked less on the torus portion, compared to all other surfaces, which would suggest validating hypothesis H6 that there are fewer clicks on the 3D surfaces. However, we failed to reject the null hypothesis that the number of clicks is the same for all the visualization surfaces when tasks are related to the dense areas ($F$-statistic $= 5.0$, $p = 0.42 > 0.05$). So, we reject hypothesis H6 that there are fewer clicks on the 3D surfaces, and hypothesis H7 that the 3D surfaces are better suited than the 2D for exploring the dense areas of a graph.
270
+
271
+ ![01963e6f-3f43-747c-80a2-0693e22610ed_6_934_259_705_385_0.jpg](images/01963e6f-3f43-747c-80a2-0693e22610ed_6_934_259_705_385_0.jpg)
272
+
273
+ Figure 13: Task 3: Number of clicks: Descriptive representation (mean in red dashes, median in purple). It could suggest that the participants clicked less on the torus portion, compared to all other surfaces.
274
+
275
+ Unlike the various analyses carried out for tasks 1 and 2, those of task 3 showed in the exploratory analysis that some 3D visualization surfaces are better than the 2D surfaces, but the statistical tests showed that the differences in medians in efficiency score, time and number of clicks are not statistically significant when one is interested in the dense areas of the graph. So, we reject hypotheses H5, that the participants spend less time exploring and analyzing graphs on the 3D surfaces, H6, that there are fewer clicks on the 3D surfaces, and H7, that the 3D surfaces are better than the 2D for exploring the dense areas of a graph.
276
+
277
+ ### 6.2 User experience
278
+
279
+ As mentioned above (in Sect. 5.3), at the end of the experiment the participants were asked to complete a questionnaire related to the system usability and to their experience. As far as their experience is concerned, they were asked whether they understood the requested tasks, whether they had difficulty interacting with the system, and whether they experienced visual fatigue. The results were that 23 participants out of 24 understood the requested tasks, 7 out of 24 had difficulty interacting with the system and 7 out of 24 declared having visual fatigue.
280
+
281
+ The participants were also asked to specify the surfaces that enabled them to better perform the requested tasks, on the one hand, and to identify the surfaces with which they had difficulty completing the requested tasks, on the other hand. Based on their feedback, the 3D surfaces significantly contributed to the successful completion of the various tasks, compared to the 2D representations (uniform 2D, the 2D that emphasizes the center or the periphery). Fig. 14 and Fig. 15 illustrate the distribution of user preferences for successful and unsuccessful completion, respectively. Moreover, Fig. 14 shows that the 2D that emphasizes the center and the 2D that emphasizes the periphery alone total 80% of the votes, while the cone gets 0%.
282
+
283
+ ## 7 Discussion
284
+
285
+ Some nodes would be less visible with the use of straight edges in the methods proposed by Kobina et al. [13]. Indeed, when combining the peripheral emphasis with the projection of the nodes and edges onto the half-sphere or the torus portion, some intermediate nodes would be less visible because of the surface, unlike with the conical projection. Furthermore, with uniform projections, some nodes and edges would be less visible in the dense areas, depending on the projection surface. So, by projecting the edges onto the visualization surfaces, we reduced the overlap of the nodes and the edges, and therefore improved the overall readability of the graph.
286
+
287
+ ![01963e6f-3f43-747c-80a2-0693e22610ed_7_156_155_701_411_0.jpg](images/01963e6f-3f43-747c-80a2-0693e22610ed_7_156_155_701_411_0.jpg)
288
+
289
+ Figure 14: Surfaces that the participants prefer when performing tasks.
290
+
291
+ ![01963e6f-3f43-747c-80a2-0693e22610ed_7_157_703_701_411_0.jpg](images/01963e6f-3f43-747c-80a2-0693e22610ed_7_157_703_701_411_0.jpg)
292
+
293
+ Figure 15: Surfaces that the participants do not like when performing tasks.
294
+
295
+ As far as our evaluation is concerned, the results did not allow us to identify which representation is best suited for visualizing large graphs and improving graph analysis. However, we partially validated hypotheses H1, H2, H3, H4 and H6, since some statistical test results showed that there are differences in efficiency score and in number of clicks.
296
+
297
+ Indeed, these results made it possible to validate hypotheses H1, that the 2D that emphasizes the periphery is the worst of the surfaces for visualizing the center, and H2, that the 2D that emphasizes the center is the worst of the surfaces for visualizing the periphery, with respect to the efficiency score of tasks 1 and 2. Moreover, we validated hypotheses: 1) H3, that the combination of the peripheral emphasis with the different 3D projections makes it possible not only to get the same advantages on the periphery as the 2D peripheral emphasis, but also to improve the visibility of the center; 2) H4, that combining the central emphasis with the different 3D projections makes it possible not only to get the same benefits on the center as the 2D central emphasis, but also to improve the visibility of the periphery, again regarding the efficiency score of tasks 1 and 2. We also validated hypothesis H6, that there are fewer clicks on the 3D surfaces, regarding the number of clicks in task 1.
298
+
299
+ On the other hand, we rejected hypotheses H5 and H7, since we were not able to prove that: 1) participants spend less time on the 3D surfaces, and 2) the 3D surfaces are better than the 2D for exploring the dense areas of a graph. We could therefore say that the 2D versus 3D debate still persists [3]. Nonetheless, participants' feedback showed that the 3D surfaces could be well suited for completing the various requested tasks successfully, compared to the 2D surfaces.
300
+
301
+ ## 8 CONCLUSION
302
+
303
+ In this work, we improved the edge drawing of previously proposed 3D graph visualization methods. Our improvements consist in projecting the edges onto each visualization surface in order to reduce node and edge overlap.
304
+
305
+ An online human-centered experimental study was conducted in order to compare the efficiency score, the time to complete tasks and the number of clicks across the various visualization surfaces. We showed through our experiment that there is no statistically significant difference in terms of time or errors between these surfaces. However, the participants felt more comfortable on the 3D surfaces when carrying out the requested tasks, compared to the 2D layouts. Thus, adding a third dimension to the 2D radial views improves the user experience.
306
+
307
+ ## 9 FUTURE WORK
308
+
309
+ In the future, we will also study in detail the results obtained with large graphs in order to check whether the current trends are confirmed. Moreover, we have also projected the 2D views onto other types of 3D surfaces (a parabola, a Gaussian, a hyperboloid and a square root surface). We will therefore study the results of these contributions in more detail in order to identify the most appropriate approach, or combination of approaches, for visualizing large and complex graphs.
310
+
311
+ In order to declutter graphs in the methods proposed by Kobina et al. [13], we have already implemented the kernel density estimation edge bundling algorithm [12] using computer graphics acceleration techniques. Fig. 16 illustrates the result on a graph generated with the Stochastic Block Model algorithm presented in section 5.3. With the bundled graph, it is possible to see how groups of nodes are connected to each other, compared to the unbundled graph. However, we lose the detailed connectivity of a node (for instance, the edges between a node and its neighbors). It could therefore be useful to combine the bundled and the unbundled edges for further analysis if one needs to switch between detailed and bundled views.
312
+
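+ To illustrate the general idea behind kernel density estimation edge bundling [12], here is a minimal CPU-side sketch (our own illustration, not the GPU-accelerated implementation mentioned above; all function names and parameters are ours):
+
+ ```python
+ import numpy as np
+
+ def kde_edge_bundling(edges, n_points=20, n_iters=10, sigma=0.05, step=0.01):
+     """Naive KDE edge bundling sketch: resample each edge into control
+     points, then repeatedly move each interior control point along the
+     density gradient of all control points (a mean-shift step), while
+     keeping the edge endpoints fixed."""
+     # edges: array of shape (E, 2, 2) holding the 2D endpoints of E edges.
+     t = np.linspace(0.0, 1.0, n_points)[None, :, None]
+     paths = edges[:, None, 0, :] * (1 - t) + edges[:, None, 1, :] * t
+     for _ in range(n_iters):
+         pts = paths.reshape(-1, 2)
+         for e in range(paths.shape[0]):
+             for k in range(1, n_points - 1):          # endpoints stay fixed
+                 d = pts - paths[e, k]                 # vectors to all samples
+                 w = np.exp(-(d ** 2).sum(axis=1) / (2 * sigma ** 2))
+                 grad = (w[:, None] * d).sum(axis=0)   # kernel-weighted shift
+                 paths[e, k] += step * grad / (np.linalg.norm(grad) + 1e-9)
+         # Laplacian smoothing keeps each bundled polyline smooth.
+         paths[:, 1:-1] = 0.5 * paths[:, 1:-1] + 0.25 * (paths[:, :-2] + paths[:, 2:])
+     return paths
+ ```
+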
313
+ ## REFERENCES
314
+
315
+ [1] U. Brandes, P. Kenis, and D. Wagner. Communicating centrality in policy network drawings. IEEE Transactions on Visualization and Computer Graphics, 9:241-253, 2003.
316
+
317
+ [2] U. Brandes and C. Pich. More flexible radial layout. Journal of Graph Algorithms and Applications, 15:151-173, 2011.
318
+
319
+ [3] G. Cliquet, M. Perreira, F. Picarougne, Y. Prié, and T. Vigier. Towards HMD-based immersive analytics. In Immersive Analytics workshop of IEEE VIS 2017, 2017.
320
+
321
+ [4] O. J. Dunn. Multiple comparisons using rank sums. Technometrics, 6(3):241-252, 1964. doi: 10.1080/00401706.1964.10490181
322
+
323
+ [5] T. Dwyer, S. Hong, D. Koschützki, F. Schreiber, and K. Xu. Visual analysis of network centralities. In K. Misue, K. Sugiyama, and J. Tanaka, eds., Asia-Pacific Symposium on Information Visualisation, APVIS 2006, Tokyo, Japan, February 1-3, 2006, vol. 60 of CRPIT, pp. 189-197. Australian Computer Society, 2006.
324
+
325
+ [6] Z. A. El Mouden, R. M. Taj, A. Jakimi, and M. Hajar. Towards using graph analytics for tracking COVID-19. Procedia Computer Science, 177:204-211, 2020. The 11th International Conference on Emerging Ubiquitous Systems and Pervasive Networks (EUSPN 2020) / The 10th International Conference on Current and Future Trends of Information and Communication Technologies in Healthcare (ICTH 2020) / Affiliated Workshops. doi: 10.1016/j.procs.2020.10.029
326
+
327
+ [7] G. H. Freeman. Complete Latin squares and related experimental designs. Journal of the Royal Statistical Society. Series B (Methodological), 41(2):253-262, 1979.
328
+
329
+ [8] L. C. Freeman. A set of measures of centrality based on betweenness. Sociometry, 40(1):35-41, 1977.
330
+
331
+ [9] L. C. Freeman. Centrality in social networks conceptual clarification. Social Networks, 1(3):215-239, 1978. doi: 10.1016/0378-8733(78)90021-7
332
+
333
+ [10] D. L. Hansen, B. Shneiderman, M. A. Smith, and I. Himelboim. Chapter 3 - social network analysis: Measuring, mapping, and modeling collections of connections. In D. L. Hansen, B. Shneiderman, M. A. Smith, and I. Himelboim, eds., Analyzing Social Media Networks with NodeXL (Second Edition), pp. 31-51. Morgan Kaufmann, second edition ed., 2020. doi: 10.1016/B978-0-12-817756-3.00003-0
+
+ ![01963e6f-3f43-747c-80a2-0693e22610ed_8_257_162_1291_617_0.jpg](images/01963e6f-3f43-747c-80a2-0693e22610ed_8_257_162_1291_617_0.jpg)
+
+ Figure 16: Top view from the cone of a generated graph (500 nodes and 3294 edges). Edge bundling makes it possible to declutter the graph.
342
+
343
+ [11] P. W. Holland, K. B. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109-137, 1983. doi: 10.1016/0378-8733(83)90021-7
344
+
345
+ [12] C. Hurter, O. Ersoy, and A. Telea. Graph bundling by kernel density estimation. Computer Graphics Forum, 31:865-874, 06 2012. doi: 10.1111/j.1467-8659.2012.03079.x
346
+
347
+ [13] P. Kobina, T. Duval, and L. Brisson. 3d radial layout for centrality visualization in graphs. In L. T. D. Paolis and P. Bourdot, eds., Augmented Reality, Virtual Reality, and Computer Graphics - 7th International Conference, AVR 2020, Lecce, Italy, September 7-10, 2020, Proceedings, Part I, vol. 12242 of Lecture Notes in Computer Science, pp. 452-460. Springer, 2020. doi: 10.1007/978-3-030-58465-8_33
348
+
349
+ [14] W. H. Kruskal and W. A. Wallis. Use of ranks in one-criterion variance analysis. Journal of the American Statistical Association, 47(260):583-621, 1952.
350
+
351
+ [15] C. Lee and D. J. Wilkinson. A review of stochastic block models and extensions for graph clustering. Applied Network Science, 4(1), Dec 2019. doi: 10.1007/s41109-019-0232-2
352
+
353
+ [16] F. Martino and A. Spoto. Social network analysis: A brief theoretical review and further perspectives in the study of information technology. PsychNology Journal, 4:53-86, 01 2006.
354
+
355
+ [17] R. Y. Nooraie, J. E. M. Sale, A. Marin, and L. E. Ross. Social network analysis: An example of fusion between quantitative and qualitative methods. Journal of Mixed Methods Research, 14(1):110-124, 2020. doi: 10.1177/1558689818804060
356
+
357
+ [18] M. Raj and R. T. Whitaker. Anisotropic radial layout for visualizing centrality and structure in graphs. In F. Frati and K. Ma, eds., Graph Drawing and Network Visualization - 25th International Symposium, GD 2017, Boston, MA, USA, September 25-27, 2017, Revised Selected Papers, vol. 10692 of Lecture Notes in Computer Science, pp. 351-364. Springer, 2017. doi: 10.1007/978-3-319-73915-1_28
358
+
359
+ [19] J. T. Richardson. The use of Latin-square designs in educational and psychological research. Educational Research Review, 24:84-97, 2018. doi: 10.1016/j.edurev.2018.03.003
360
+
361
+ [20] S. Lee and D. K. Lee. What is the proper way to apply the multiple comparison test? Korean J Anesthesiol, 71(5):353-360, 2018. doi: 10.4097/kja.d.18.00242
362
+
363
+ [21] M. Saqr, U. Fors, and J. Nouri. Using social network analysis to understand online problem-based learning and predict performance. PLOS ONE, 13(9):1-20, 09 2018. doi: 10.1371/journal.pone.0203590
364
+
365
+ [22] A. Saxena and S. Iyengar. Centrality measures in complex networks: A survey. ArXiv, abs/2011.07190, 2020.
366
+
367
+ [23] S. S. Shapiro and M. B. Wilk. An analysis of variance test for normality (complete samples). Biometrika, 52(3/4):591-611, 1965.
368
+
369
+ [24] T. A. Snijders and K. Nowicki. Estimation and prediction for stochastic blockmodels for graphs with latent block structure. Journal of Classification, 14(1):75-100, Jan 1997. doi: 10.1007/s003579900004
370
+
371
+ [25] N. Stanley, T. Bonacci, R. Kwitt, M. Niethammer, and P. Mucha. Stochastic block models with multiple continuous attributes. Applied Network Science, 4:1-22, 08 2019. doi: 10.1007/s41109-019-0170-z
372
+
373
+ [26] A. R. Teyseyre and M. R. Campo. An overview of 3d software visualization. IEEE Transactions on Visualization and Computer Graphics, 15(1):87-105, 2009. doi: 10.1109/TVCG.2008.86
374
+
375
+ [27] J. Wang, X. Hou, K. Li, and Y. Ding. A novel weight neighborhood centrality algorithm for identifying influential spreaders in complex networks. Physica A: Statistical Mechanics and its Applications, 475:88-105, 2017. doi: 10.1016/j.physa.2017.02.007
376
+
377
+ [28] W. W. Zachary. An information flow model for conflict and fission in small groups. Journal of Anthropological Research, 33:452-473, 1977.
378
+
379
+ [29] H. Zhang, Y. Zhu, L. Qin, H. Cheng, and J. X. Yu. Efficient local clustering coefficient estimation in massive graphs. In S. Candan, L. Chen, T. B. Pedersen, L. Chang, and W. Hua, eds., Database Systems for Advanced Applications, pp. 371-386. Springer International Publishing, Cham, 2017.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/Uh8fD3uPiv6/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,375 @@
1
+ § TASK-BASED EVALUATION OF 3D RADIAL LAYOUTS FOR CENTRALITY VISUALIZATION
2
+
3
+ § ABSTRACT
4
+
5
+ In this paper we propose improvements to the 3D radial layouts that make it possible to visualize centrality measures of nodes in a graph. Our improvements mainly concern edge drawing and the evaluation of the 3D radial layouts. First, we projected the edges onto the visualization surfaces in order to reduce the overlap with nodes. Secondly, we conducted a human-centered evaluation in order to compare the efficiency score, the time to complete tasks and the number of clicks of the 3D radial layouts to those of the 2D radial layouts. The results showed that even if the overall improvements in terms of time or errors are not statistically significant between the various visualization surfaces, the participants have a better feeling with the 3D layouts, and therefore the user experience can be improved in data visualization.
6
+
7
+ Index Terms: Human-centered computing-3D graph visualization-Centrality visualization-Layout evaluation
8
+
9
+ § 1 INTRODUCTION
10
+
11
+ Centrality measures are topological measures that describe the importance of the nodes in a graph. There has been a lot of work on this topic in network analysis in order to answer the question "Which are the most important nodes in a graph?" [16, 17]. Other works in graph drawing chose to visually reveal these properties in order to facilitate their exploratory analysis [2, 18]. For example, in graph analytics, some works are interested in understanding and describing the interaction structure by analyzing the topology of the graph [6, 21]. Some others are interested in identifying and characterizing the nodes that are particularly important [27] and how their neighbors are connected to each other [29].
12
+
13
+ However, visualizing these measures in 2D can be difficult when the graph is large in terms of the number of nodes and edges. Indeed, there would be a lot of node and edge overlap and many edge crossings, which are less of a problem in 3D than in 2D [26]. Kobina et al. [13] therefore proposed new 3D methods based on the 2D radial layouts that highlight the centrality of the nodes by optimizing the spatial distribution of the nodes. Nevertheless, in 3D some edges can hide others depending on the position of the observer or the 3D layout, as can be seen in the methods proposed by Kobina et al. [13] using straight edges.
14
+
15
+ So, we first propose improvements to the 3D radial layouts by projecting the edges onto the visualization surfaces in order to reduce the nodes overlap. The purpose of our improvements is to provide a better overall view of a complex and large graph than the 3D radial techniques and to reduce the time in exploring and analyzing such a graph. We then propose a task-based evaluation using a well-known centrality measure in order to compare the efficiency score, the time to complete tasks and the number of clicks of the 3D radial layouts to those of the $2\mathrm{D}$ radial layouts. The evaluation tasks are related to the central nodes, to the peripheral nodes and to the dense areas of a graph. The purpose of our evaluation is to show that the 3D radial methods could be better to explore and to analyze graphs whatever the interest, compared to the 2D radial layouts.
16
+
17
+ This paper is structured as follows: in section 2 we recall some notions about centrality measures in graphs. We review related work on centrality visualization in section 3. Then we present our improvements in section 4 and the human-centered evaluation of these improvements in section 5. In section 6 we present the evaluation results, while in section 7 we discuss the various results. We present our conclusion in section 8 and our future work in section 9.
18
+
19
+ § 2 CENTRALITY MEASURES IN GRAPHS
20
+
21
+ In graph analytics, centrality measures [22] characterize the topological position of the nodes in a graph. In other words, centrality measures make it possible to identify important nodes in the graph and further provide relevant analytical information about the graph and its nodes.
22
+
23
+ The importance of a node in a graph can be characterized by centrality measures or by the clustering coefficient [10], which reflects a high density of triangles. Some centrality measures, such as degree centrality, can be computed using local information of the node. The degree centrality quantifies the number of neighbors of a node. Betweenness centrality and closeness centrality [8, 9] use global information of the graph. The betweenness centrality is based on the frequency at which a node lies between pairs of other nodes on their shortest paths. In other words, betweenness centrality is a measure of how often a node is a bridge between other nodes. The closeness centrality is the inverse of the sum of distances to all other nodes of the graph.
24
+
25
+ The clustering coefficient measures to what extent the neighbors of a node are connected to each other. If the neighbors of the node $i$ are all connected to each other, then the node $i$ has a high clustering coefficient.
26
+
27
+ § 3 CENTRALITY VISUALIZATION
28
+
29
+ Many works in graph drawing made it possible to convey relational information such as centrality measures and clustering coefficient. So, Brandes et al. [1] and Brandes and Pich [2] proposed radial layouts that make it possible to highlight the betweenness and the closeness centralities of the nodes in a graph. In these methods, each node is constrained to lie on a circle according to its centrality value. Thus, nodes with a high centrality value are close to the center and those of low value are on the periphery.
30
+
31
+ Dwyer et al. [5] also proposed 3D parallel coordinates, orbit-based and hierarchy-based methods to simultaneously compare five centrality measures (degree, eccentricity, eigenvector, closeness, betweenness). The difference between these three methods is how centrality values are mapped to the node position. So, for 3D parallel coordinates nodes are placed on vertical lines; for orbit-based nodes are placed on concentric circles and for hierarchy-based nodes are placed on horizontal lines. On the other hand, Raj and Whitaker [18] proposed an anisotropic radial layout that makes it possible to highlight the betweenness centrality of the nodes in a graph. In this method, they proposed to use closed curves instead of concentric circles, arguing that the use of closed curves offers more flexibility to preserve the graph structure, compared to previous radial methods.
32
+
33
+ However, it would be difficult to visually identify some nodes that have the same centrality value, compared to the radial layouts. The proposed methods of Dwyer et al. make it possible to compare many centrality measures, but it would be difficult to identify the central nodes, compared to that of Brandes and Pich. On the other hand, 2D methods suffer from lack of display space when one needs to display a large graph in terms of number of nodes and edges.
34
+
35
+ So, Kobina et al. [13] proposed 3D extensions of the radial layouts of Brandes and Pich [2] in order to better handle the visualization of complex and large graphs (see Fig. 1). Their methods consist in projecting 2D graph representations on $3\mathrm{D}$ surfaces. These methods reduce nodes and edges overlap and improve the perception of the nodes connectivity. However, some nodes and edges are less visible depending on the projection surface and edge drawing method. Indeed, the use of straight edges caused some to be inside the half-sphere and others to cross the half-sphere. Furthermore, most of the edges are on the surface for the conical projection and outside the surface for the projection on the torus portion. Some nodes and edges are therefore less visible.
36
+
37
+ § 4 IMPROVEMENT OF THE 3D RADIAL LAYOUTS
38
+
39
+ In order to reduce nodes and edges overlap in the proposed methods of Kobina et al. [13], we projected the edges onto the visualization surfaces.
40
+
41
+ Let $e$ be an edge that connects nodes $j$ and $k$ and is to be projected onto a visualization surface, and let $P_i$ denote every control point belonging to $e$:
42
+
43
+ $P_i = P_j + (P_k - P_j)\,t$, where $P_j$ and $P_k$ are respectively the positions of nodes $j$ and $k$, and $t = \frac{i}{n - 1}$, where $n$ is the number of control points of the edge $e$.
44
+
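+ For instance, sampling the control points of an edge before projecting them could be written as follows (a minimal sketch; the function name is ours):
+
+ ```python
+ import numpy as np
+
+ def edge_control_points(p_j, p_k, n):
+     """Sample n control points P_i = P_j + (P_k - P_j) * t with
+     t = i / (n - 1), for i = 0 .. n-1; p_j and p_k are the 3D
+     positions of the edge's end nodes."""
+     t = np.linspace(0.0, 1.0, n)[:, None]
+     return p_j[None, :] * (1.0 - t) + p_k[None, :] * t
+ ```
+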
45
+ § 4.1 EDGE PROJECTION ONTO THE CONE
46
+
47
+ In this section, we describe the various steps that are relevant to the proposed method of projecting edges onto the cone (a code sketch follows the list):
48
+
49
+ * Compute the angle $\theta$ of the point to be projected around the $y$ axis (in the $xz$ plane): $\theta = \frac{180}{\pi} \operatorname{atan2}(z_{P_i}, x_{P_i})$
50
+
51
+ * Rotate by $\theta$ about $y$ axis. Let $R$ be the rotation result:
52
+
53
+ $$
54
+ R = \left\lbrack \begin{matrix} \cos \theta & 0 & - \sin \theta \\ 0 & 1 & 0 \\ \sin \theta & 0 & \cos \theta \end{matrix}\right\rbrack \cdot \left\lbrack \begin{array}{l} x \\ y \\ z \end{array}\right\rbrack
55
+ $$
56
+
57
+ * Compute the projected point $\operatorname{Proj} = \frac{{x}_{{P}_{i}}{x}_{R} + {y}_{{P}_{i}}{y}_{R} + {z}_{{P}_{i}}{z}_{R}}{\parallel R\parallel } \cdot R$
58
+
59
+ * Compute the altitude ${y}_{\text{ Proj }} = 1 - \sqrt{{x}_{\text{ Proj }}^{2} + {z}_{\text{ Proj }}^{2}}$
60
+
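+ A literal transcription of these steps could look as follows (our own sketch, working in radians, assuming a unit cone $y = 1 - r$ and that the vector rotated about the $y$ axis is the unit $x$ axis, which the description leaves implicit):
+
+ ```python
+ import numpy as np
+
+ def rotate_y(v, theta):
+     """Rotate a 3D vector v by theta (radians) about the y axis."""
+     c, s = np.cos(theta), np.sin(theta)
+     return np.array([[c, 0.0, -s],
+                      [0.0, 1.0, 0.0],
+                      [s, 0.0, c]]) @ v
+
+ def project_on_cone(p):
+     """Project one control point p onto the unit cone y = 1 - sqrt(x^2 + z^2)."""
+     theta = np.arctan2(p[2], p[0])                  # angle in the xz plane
+     r = rotate_y(np.array([1.0, 0.0, 0.0]), theta)  # radial direction at theta
+     proj = (np.dot(p, r) / np.linalg.norm(r)) * r   # project p onto that direction
+     proj[1] = 1.0 - np.hypot(proj[0], proj[2])      # altitude from the cone equation
+     return proj
+ ```
+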
61
+ § 4.2 EDGE PROJECTION ONTO THE HALF-SPHERE
62
+
63
+ Here we describe the projection method of the edges onto the half-sphere (a code sketch follows the list):
64
+
65
+ * Compute the projected point $\operatorname{Proj} = \frac{{P}_{i}}{\begin{Vmatrix}{P}_{i}\end{Vmatrix}}$
66
+
67
+ * Compute the altitude ${y}_{\text{ Proj }} = \sqrt{1 - \left( {{x}_{\text{ Proj }}^{2} + {z}_{\text{ Proj }}^{2}}\right) }$
68
+
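+ In code, these two steps amount to the following sketch (assuming a unit half-sphere and $\|P_i\| > 0$):
+
+ ```python
+ import numpy as np
+
+ def project_on_half_sphere(p):
+     """Project one control point p onto the unit half-sphere
+     x^2 + y^2 + z^2 = 1 with y >= 0."""
+     proj = p / np.linalg.norm(p)   # radial projection onto the sphere
+     proj[1] = np.sqrt(max(0.0, 1.0 - (proj[0] ** 2 + proj[2] ** 2)))  # upper half
+     return proj
+ ```
+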
69
+ § 4.3 EDGE PROJECTION ONTO THE TORUS PORTION
70
+
71
+ In this section, we describe the projection method of the edges onto the torus portion in four steps (a code sketch follows the list):
72
+
73
+ * Compute the angle $\theta$ of $P_i$, the point to be projected, around the $y$ axis (in the $xz$ plane): $\theta = \frac{180}{\pi} \operatorname{atan2}(z_{P_i}, x_{P_i})$
74
+
75
+ * Rotate by $\theta$ about $y$ axis. Let $R$ be the rotation result:
76
+
77
+ $$
78
+ R = \left\lbrack \begin{matrix} \cos \theta & 0 & - \sin \theta \\ 0 & 1 & 0 \\ \sin \theta & 0 & \cos \theta \end{matrix}\right\rbrack \cdot \left\lbrack \begin{array}{l} x \\ y \\ z \end{array}\right\rbrack
79
+ $$
80
+
81
+ * Compute the projected point $\operatorname{Proj} = \frac{{P}_{i}}{\begin{Vmatrix}{P}_{i}\end{Vmatrix}} + R$
82
+
83
+ * Compute the altitude of the point:
84
+
85
+ $$
86
+ y_{\text{Proj}} = 1 - \sqrt{1 - (r - 1)^2}, \quad \text{with } r = \sqrt{x_{\text{Proj}}^2 + z_{\text{Proj}}^2}.
87
+ $$
88
+
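+ A literal transcription of these four steps (our own sketch; as for the cone, we assume the rotated vector is the unit $x$ axis, for which $R_y(\theta) \cdot \hat{x} = (\cos\theta, 0, \sin\theta)$):
+
+ ```python
+ import numpy as np
+
+ def project_on_torus_portion(p):
+     """Project one control point p onto the torus portion
+     (y - 1)^2 + (r - 1)^2 = 1, with r = sqrt(x^2 + z^2)."""
+     theta = np.arctan2(p[2], p[0])
+     r_dir = np.array([np.cos(theta), 0.0, np.sin(theta)])  # Ry(theta) @ x axis
+     proj = p / np.linalg.norm(p) + r_dir                   # push outward along r_dir
+     r = np.hypot(proj[0], proj[2])
+     proj[1] = 1.0 - np.sqrt(max(0.0, 1.0 - (r - 1.0) ** 2))  # altitude on the torus
+     return proj
+ ```
+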
89
+ Fig. 2 illustrates the result of our projected edges, compared to that of straight edges used in the proposed methods of Kobina et al. [13] (Fig. 1).
90
+
91
+ Thus, by projecting the edges onto the visualization surfaces, we improved the readability of the graph. Furthermore, there are no edges that cross the visualization surface.
92
+
93
+ § 5 EVALUATION
94
+
95
+ We conducted a human-centered evaluation through a series of tasks performed on generated graphs in order to compare the efficiency score, the time to complete a task and the number of clicks of the 3D layouts with projected edges (Fig. 2) to those of the 2D radial layouts. We use these three metrics to determine whether a given visualization is better or worse than the others.
96
+
97
+ § 5.1 TASKS
98
+
99
+ Kobina et al. [13] suggested that the projections of the uniform 2D representation highlight either the center, the periphery, or moderately both the center and the periphery. So we chose the three following tasks, which are related to the central nodes, to the peripheral nodes and to the dense areas of a graph:
100
+
101
+ * Task 1 (related to the central nodes). The participants were asked to find the node that has the greatest degree among the most central node's neighbors.
102
+
103
+ * Task 2 (related to the peripheral nodes). The participants were asked to find a least central node that has at least 2 neighbors.
104
+
105
+ * Task 3 (related to the dense areas of a graph). The participants were asked to find a node of degree at least 3 and that has the highest clustering coefficient except 100%.
106
+
107
+ § 5.2 HYPOTHESIS
108
+
109
+ Based on the proposed methods of Kobina et al. [13], we make the following hypotheses:
110
+
111
+ H1. The 2D that emphasizes the periphery is the worst of the visualization surfaces when one is interested in the central nodes.
112
+
113
+ H2. The 2D that emphasizes the center is the worst of the visualization surfaces when tasks are related to the periphery.
114
+
115
+ H3. The combination of the peripheral emphasis and the different 3D projections not only highlights the peripheral nodes as the 2D peripheral emphasis does, but also improves the visibility of the center.
116
+
117
+ H4. The combination of the central emphasis and the different 3D projections not only highlights the central nodes as the 2D central emphasis does, but also improves the visibility of the periphery.
118
+
119
+ H5. One spends less time in exploring and analyzing graphs on the 3D surfaces than on the 2D.
120
+
121
+ H6. There are fewer clicks on the $3\mathrm{D}$ surfaces than on the $2\mathrm{D}$ representations.
122
+
123
+ H7. 3D surfaces are better suited for exploring the dense areas of a graph than $2\mathrm{D}$ representations.
124
+
125
+ § 5.3 EXPERIMENTAL PROTOCOL AND MEASURES
126
+
127
+ We conducted an experimental study using a WebGL version of our graph visualization system because of the Covid-19 pandemic. Here is the link to our experiment for a given configuration: https://anonymnam.github.io/radialvig3dxp. Each participant could therefore perform the experiment remotely on their own laptop. Kobina et al. [13] suggested that the combination of the uniform 2D representation and the different projections additionally makes it possible to obtain an emphasis on the center or on the periphery. So in this study, our goal is to show that these 3D methods could be better for exploring and analyzing graphs whatever the interest (the central or peripheral nodes, the dense areas), compared to the 2D representations. Indeed, since Kobina et al. [13] optimized the spatial distribution of nodes and we improved the edge drawing by projecting the edges onto the surfaces, exploration could take less time, require fewer clicks and yield more accurate responses to the different tasks, because the perception of the nodes' connectivity is improved. Moreover, we want to analyze the usability of 3D for exploring and analyzing graphs. Finally, we want to identify the best layout for visualizing graphs.
128
+
129
+ (figure panels, left to right: spherical projection, conical projection, torus portion)
130
+
131
+ Figure 1: Betweenness centrality: uniform 3D radial visualization (419 nodes and 695 edges). The spherical projection spreads out more the peripheral nodes than the central nodes while the projection on the torus portion spreads out more the central nodes than the peripheral nodes. The conical projection evenly distributes nodes. Images are from [13].
132
+
133
+ (figure panels, left to right: spherical projection, torus portion, conical projection)
134
+
135
+ Figure 2: Betweenness centrality: uniform 3D radial visualization (419 nodes and 695 edges). Edges are projected onto the visualization surfaces, compared to straight edges observed in the proposed methods of Kobina et al. [13](Fig. 1).
136
+
137
+ For our experiment, we chose to use the betweenness centrality, because it has interesting uses and, regardless of the centrality measure, the purpose of the evaluation remains the same; it is therefore enough to assess the interest of the proposed methods. We first generated, thanks to the Stochastic Block Model algorithm [11, 15, 25], 6 different graphs (250 nodes and 855 edges) that have equivalent topological characteristics (Fig. 3, Fig. 4), since it is difficult to find in databases several graphs of the same size with equivalent topological characteristics (density, clustering coefficient).
138
+
139
+ The Stochastic Block Model is a probabilistic model based on community structure in graphs. This model partitions the nodes into blocks of arbitrary sizes, and places edges between pairs of nodes independently, with a probability that depends on the blocks [24]. Thus, the structure of each community in the graph varies enough to avoid a learning effect.
140
+
141
+ We then built 24 configurations with the various representation surfaces so that each surface and each graph appears at least once in first position, using something similar to the concept of the Latin square [7, 19] (see the sketch below). A Latin square is an $n \times n$ array filled with $n$ different symbols in such a way that each symbol occurs exactly once in each row and exactly once in each column. For our configurations, we respected a distribution order between the 2D and 3D surfaces so that the running order of a 2D representation corresponds to that of the equivalent 3D surface. For example, if a configuration starts with the 2D surfaces and the first surface is the one that emphasizes the center, then the first 3D surface will be the torus portion, since it is the surface that emphasizes the center the most. We thus made sure that each configuration is tested as many times before as after each of the other configurations.
142
+
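+ For reference, the standard balanced Latin square construction can be sketched as follows (our own illustration of the general technique, not necessarily the exact procedure used to build the 24 configurations):
+
+ ```python
+ def balanced_latin_square(n):
+     """Rows of a balanced Latin square over conditions 0..n-1 (n even):
+     each condition appears once per row and once per column, and each
+     condition immediately precedes every other condition equally often."""
+     base, lo, hi = [], 0, n - 1
+     for k in range(n):               # canonical row: 0, n-1, 1, n-2, ...
+         base.append(lo if k % 2 == 0 else hi)
+         if k % 2 == 0:
+             lo += 1
+         else:
+             hi -= 1
+     return [[(v + r) % n for v in base] for r in range(n)]
+ ```
+
+ With $n = 6$ surfaces this yields 6 orderings, which can be repeated over 24 participants so that each surface appears equally often in each position.
+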
143
+ During the experiment, and for each task and surface, we measure an efficiency score, the time spent to complete the task and the number of clicks needed to find an optimal response. As the experiment is done remotely, each participant's performance is automatically saved when they validate their response. Below is how we compute the efficiency score of the participants.
144
+
145
+ Task 1. Find the node that has the greatest degree among the most central node's neighbors.
146
+
147
+ $$
148
+ \text{score}_i = \begin{cases} 100 \cdot \left( \deg_i / \deg_{ideal} \right), & \text{if } d(ctr, i) = 1 \\ 0, & \text{otherwise} \end{cases} \tag{1}
149
+ $$
150
+
151
+ where $\deg_i$ is the degree of the selected node $i$, $\deg_{ideal}$ is the greatest degree among the central node's neighbors, and $d(ctr, i)$ is the shortest distance between the central node and node $i$. Thus, node $i$ must be directly connected to the central node, i.e., $d(ctr, i)$ must be equal to 1.
152
+
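+ For illustration (with numbers of our own choosing): if the participant selects a neighbor of the most central node with degree 6 while the best neighbor has degree 8, the score is $100 \cdot 6/8 = 75$; selecting any node that is not adjacent to the most central node scores 0.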
153
154
+
155
+ Figure 3: Comparison of generated graphs: all graphs have the same density, but a different clustering coefficient. The clustering coefficient is high if the number of the closed triplets in a graph is important.
156
+
157
158
+
159
+ Figure 4: Comparison of generated graphs: all graphs have the same diameter, but different number of triangles. As for the clustering coefficient, the number of triangles is high if the number of the closed triplets in a graph is important.
160
+
161
+ Task 2. Find a least central node that has at least 2 neighbors.
162
+
163
+ $$
164
+ \text{score}_i = \begin{cases} 100 \cdot (1 - c_i) / (1 - c_{ideal}), & \text{if } c_{ideal} \neq 1 \\ 0, & \text{otherwise} \end{cases} \tag{2}
165
+ $$
166
+
167
+ where $c_i$ and $c_{ideal}$ are respectively the centrality value of the selected node $i$ and that of the ideal node. Furthermore, the score is 0 if the degree of the selected node is less than 2: it is easy to check that the degree of the selected node is at least 2, and the score is 0 if this condition is not met. Otherwise, the score varies from 0 for a node at the center to 100 for a node of degree at least 2 that lies furthest on the periphery.
168
+
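+ For illustration (numbers are ours): with centrality values normalized to $[0, 1]$, selecting a node of degree at least 2 with $c_i = 0.02$ when the ideal node has $c_{ideal} = 0.01$ gives a score of $100 \cdot (1 - 0.02)/(1 - 0.01) \approx 99$.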
169
+ Task 3. Find a node of degree at least 3 and that has the highest clustering coefficient except 100%.
170
+
171
+ $$
172
+ \text{score}_i = \begin{cases} 100 \cdot (ccf_i - ccf_{worst}) / d, & \text{if } d > 0 \\ 0, & \text{otherwise} \end{cases} \tag{3}
173
+ $$
174
+
175
+ where $d = ccf_{ideal} - ccf_{worst}$, and $ccf_i$, $ccf_{worst}$ and $ccf_{ideal}$ are respectively the clustering coefficient of the selected node $i$, the worst clustering coefficient, and the highest clustering coefficient except 100%. So, the score is 0 if the degree of the selected node is less than 3 or if the clustering coefficient of the selected node is 100%. Otherwise, we compute the score using equation 3.
176
+
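+ For illustration (numbers are ours): if $ccf_{worst} = 0$, $ccf_{ideal} = 0.9$, and the selected node has degree at least 3 with $ccf_i = 0.6$, then $d = 0.9$ and the score is $100 \cdot 0.6 / 0.9 \approx 67$.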
177
+ At the end of the experiment, each participant completes questionnaires related to the usability of the system and the user experience. Since our experiment is done remotely, we organized a video conference with each participant in order to supervise the experiment's process. The experiment consists of a training phase and an evaluation phase. Before starting the training phase, each participant is instructed about the experiment procedure, its environment, and the navigation and interaction techniques. For example, when the mouse hovers over a node, a tooltip shows its clustering coefficient value and its degree. Each participant is also given the essential notions about graphs in order to ensure that they have the knowledge needed for the experiment. In the training phase, the participant is asked to perform the above tasks on a small graph (the karate club graph [28]) and on each surface. Once familiar with the system, the participant moves on to the evaluation phase, with generated graphs. When the participant is ready to start the training or the evaluation, they click on a start button to see the first task to complete, and the next task is automatically displayed after validating the previous task's response.
178
+
179
+ § 5.4 PARTICIPANTS
180
+
181
+ For this project, we needed a number of participants that is a multiple of 24 in order to run each of the 24 configurations mentioned above the same number of times. So, there were 24 participants (9 female, 15 male), recruited among our colleagues in the laboratory and among students: 50% are between 18 and 25 years old, 37.5% are between 25 and 35, and 12.5% are more than 35 years old. Moreover, most participants had no experience in data analysis and data visualization, but some of them had gaming experience.
182
+
183
+ § 6 RESULTS
184
+
185
+ § 6.1 USER PERFORMANCE
186
+
187
+ We present here the main results from the analysis of the data collected during our experiment, through nonparametric tests using the Kruskal-Wallis method [14] and post-hoc tests using Dunn's method [4, 20]. We used nonparametric tests since none of the samples comes from a normal distribution (normality tests were done using the Shapiro-Wilk method [23]). As a reminder, the variables analyzed are the efficiency score, the time and the number of clicks for each task and each surface.
188
+
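+ This analysis pipeline can be reproduced with standard Python tooling; below is a minimal sketch, assuming the scipy and scikit-posthocs packages and illustrative column names of our own:
+
+ ```python
+ from scipy.stats import shapiro, kruskal
+ import scikit_posthocs as sp
+
+ def analyze(df, measure):
+     """df: pandas DataFrame with one row per trial, a 'surface' column
+     and one column per measure (e.g., 'score', 'time', 'clicks').
+     Shapiro-Wilk per group, Kruskal-Wallis omnibus test, then Dunn's
+     post-hoc test if the omnibus test is significant."""
+     groups = [g[measure].values for _, g in df.groupby("surface")]
+     all_normal = all(shapiro(g).pvalue > 0.05 for g in groups)
+     h_stat, p = kruskal(*groups)
+     print(f"{measure}: all normal={all_normal}, H={h_stat:.2f}, p={p:.4f}")
+     if p < 0.05:  # only compare pairs when the omnibus test is significant
+         return sp.posthoc_dunn(df, val_col=measure, group_col="surface",
+                                p_adjust="bonferroni")
+ ```
+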
189
+ § 6.1.1 TASK 1: FIND THE NODE THAT HAS THE GREATEST DEGREE AMONG THE MOST CENTRAL NODE'S NEIGHBORS
190
+
191
+ Efficiency score. After an exploratory data analysis using box plots (Fig. 5), the nonparametric test showed that there is a statistically significant difference between the visualization surfaces that cannot be due to chance (F-statistic = 31.46, p = 0.000 < 0.05). So, we rejected the null hypothesis that the efficiency score is the same for all the visualization surfaces when one is interested in the central nodes. The result of the multiple pairwise comparison (Table 1) showed that the 2D layout that emphasizes the periphery differs in median from all other surfaces.
192
+
193
+ From the statistical test results, we validate hypothesis H1 that the 2D representation that emphasizes the periphery is worse for performing tasks related to the central nodes, compared to all other surfaces. We also validate hypothesis H3 that the 3D projections of the peripheral emphasis not only give the same benefit on the periphery, but also improve the visibility of the center.
194
+
195
+ Time. Considering the results of Fig. 6, we could say that the participants spent more time on the 2D layout that emphasizes the periphery, compared to all other visualization surfaces. However, we failed to reject the hypothesis of equality of medians (F-statistic = 5.990, p = 0.307 > 0.05). From our exploratory analysis results,
196
+
197
+ Table 1: Task 1: Efficiency score: P-values of the multiple pairwise comparison using Dunn's method (significant p-values starred: *p < 0.05, **p < 0.01, ***p ≤ 0.001).
198
+
199
+ |               | 2D central | 2D peripheral | 2D uniform | Cone | Half sphere | Torus |
+ | ------------- | ---------- | ------------- | ---------- | ---- | ----------- | ----- |
+ | 2D central    | 1          | 0.00002***    | 1          | 1    | 1           | 1     |
+ | 2D peripheral | 0.00002*** | 1             | 0.0008***  | 0.00013*** | 0.00067*** | 0.00072*** |
+ | 2D uniform    | 1          | 0.0008***     | 1          | 1    | 1           | 1     |
+ | Cone          | 1          | 0.00013***    | 1          | 1    | 1           | 1     |
+ | Half sphere   | 1          | 0.00067***    | 1          | 1    | 1           | 1     |
+ | Torus         | 1          | 0.0007***     | 1          | 1    | 1           | 1     |
222
+
223
224
+
225
+ Figure 5: Task 1: Efficiency score: Descriptive representation (mean in red dashes, median in purple). The low mean of the 2D that emphasizes the periphery shows that the participants had a low efficiency score on this surface, compared to all other surfaces.
226
+
227
+ we cannot validate hypothesis H1, so we cannot prove that the 2D layout that emphasizes the periphery is worse for performing a task related to the central node, compared to all other visualization surfaces. Moreover, we reject hypothesis H5 that the participants spend less time on the 3D surfaces, compared to the 2D surfaces.
228
+
229
230
+
231
+ Figure 6: Task 1: Time: Descriptive representation (mean in red dashes, median in purple). Here, we could say that the participants spent more time on the 2D that emphasizes the periphery, compared to all other visualization surfaces.
232
+
233
+ Number of clicks. From the results of Fig. 7, we validate hypothesis H6 that the participants clicked less on the 3D surfaces, compared to the 2D representations. Furthermore, the nonparametric test result showed that there is a statistically significant difference between the visualization surfaces, since the F-statistic is 12.554 and the corresponding p-value is 0.028 < 0.05. So we conclude that the type of surface leads to statistically significant differences in the number of clicks. A multiple pairwise comparison (Table 2) confirmed our exploratory analysis that the 2D layout that emphasizes the periphery is
234
+
235
+ different from the other surfaces.
236
+
237
+ Figure 7: Task 1: Number of clicks: Descriptive representation (mean in red dashes, median in purple). We could suppose that the participants clicked less on the 3D surfaces, compared to the 2D.
238
+
239
+ Unlike the score analysis, there is a statistically significant difference between the $2\mathrm{D}$ that emphasizes the periphery and two $3\mathrm{D}$ surfaces (the half-sphere and the torus portion).
240
+
241
+ Ultimately, the 3D surfaces are well suited for carrying out tasks related to the central nodes, because Fig. 7 and Table 2 show that our hypothesis H6 is validated for the number of clicks. Moreover, we validated hypotheses H1 and H3 for the efficiency score. However, we cannot prove that our hypotheses H1 and H5 hold with respect to the time of the task.
242
+
243
+ § 6.1.2 TASK 2: FIND A LEAST CENTRAL NODE THAT HAS AT LEAST 2 NEIGHBORS
244
+
245
+ Efficiency score. From an exploratory analysis (Fig. 8), we validate hypothesis H2 that the 2D representation that emphasizes the center is worse when a task is related to the peripheral nodes, compared to all other visualization surfaces, since the participants did not have good scores on the 2D layout that emphasizes the center. Moreover, there is a statistically significant difference between the 2D layout that emphasizes the center and all the other surfaces (see Table 3), because the F-statistic is 40.31 and the corresponding p-value is 0.000 < 0.05. We also validate hypothesis H4 that the 3D projections of the central emphasis make it possible not only to get the same visual effect on the center, but also to improve the visibility of the periphery.
246
+
247
+ Time. As far as the time analysis is concerned, we could say that the participants spent less time on the 3D surfaces and the uniform 2D, compared to the 2D surfaces that emphasize the center and the periphery (Fig. 9). However, we failed to reject the hypothesis of equality of medians (F-statistic = 1.65, p = 0.90 > 0.05). So, we reject hypothesis H5 that the participants spent less time on the 3D surfaces.
248
+
249
+ Number of clicks. Fig. 10 shows high median and mean values for the 2D layout that emphasizes the periphery and for the cone, compared to all other surfaces. This could suggest that the participants clicked more on the 2D layout that emphasizes the periphery and on the cone. On the other hand, the nonparametric test failed to reject the hypothesis of median equality (F-statistic = 2.93, p = 0.71 > 0.05). So, as
250
+
251
+ Table 2: Task 1: Number of clicks: P-values of the multiple pairwise comparison using Dunn's method (significant p-values starred: *p < 0.05, **p < 0.01, ***p ≤ 0.001).
252
+
253
+ |               | 2D central | 2D peripheral | 2D uniform | Cone | Half sphere | Torus |
+ | ------------- | ---------- | ------------- | ---------- | ---- | ----------- | ----- |
+ | 2D central    | 1          | 0.679         | 1          | 1    | 1           | 1     |
+ | 2D peripheral | 0.679      | 1             | 0.517      | 0.13 | 0.0435*     | 0.0377* |
+ | 2D uniform    | 1          | 0.517         | 1          | 1    | 1           | 1     |
+ | Cone          | 1          | 0.13          | 1          | 1    | 1           | 1     |
+ | Half sphere   | 1          | 0.0435*       | 1          | 1    | 1           | 1     |
+ | Torus         | 1          | 0.0377*       | 1          | 1    | 1           | 1     |
276
+
277
+ Table 3: Task 2: Efficiency score: P-values of the multiple pairwise comparison using Dunn's method (significant p-values starred: *p < 0.05, **p < 0.01, ***p ≤ 0.001).
278
+
279
+ |               | 2D central | 2D peripheral | 2D uniform | Cone | Half sphere | Torus |
+ | ------------- | ---------- | ------------- | ---------- | ---- | ----------- | ----- |
+ | 2D central    | 1          | 0.00000***    | 0.002**    | 0.00003*** | 0.00000*** | 0.001*** |
+ | 2D peripheral | 0.00000*** | 1             | 1          | 1    | 1           | 1     |
+ | 2D uniform    | 0.002**    | 1             | 1          | 1    | 0.752       | 1     |
+ | Cone          | 0.00003*** | 1             | 1          | 1    | 1           | 1     |
+ | Half sphere   | 0.00000*** | 1             | 0.752      | 1    | 1           | 0.986 |
+ | Torus         | 0.001***   | 1             | 1          | 1    | 0.986       | 1     |
302
+
303
304
+
305
+ Figure 8: Task 2: Efficiency score: Descriptive representation (mean in red dashes, median in purple). We could say that the participants did not have good scores on the 2D that emphasizes the center, compared to all other surfaces.
306
+
307
308
+
309
+ Figure 9: Task 2: Time: Descriptive representation (mean in red dashes, median in purple). We could say that the participants spent less time on the 3D surfaces and the uniform 2D.
310
+
311
+ for the time analysis, the observed difference in medians could suggest that the 2D layout that emphasizes the periphery and the cone are worse when one is interested in the peripheral nodes. So, we reject hypothesis H6 that there are fewer clicks on the 3D surfaces.
312
+
313
314
+
315
+ Figure 10: Task 2: Number of clicks: Descriptive representation (mean in red dashes, median in purple). It could suggest that the participants clicked more on the 2D that emphasizes the periphery and on the cone.
316
+
317
+ Based on the various analyses of task 2, that of the efficiency score makes it possible to validate hypotheses H2, that the 2D layout that emphasizes the center is the worst of the visualization surfaces when tasks are related to the peripheral nodes, and H4, that our 3D projections make it possible not only to get the same benefit on the center, but also to improve the visibility of the periphery. Furthermore, Table 3 shows that the half-sphere and the cone are well suited when one is interested in the peripheral nodes. However, the analyses of time and number of clicks show differences in medians that could suggest that the 2D layout that emphasizes the periphery is worse than the other surfaces and that the 3D surfaces are better, but we cannot prove that hypotheses H5 and H6 hold.
318
+
319
+ § 6.1.3 TASK 3: FIND A NODE OF DEGREE AT LEAST 3 AND THAT HAS THE HIGHEST CLUSTERING COEFFICIENT EXCEPT 100%
320
+
321
+ Efficiency score. From the exploratory analysis results (Fig. 11), we could suppose that the participants got good scores on the 2D layout that emphasizes the center. On the other hand, we failed to reject the null hypothesis that the efficiency score is the same for all the visualization surfaces, since the test statistic is 6.0 and the corresponding p-value is 0.31 > 0.05. So, the difference in medians could lead us to say that the 2D layout that emphasizes the center is better for exploring the dense areas of the graph, compared to all other surfaces, and that our hypothesis H7 (3D surfaces are better suited for exploring the dense areas of a graph) should be rejected, but the statistical analysis failed to demonstrate it.
322
+
323
324
+
325
+ Figure 11: Task 3: Efficiency score: Descriptive representation (mean in red dashes, median in purple). We could suppose that the participants got good efficiency scores on the 2D that emphasizes the center.
326
+
327
+ Time. As for the score analysis, Fig. 12 shows the result of an exploratory data analysis that could lead one to think that the participants spent less time on the uniform 2D, compared to all other surfaces. However, the median values are not significantly different, since the nonparametric test did not reject the hypothesis of median equality (F-statistic = 1.04, p = 0.96 > 0.05). So, we reject hypotheses H5, that the participants spend less time on the 3D surfaces, and H7, that the 3D surfaces are better than the 2D surfaces for exploring the dense areas of a graph.
328
+
329
330
+
331
+ Figure 12: Task 3: Time: Descriptive representation (mean in red dashes, median in purple). We could suppose that participants spent less time on the uniform 2D, compared to other surfaces.
332
+
333
+ Number of clicks. Fig. 13 shows that the median value of the torus portion is smaller than the median values of the other visualization surfaces. So we could say that the participants clicked less on the torus portion, compared to all other surfaces, which would support hypothesis H6 that there are fewer clicks on the 3D surfaces. However, we failed to reject the null hypothesis that the number of clicks is the same for all the visualization surfaces when tasks are related to the dense areas (F-statistic = 5.0, p = 0.42 > 0.05). So, we reject hypothesis H6, that there are fewer clicks on the 3D surfaces, and hypothesis H7, that the 3D surfaces are better suited than the 2D for exploring the dense areas of a graph.
334
+
335
336
+
337
+ Figure 13: Task 3: Number of clicks: Descriptive representation (mean in red dashes, median in purple). It could suggest that the participants clicked less on the torus portion, compared to all other surfaces.
338
+
339
+ Unlike the various analyses carried out for tasks 1 and 2, those of task 3 showed in the exploratory analysis that some 3D visualization surfaces are better than the 2D surfaces, but the statistical tests showed that the differences of medians in efficiency score, in time and in number of clicks are not statistically significant when one is interested in the dense areas of the graph. So, we reject hypotheses H5, that the participants spend less time exploring and analyzing graphs on the 3D surfaces, H6, that there are fewer clicks on the 3D surfaces, and H7, that the 3D surfaces are better than the 2D for exploring the dense areas of a graph.
340
+
341
+ § 6.2 USER EXPERIENCE
342
+
343
+ As mentioned above (in Sect. 5.3), at the end of the experiment, the participants were asked to complete a questionnaire related to the system usability and to their experience. As far as their experience is concerned, they were asked whether they understood the requested tasks, whether they had difficulty interacting with the system, and whether they experienced visual fatigue. The results were that 23 participants out of 24 understood the requested tasks, 7 out of 24 had difficulty interacting with the system, and 7 out of 24 declared having visual fatigue.
344
+
345
+ The participants were also asked to specify the surfaces that enabled them to better perform the requested tasks, on the one hand, and to identify the surfaces with which they had difficulty completing the requested tasks, on the other hand. Based on their feedback, the 3D surfaces significantly contributed to the successful completion of the various tasks, compared to the 2D representations (uniform 2D, and the 2D that emphasizes the center or the periphery). Fig. 14 and Fig. 15 illustrate the distribution of user preferences for successful and unsuccessful completion, respectively. Moreover, Fig. 15 shows that the 2D that emphasizes the center and the 2D that emphasizes the periphery alone total 80% of the votes, while the cone receives 0%.
346
+
347
+ § 7 DISCUSSION
348
+
349
+ Some nodes would be less visible with the use of the straight edges in the proposed methods of Kobina et al. [13]. Indeed, combining the peripheral emphasis and the projection of the nodes and edges on the half-sphere or on the torus portion, some intermediate nodes would be less visible due to the surface, unlike the conical projection. Furthermore, with uniform projections, some nodes and edges would be less visible in the dense areas according to the projection surface. So, projecting the edges onto the visualization surface, we reduced the overlap of the nodes and the edges, and we therefore improved the overall readability of the graph.
350
+
351
+ (bar chart of the number of votes per surface)
352
+
353
+ Figure 14: Surfaces that the participants preferred when performing tasks.
354
+
355
+ (bar chart of the number of votes per surface)
356
+
357
+ Figure 15: Surfaces that the participants disliked when performing tasks.
358
+
359
+ As far as our evaluation is concerned, the results did not allow us to identify which representation is best suited to visualize large graphs and to improve graph analysis. However, we partially validated hypotheses H1, H2, H3, H4 and H6, since some statistical test results showed differences in efficiency score and in number of clicks.
360
+
361
+ Indeed, these results validated hypothesis H1, that the 2D layout that emphasizes the periphery is the worst of the surfaces for visualizing the center, and hypothesis H2, that the 2D layout that emphasizes the center is the worst of the surfaces for visualizing the periphery, with respect to the efficiency score of tasks 1 and 2. Moreover, we validated: 1) H3, that combining the peripheral emphasis with the different 3D projections not only provides the same advantages on the periphery as the 2D peripheral emphasis, but also improves the visibility of the center; and 2) H4, that combining the central emphasis with the different 3D projections not only provides the same benefits on the center as the 2D central emphasis, but also improves the visibility of the periphery, again with respect to the efficiency score of tasks 1 and 2. We also validated hypothesis H6, that there are fewer clicks on the 3D surfaces, with respect to the number of clicks in task 1.
362
+
363
+ On the other hand, we rejected hypotheses H5 and H7, since we were not able to prove that: 1) participants spend less time on the 3D surfaces, and 2) the 3D surfaces are better than the 2D for exploring the dense areas of a graph. We could therefore say that the 2D versus 3D debate still persists [3]. Nevertheless, participants' feedback showed that the 3D surfaces could be well suited for completing the various requested tasks successfully, compared to the 2D surfaces.
364
+
365
+ § 8 CONCLUSION
366
+
367
+ In this work, we improved the edge drawing of some previously proposed 3D graph visualization methods. Our improvements consist in projecting the edges onto each visualization surface in order to reduce the overlap of nodes and edges.
368
+
369
+ An online human-centered experimental study was conducted in order to compare the efficiency score, the time to complete tasks and the number of clicks across the various visualization surfaces. Our experiment showed no statistically significant difference in terms of time or errors between these surfaces. However, the participants reported a better experience with the 3D layouts when carrying out the requested tasks, compared to the 2D layouts. Thus, adding a third dimension to the 2D radial views improves the user experience.
370
+
371
+ § 9 FUTURE WORK
372
+
373
+ In the future, we will study in detail the results obtained with large graphs in order to check whether the current trends are confirmed. Moreover, we have projected the 2D views onto other types of 3D surfaces (a parabola, a Gaussian, a hyperboloid and a square root). We will study the results of these contributions in more detail in order to identify the most appropriate approach, or combination of approaches, for visualizing large and complex graphs.
374
+
375
+ In order to declutter graphs in the methods proposed by Kobina et al. [13], we have already implemented the kernel density estimation edge bundling algorithm [12] using computer graphics acceleration techniques. Fig. 16 illustrates the result on a graph generated with the Stochastic Block Model algorithm presented in section 5.3. With the bundled graph, it is possible to see how groups of nodes are connected to each other, compared to the unbundled graph. However, we lose the detailed connectivity of a node (for instance, the edges between a node and its neighbors). It could therefore be useful to combine the bundled and the unbundled edges for further analysis if one needs to switch between detailed and bundled views.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/dcbsb4qTmnt/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,363 @@
1
+ # Auto-Cucumber: The Impact of Autocorrection Failures on Users’ Frustration
2
+
3
+ Ohoud Alharbi*
4
+
5
+ King Saud University
6
+
7
+ Wolfgang Stuerzlinger†
8
+
9
+ Simon Fraser University
10
+
11
+ ## Abstract
12
+
13
+ Many mobile users rely on autocorrection mechanisms during text entry on their smartphone. Previous studies investigated the effects of autocorrection mechanisms on typing speed and accuracy, but did not explore the level of frustration and perceived mental workload often associated with autocorrection. In this paper, we investigate, through a mixed-methods user study, the effect of autocorrection failures on the user's frustration, mental and physical demand, performance, and effort. We identified that perceived mental and physical demand, and frustration, are directly affected by autocorrection.
14
+
15
+ Index Terms: Human-centered computing-Interaction design and evaluation methods—Keyboards;
16
+
17
+ ## 1 INTRODUCTION
18
+
19
+ Empowered by the growth of text-based social media, many people prefer writing text messages or social media posts over making phone calls. To keep up with this growth, text entry methods have been improved by providing features that enable users to type as fast as possible and correcting their typing errors as they go. Yet, being fast and accurate can be a challenge on touch screen keyboards, due to various issues, including misspelling the word, using the wrong touch locations, missing a space, and compounded versions of these.
20
+
21
+ Still, a frustrating interaction with a computing device, resulting from typing errors or a wrong autocorrect, can cause users to experience negative emotions toward the system and to potentially abandon using some functionality [30]. In that moment of frustration, users might not be aware how much autocorrect has already improved and keeps improving with continuous use and upgrades to algorithms. To better understand the origins of current user reactions, this paper focuses on an analysis of the behaviors people exhibit in text entry with respect to autocorrect and its failures and the associated costs in terms of perceived mental and physical demand, and user frustration.
22
+
23
+ Text entry research typically collects data to evaluate the speed and accuracy of a new interaction technique, such as Drag-n-Drop, Drag-n-Throw, and Magic Key [53]. Studies have examined the effect of keyboard layouts on typing behavior, e.g., [6, 19, 21, 28, 49], while other studies have investigated the time users spent interacting with autocorrections and the prediction panel while entering text, including when prediction and autocorrect approaches fail, e.g., [1, 2, 10]. However, there are no studies that investigate the effect of failing autocorrections on the user's emotions and their level of frustration. Yet, cognitive theory research has shown that system failures can activate negative emotions such as anger, annoyance, and frustration [35].
24
+
25
+ This paper presents a user study that investigates the effect of various degrees of failing autocorrection on the user's frustration and perceived mental workload. We analyze the results through metrics related to individual keystrokes, but also use qualitative methods, such as survey questions, observations, and interviews. After a discussion of related work, we present the results of our study (N = 20), in which we observed the effect of failing autocorrection on users' mental workload. Results show that perceived mental and physical demand, and frustration levels, are affected by autocorrection. There is a need to further investigate ways to give users the ability to temporarily adjust the behavior of autocorrection without turning this generally beneficial feature off permanently. Based on user feedback, we propose mechanisms such as adding a (single-step) button on the keyboard to quickly toggle autocorrection, or displaying a confidence score at the side of the screen.
26
+
27
+ ## 2 RELATED WORK
28
+
29
+ Frustration can lead users to believe that they are failing a task [7]. Further, a frustrating interaction with a computing device can cause users to feel negatively toward the system and encourage them to turn off some aspects of its functionality, such as autocorrect [30]. If feelings of frustration are strong, they may even make a user abort or reconsider an action [46]. For instance, excessive download delays might have a negative impact on the brand perceived to be responsible for the delay [42]. Feelings of frustration are also linked to the perceived duration of activities [8, 17]. The potential negative impact is substantial when users are frustrated and unable to respond to failures or give feedback [35].
30
+
31
+ Nevertheless, it is not always the case that negative emotions will increase as failures occur more frequently. While there will generally be a negative emotional response to failure, there may also be a lowering of expectations, which tends to make emotional responses to subsequent failures less intense [36, 44].
32
+
33
+ ### 2.1 Predictive Features
34
+
35
+ As errors contribute substantially to slow real-life text entry speeds, facilitating error correction is a key challenge for text entry [26]. Errors are costly in time and effort, and can negatively affect user perception of text entry quality. Yet, the visibility of errors and suggestions for error correction can also increase both perception and interaction costs, which might even reduce text entry speed, e.g., [27, 32, 39, 40], and in some cases decrease writing accuracy [4]. Previous work has identified that word correction and completion features on mobile keyboards could save up to 45% of keystrokes [16], but this promise rarely results in a corresponding increase in typing speed [15].
36
+
37
+ If an appropriate language model is used, predictive algorithms can support effective error correction and completion [16]. However, many other factors play a role in the effectiveness of predictive features [31], including the experience of the user [38]. To study the effect of failures, and how users experience them, in a systematic manner, we strategically caused autocorrection to fail at controlled frequencies in our study.
38
+
39
+ ### 2.2 Frustration and Mental Workload Assessment
40
+
41
+ Workload is a term used to characterize the effort associated with a job and refers to the amount of work that needs to be performed ('the work'), usually within a fixed period of time ('the load'). Mental workload is the level of measurable mental effort put forth by an individual in response to one or more cognitive tasks [52]. We can assess mental workload using physiological or self-report measures.
42
+
43
+ ---
44
+
45
+ *e-mail: omalharbi@ksu.edu.sa
46
+
47
†e-mail: w.s@sfu.ca
48
+
49
+ ---
50
+
51
+ Physiological measures of mental workload can also capture frustration, since such feelings are accompanied by physiological changes. Ceaparu et al. [8] measured the physiological response associated with workload by simulating frustrating experiences that someone might have when playing a game; at specific intervals the mouse would fail, leading to frustration. Yet, emotional experiences may be influenced by many factors, such as individuals' memory, life history, culture, age, and gender [25]. More research is thus needed to identify how different physiological methods, e.g., skin conductance and heart rate variability, can be combined to develop more objective measures of frustration that are both effective and reliable. We believe that physiological measures are currently not yet reliable enough to serve as a main measure of frustration.
52
+
53
+ Alternatively, self-reports are subjective assessments in which participants rate their perceived workload for a task, system, or other aspect of performance. With this approach, researchers ask participants to rate their response after an intervention or interruption.
54
+
55
+ To compare self-reports with physiological measures, Cooper et al. [9] evaluated four sensors in terms of their utility for frustration research: a camera focused on the participant's face, a skin conductance bracelet, a pressure-sensitive mouse, and a chair seat capable of detecting posture. Participants were presented with questions such as "how [interested/excited/confident/frustrated] do you feel right now?" and rated their current state on a scale of 1 to 5. The authors found that the most accurate results came from the self-reported assessment.
56
+
57
+ Further, the NASA TLX is a popular and well-validated self-report questionnaire to measure the experienced workload and was initially developed to measure workload in the military [23]. It has been applied in a variety of settings in human-computer interaction research [11]. The NASA TLX combines six scales, including mental demand, physical demand, effort, and frustration.
58
+
59
+ Frustration is an important component of mental workload. Many researchers developed questionnaires to specifically measure this emotion. Ceaparu et al. [8] forced a frustrating situation and asked participants to subjectively report on each frustrating experience, once it occurred during the session. Van Steenburg et al. [47] and Gelbrich [18] developed questionnaires that measure frustration in an imagined frustrating situation. Goldsmith et al. [20] developed an online questionnaire including scales that measure attitude and frustration tolerance [22]. Richins [41] used a method based on ratings of seven frustration-related adjectives (frustrated, uncomfortable, anxious, stressed, strained, annoyed, and awkward). Similarly, Wu and Lo [50] developed ten items aimed at measuring how a telecommunications service is performing relative to customer expectations. Droit-Volet and Wearden [14] measured the mood of participants throughout the day using an experience-sampling method or short survey.
60
+
61
+ The approach of repeatedly measuring mood states has been used in a number of further studies [12-14]. Since repeatedly using self-report measures is a standard method in human-computer interaction research, we decided to adopt this approach by repeatedly measuring workload and frustration states through a short survey based on the NASA TLX questions.
62
+
63
+ Finally, the complementary combination of self-reported measures with qualitative analysis can yield an even better representation of a user's mental state [24]. For instance, an exploratory study [24] employed questionnaires, think-aloud protocols, and in-depth interviews to determine the primary points of critique and satisfaction with the information provided on a website, by examining the properties of the website, the search process, and the mood alterations of the participants in combination. The think-aloud method often provides good explanations of users' thought processes and reveals changes of mood [37].
64
+
65
+ [Figure 1 screenshot: the task instruction "Copy the phrase below, while ignoring capitalization", the presented phrase "i would suggest you and mark address this together", a "Next" button with a progress counter (1/50), and an error-rate display.]
+
+ Figure 1: The webpage that participants saw during the experiment.
74
+
75
+ #### 2.2.1 Motivation and Experimental Approach
76
+
77
+ Our work aims to highlight the potential side effects of "smart" techniques that are applied automatically, such as autocorrection. We investigate the effect of failing autocorrections on the user's level of frustration and perceived mental workload. Based on the above review of methods to measure frustration and mental workload, we decided to combine different methods to arrive at a more complete picture of the outcome. Following previous work [24], we combine self-report questions with qualitative protocols, more specifically think-aloud and interviews, to better understand the reactions of our participants. According to previous work, this approach currently still yields a better representation of users' mental states than physiological measures [24]. We also follow Ceaparu et al.'s [8] approach of forcing a frustrating event (autocorrection failures) and asking participants to subjectively report on their experience during the session.
78
+
79
+ ## 3 APPARATUS
80
+
81
+ We used a web application for data collection. We implemented the system using HTML, CSS, JavaScript, and PHP. We then used Amazon Web Services (AWS) to host our web application. The application includes a custom autocorrection method that works independent of various operating system implementations. The system presents prompts with text for the user to enter and logs all occurring events at the keystroke level (Figure 1).
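+
+ To illustrate how such keystroke-level logging can be realized in a web page, here is a minimal JavaScript sketch; the element ID and the record fields are hypothetical choices of ours, not the authors' actual code.
+
+ ```js
+ // Minimal sketch of keystroke-level event logging (hypothetical element ID
+ // and field names; one possible realization, not the paper's implementation).
+ const events = [];
+ const input = document.getElementById('phrase-input'); // hypothetical ID
+
+ // 'input' events fire on every text change, including autocorrections.
+ input.addEventListener('input', (e) => {
+   events.push({
+     t: Date.now(),      // timestamp of the event
+     type: e.inputType,  // e.g., 'insertText', 'deleteContentBackward'
+     data: e.data,       // the inserted character(s), if any
+     text: input.value,  // the full text after the change
+   });
+ });
+
+ // Raw touch events capture the touch locations on the screen.
+ document.addEventListener('touchstart', (e) => {
+   const touch = e.touches[0];
+   events.push({ t: Date.now(), type: 'touch', x: touch.clientX, y: touch.clientY });
+ });
+ ```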
82
+
83
+ ### 3.1 Instructions
84
+
85
+ Participants initially needed to acknowledge that they had read the instructions and to also give their consent for data collection. These initial instructions asked participants to temporarily disable the predictive features on their phones. Once participants agreed to participate, they were instructed on the procedure and then started the English language text entry tasks. The main part of the experiment showed only a single line of instruction, a presented phrase, and a textbox to input that phrase, see Figure 1, as well as the user's own keyboard, which they used for text entry. We asked participants to use their own device and their own keyboard layout, because we wanted to eliminate the associated learning factor and any potential influence of such learning on their frustration. Users needed to tap on the "Next" button to move to the next phrase, where they then also saw an up-to-date average for their text entry speed and error rate. In between blocks of 5 phrases, participants were presented with questions about how much mental demand/physical demand/effort/frustration they felt at that moment, rated on a scale of 1 to 7, see Figure 2. We purposely removed the questions regarding temporal demand and performance from the NASA TLX, since in our instructions we asked participants to type as fast as possible and to maintain a low error rate. These questions appeared before the task and were then shown each time after the users had entered 5 phrases. Following previous work, we asked the users to answer the questions repeatedly to better understand the contingencies of their behavior [12-14]. We used transcription typing to measure participants' typing speed, as this approach enables us to study motor performance while excluding cognitive aspects related to the process of text generation [38].
86
+
87
+ ![01963e62-b6c6-7651-b2b9-3e7d2cf9c4b0_2_151_156_716_649_0.jpg](images/01963e62-b6c6-7651-b2b9-3e7d2cf9c4b0_2_151_156_716_649_0.jpg)
88
+
89
+ Figure 2: Our short survey to probe frustration, effort, and mental and physical demand.
90
+
91
+ ### 3.2 Custom Autocorrection
92
+
93
+ To ensure that we could correctly log every text entry action, we asked participants to disable their own predictive system, including their prediction panel and autocorrection. Another reason for this decision was that we needed to manipulate some internals of the autocorrection mechanisms in our study, something that current system APIs do not permit. We thus used a custom autocorrection algorithm that gets triggered when an inputted word does not match the word in the presented text.
94
+
95
+ For autocorrection, we exposed participants to four different conditions: optimal, failures 10% of the time, failures 20% of the time, and no autocorrection. In the optimal condition, if the misspelled word is close enough to the intended one, our system autocorrects it to match the presented word. This condition always produces perfect autocorrections, similar to the "100% accurate" autocorrect condition in [5], and closely resembles an oracle.
96
+
97
+ For autocorrection that fails 10% (20%) of the time, we adjust the system to produce a correct autocorrection 90% (80%) of the time (using the optimal method), but to produce only a "close-enough" result in the remaining 10% (20%) of cases. To create such an almost-correct result, our implementation searches for similar words using the Levenshtein distance [33] and then chooses the one with the lowest editing distance, i.e., a word that looks like a plausible autocorrect. We used a dictionary with the 40,000 most frequent words from Project Gutenberg¹. We verified that our prediction algorithm matches commercial systems reasonably well. For this, we randomly chose phrases and compared the output of our system with that of an Android 9 keyboard using the same input text. We found that the outputs match 94% of the time, which is reasonably high and likely at a level where the difference is not easily perceived by naive users.
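+
+ To make the mechanism concrete, the following sketch shows one way to implement such failure injection; it is our reconstruction under the stated assumptions (a `dictionary` array holding the 40,000 most frequent words), not the authors' actual code.
+
+ ```js
+ // Standard Levenshtein edit distance via dynamic programming [33].
+ function levenshtein(a, b) {
+   const d = Array.from({ length: a.length + 1 },
+     () => new Array(b.length + 1).fill(0));
+   for (let i = 0; i <= a.length; i++) d[i][0] = i;
+   for (let j = 0; j <= b.length; j++) d[0][j] = j;
+   for (let i = 1; i <= a.length; i++)
+     for (let j = 1; j <= b.length; j++)
+       d[i][j] = Math.min(
+         d[i - 1][j] + 1,                                    // deletion
+         d[i][j - 1] + 1,                                    // insertion
+         d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)); // substitution
+   return d[a.length][b.length];
+ }
+
+ // With probability failureRate, return a plausible-but-wrong word (the
+ // dictionary word closest to the input, other than the intended one);
+ // otherwise behave like the optimal condition.
+ function autocorrect(typed, intended, failureRate, dictionary) {
+   if (Math.random() >= failureRate) return intended; // optimal behavior
+   let best = null, bestDist = Infinity;
+   for (const word of dictionary) {
+     if (word === intended) continue;
+     const dist = levenshtein(typed, word);
+     if (dist < bestDist) { bestDist = dist; best = word; }
+   }
+   return best; // the forced, "close-enough" failure
+ }
+ ```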
98
+
99
+ ### 3.3 Data Logging
100
+
101
+ Through our web-based system, we recorded each text change or touch event, which fairly closely corresponds to the keystroke logging level, with a corresponding timestamp. For each phrase, we recorded the following data: device orientation (portrait/landscape), presented text, typed text, the complete input stream, keystrokes per character, words per minute, and total time per phrase. Moreover, we also logged all autocorrections, cursor movements, and error messages that were triggered during text entry. This comprehensive logging enables us to fully replay the input of each phrase.
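+
+ Purely as an illustration, one per-phrase record could be structured as follows; the field names are our own and merely mirror the data listed above (the `events` array refers to the logging sketch in Section 3).
+
+ ```js
+ // Hypothetical shape of one per-phrase log record (field names are ours).
+ const phraseRecord = {
+   orientation: 'portrait',  // device orientation (portrait/landscape)
+   presented: '...',         // the presented text
+   transcribed: '...',       // the final typed text
+   inputStream: events,      // the complete keystroke-level input stream
+   kspc: 1.12,               // keystrokes per character
+   wpm: 34.5,                // words per minute
+   totalTimeMs: 10400,       // total time for the phrase
+   autocorrections: [],      // all triggered autocorrections
+   cursorMoves: [],          // all logged cursor movements
+   errors: [],               // error messages triggered during entry
+ };
+ ```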
102
+
103
+ ### 3.4 Phrase Set
104
+
105
+ We used 30 phrases randomly selected from the Enron Mobile Email phrase set [48]. We removed all non-alphabetic characters, including punctuation, and made sure that the selected phrases contained at least three words. We decided to exclude non-alphabetic characters and punctuation in the study, as such characters introduce a potential confounding source of variation in the dependent measures and threaten internal validity [34]. The phrases in the full set (774 sentences) are generally of short to medium length, averaging 6.1 words (SD 1.68, ranging from 3 to 12) and 29.9 characters (SD 10.13, ranging from 14 to 67).
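+
+ The selection and cleanup procedure could be sketched as follows; this is our assumption of the steps described above, not the authors' actual script.
+
+ ```js
+ // Keep only letters and spaces, collapse whitespace, and lowercase.
+ function cleanPhrase(phrase) {
+   return phrase.toLowerCase().replace(/[^a-z ]/g, ' ')
+     .replace(/ +/g, ' ').trim();
+ }
+
+ // Randomly pick n cleaned phrases with at least three words.
+ // (The naive sort-based shuffle is biased, but fine for illustration.)
+ function selectPhrases(corpus, n = 30) {
+   return corpus.map(cleanPhrase)
+     .filter(p => p.split(' ').length >= 3)
+     .sort(() => Math.random() - 0.5)
+     .slice(0, n);
+ }
+ ```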
106
+
107
+ ## 4 User Study
108
+
109
+ The purpose of this study was to compare 4 conditions of autocorrection (optimal, failing 10%, failing 20%, and none) and to measure the associated perceived mental and physical workload of the user. Previous work identified that the largest error rate at which typists would still attempt to type and let autocorrect fix their errors ranges between approximately 15% and 25% [5]. In our pilot studies, we initially experimented with conditions that exaggerated the number of failures (up to 40% failing autocorrects). Yet, we observed that high failure-rate conditions (above 25%) were extremely confusing for participants. Thus, we decided to exclude such conditions from our main study and to examine only the 10% and 20% options. With similar conditions, we also ran a pilot study with a within-subjects design and found indications of a substantial carryover effect that influenced participants' answers, based on the sequence in which the conditions appeared.
110
+
111
+ ### 4.1 Design
112
+
113
+ We used a between-subjects design. Each participant entered 30 phrases with one of the 4 conditions (no, 20% failing, 10% failing, and optimal autocorrection), excluding two practice phrases. In total we collected (20 participants × 30 phrases) = 600 phrases.
114
+
115
+ ### 4.2 Procedure
116
+
117
+ Before starting this study, participants were asked to complete a background questionnaire about their age, gender, English proficiency, and their experience with their current touchscreen device keyboard, including what they thought about the performance of their current autocorrection system. We also gave them a full demonstration of our system and let them experience text entry using it for entering a few training phrases (using the chosen condition for that participant, i.e., if their assigned condition was optimal autocorrection, they experienced this already in the training). During the study, participants were asked to enter 30 English phrases using our system and to answer questions about how much mental and physical demand, effort, and frustration they felt at the moment, see Figure 2. Each participant answered the questions seven times, once before the typing task started and the remaining six times after entering each block of five phrases. Additionally, we asked them to use the think-aloud method, which we explained to them during the training phase.
118
+
119
+ ---
120
+
121
¹ https://en.wiktionary.org/wiki/Wiktionary:Frequency_lists
122
+
123
+ ---
124
+
125
+ At the end of the session, we conducted a semi-structured interview targeting behaviors we had observed or comments users had made during the text entry sessions. Further, we also asked participants about their own stories around autocorrection, i.e., positive or negative episodes that they had encountered in the past. We also asked them about how they believed that autocorrection influenced their typing speed and correctness, and how autocorrection made them feel. Other questions inquired about the type of words that they find hardest to get correct with current autocorrect systems and finally if they had any design recommendations around autocorrection.
126
+
127
+ Including signing consent forms, filling out questionnaires, the main typing tasks, and the interview, each session lasted about 45 minutes on average. We used two cameras on tripods, as well as voice recording, to assist observation. Figure 3 shows the setting of the experiment. One camera was directed at the mobile screen and the second at the participant's face to record their expressions. The user study was approved by the research ethics board of the local university.
128
+
129
+ ### 4.3 Participants
130
+
131
+ We recruited twenty participants (10 females, 10 males) for the study through advertising to a student participant pool at Simon Fraser University. Of these participants, 14 were between 18 and 24 years old and 6 between 25 and 34. Half of the participants indicated that they use a mobile keyboard with Latin characters, i.e., the modern English alphabet, constantly during the day, 30% more than once per hour, and 20% more than once a day.
132
+
133
+ Even though our task did not require high English proficiency, we created a quick English quiz using material from http://iteslj.org for an objective assessment of English skills. The "overall success rate" was the final score participants achieved in our language proficiency quiz, which consisted of six grammar questions: two easy, two medium, and two hard. The overall success rate was 92% (SD = 13), which corresponds to reasonably high English proficiency, as is to be expected in a university environment. Given this level of proficiency, we did not follow up on this data.
134
+
135
+ Among our participants, 65% used Android or variants (Samsung, OxygenOS, etc.), while 35% used Apple iOS. Most (90%) indicated that they normally have autocorrection activated on their devices. When we asked them to rate the predictive features of their mobile devices on a 5-point Likert scale (very good, good, acceptable, poor, and very poor), 5% chose very good, 55% good, 35% acceptable, and 5% very poor.
136
+
137
+ ## 5 RESULTS
138
+
139
+ We used one-way ANOVAs with an alpha of 0.05 for all analyses. A Shapiro-Wilk test identified that the assumption of a normal distribution was satisfied, and all other preconditions of ANOVA were also met. We used Tukey's Honest Significant Difference (HSD) test for post-hoc analyses. To characterize effect sizes, we used the partial eta squared ($\eta_p^2$) measure.
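+
+ For readers who want to relate the reported F statistics to the effect sizes, partial eta squared can be computed directly from F and its degrees of freedom; a small sketch (our addition, using the entry-speed result from Section 5.0.1 as an example):
+
+ ```js
+ // Partial eta squared from an F statistic with df1/df2 degrees of freedom:
+ // eta_p^2 = (F * df1) / (F * df1 + df2)
+ function partialEtaSquared(F, df1, df2) {
+   return (F * df1) / (F * df1 + df2);
+ }
+
+ // Example with the entry-speed result, F(3, 136) = 3.491:
+ console.log(partialEtaSquared(3.491, 3, 136).toFixed(2)); // "0.07"
+ ```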
140
+
141
+ ![01963e62-b6c6-7651-b2b9-3e7d2cf9c4b0_3_926_147_720_551_0.jpg](images/01963e62-b6c6-7651-b2b9-3e7d2cf9c4b0_3_926_147_720_551_0.jpg)
142
+
143
+ Figure 3: The experiment setting.
144
+
145
+ #### 5.0.1 Performance
146
+
147
+ In line with common text entry study protocols, we used the words per minute (WPM) metric to measure entry speed [3, 45]. Time was measured from the first keystroke to the last. We observed a statistically significant effect on entry speed across the four conditions, F(3, 136) = 3.491, p = .018, with optimal being the fastest option and a medium effect size $\eta_p^2 = .07$, see Figure 4.
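+
+ A minimal sketch of the standard WPM computation [3, 45], assuming the final transcribed string and the elapsed time from the first to the last keystroke; five characters count as one word, and one character is subtracted because timing only starts at the first keystroke:
+
+ ```js
+ // Words per minute: ((|T| - 1) / seconds) * 60 / 5
+ function wordsPerMinute(transcribed, seconds) {
+   return ((transcribed.length - 1) / seconds) * (60 / 5);
+ }
+ ```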
148
+
149
+ We also measured the verification time, i.e., the "reviewing time", which is the time participants took to review a phrase before moving to the next. For this, we measured the time from the last keystroke until participants pressed the "next" button. Verification times were statistically significantly different across conditions, F(3, 136) = 3.51, p = .04, with a large effect size $\eta_p^2 = .4$. Optimal and 10% autocorrection required less verification time, see Figure 4.
150
+
151
+ The difference in the number of keystrokes per character (KSPC) [3, 45] across conditions was also statistically significant, F(3, 136) = 4.97, p = .013, with a large effect size $\eta_p^2 = .48$. No autocorrection had a higher KSPC, as shown in Figure 4, corresponding to more keystrokes spent on error correction.
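+
+ KSPC can be computed analogously from the logged input stream; a minimal sketch [45]:
+
+ ```js
+ // Keystrokes per character: the number of keystrokes in the input stream
+ // divided by the length of the final transcribed text.
+ function kspc(keystrokeCount, transcribed) {
+   return keystrokeCount / transcribed.length;
+ }
+ ```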
152
+
153
+ We analyzed the average error rate (ER) of the final submitted text and found that it was not significantly different across conditions, F(3, 136) = 2.256, p = .085.
154
+
155
+ We further investigated the use of error correction methods, such as the number of backspaces and cursor movements. We found the use of backspaces to differ statistically significantly across conditions, F(3, 136) = 5.39, p = .009, with a large effect size $\eta_p^2 = .5$, but the use of cursor movements did not, F(3, 136) = 2.36, p = .11. Optimal and 10% autocorrection required fewer backspaces.
156
+
157
+ The average rate of autocorrection events that occurred due to participants making typing errors was M = 12.60% (SD = 18.29) for the 20% failing condition, M = 10.51% (SD = 5.47) for the 10% failing condition, and M = 17.99% (SD = 14.70) for the optimal condition. Of those recorded events, the average percentage of forced failures, i.e., where the system simulated a failure, was 19.66% (SD = 14.44) for the 20% failing condition, 7.5% (SD = 5.01) for the 10% failing condition, and 0% for the optimal condition.
158
+
159
+ #### 5.0.2 The NASA Task Load Index
160
+
161
+ We observed a statistically significant effect on frustration, as measured by the corresponding question from the NASA TLX, F(3, 136) = 12.686, p < .001, with a large effect size $\eta_p^2 = .22$. Optimal stood out as the least frustrating condition. There was also a statistically significant effect on mental demand across conditions, F(3, 136) = 15.361, p < .001, with a large effect size $\eta_p^2 = .25$. No autocorrection was significantly more mentally demanding. Additionally, we observed a statistically significant effect on physical demand across conditions, F(3, 136) = 19.51, p < .001, with a large effect size $\eta_p^2 = .30$. Here, no autocorrection and 20% autocorrection were the two most physically demanding conditions. Finally, we observed a statistically significant effect on effort across conditions, F(3, 136) = 8.55, p < .001, with a large effect size $\eta_p^2 = .16$. No autocorrection and 20% autocorrection required the most effort. The means and results from the post-hoc analyses are presented in Figures 5 and 6. As we had prompted participants with our survey seven times during the study to investigate changes in frustration and workload over time, we illustrate the fluctuations of the answers in Figures 7 and 8.
162
+
163
+ ![01963e62-b6c6-7651-b2b9-3e7d2cf9c4b0_4_134_145_1525_1822_0.jpg](images/01963e62-b6c6-7651-b2b9-3e7d2cf9c4b0_4_134_145_1525_1822_0.jpg)
164
+
165
+ Figure 4: a) Average words per minute (WPM), b) average keystrokes per character (KSPC), and c) average verification time (in seconds) for each condition. The three asterisks (***) indicate a significant difference with $p \leq 0.001$.
166
+
167
+ Figure 5: a) Average effort, b) average mental demand, and c) average physical demand for each condition. The three asterisks (***) indicate a significant difference with $p \leq 0.001$.
168
+
169
+ ![01963e62-b6c6-7651-b2b9-3e7d2cf9c4b0_5_151_147_716_560_0.jpg](images/01963e62-b6c6-7651-b2b9-3e7d2cf9c4b0_5_151_147_716_560_0.jpg)
170
+
171
+ Figure 6: Average frustration for each condition. The three asterisks (***) indicate a significant difference with $p \leq 0.001$.
172
+
173
+ #### 5.0.3 Interviews
174
+
175
+ At the end of the session, we conducted a semi-structured interview with each participant, focusing on any observed behaviors or comments users made during text entry. We analyzed what people told us by first coding our interview data in a systematic manner and then identifying larger themes from that data.
176
+
177
+ When we asked participants about their experience with autocorrection, Participant 4 mentioned that it offers easy help to accelerate typing. Participant 6 stated, "It helps me type so much faster than all my friends because they don't use it. So, I would say almost all of the time, [it] is a good experience," and Participant 2 said, "it's a mini helper." Still, Participant 11 indicated that it can slow them down, disturb them, and hinder communication. Participant 13 had a more balanced view and said, "It can be helpful, but also detrimental."
178
+
179
+ We also asked for stories about (positive or negative) episodes that participants had encountered with autocorrection. Participant 5 said, "my friend was complaining about autocorrect in a text and it was changed to 'auto cucumber'," which was humorous enough to make it into the title of this paper. However, Participant 7 said, "a friend of mine sent an entirely different text to his wife because of autocorrection. She was so mad. He had to [provide] a lot of explanation to calm [her] down." Autocorrection can also lead to social embarrassment, as Participant 11 said, "due to autocorrection [I] typed a slang [word] instead of a person's surname. This was on a WhatsApp group chat. Later people mention this personally and I was so embarrassed," while Participant 13 said, "while messaging in a family group, autocorrection changed my wishes from 'dear' to 'dead'." Participants 2, 8, and 14 indicated that they had sent a professional email to their employer and autocorrect changed some words to common slang terms. Participant 2 indicated that he sent his boss a curse word by accident because he used his phone to send an email. On the positive side, autocorrection can also lead to unexpectedly pleasant outcomes, including for Participant 4, who indicated that his friend got married because an autocorrection changed "have" to "love," in a situation where the recipient apparently was already in love with his friend.
180
+
181
+ ![01963e62-b6c6-7651-b2b9-3e7d2cf9c4b0_5_926_253_738_1620_0.jpg](images/01963e62-b6c6-7651-b2b9-3e7d2cf9c4b0_5_926_253_738_1620_0.jpg)
182
+
183
+ Figure 7: Average a) effort, b) physical demand, and c) mental demand for each condition for each survey prompt starting from the initial baseline prompt.
184
+
185
+ ![01963e62-b6c6-7651-b2b9-3e7d2cf9c4b0_6_149_529_732_1074_0.jpg](images/01963e62-b6c6-7651-b2b9-3e7d2cf9c4b0_6_149_529_732_1074_0.jpg)
186
+
187
+ Figure 8: Average a) frustration, b) mental demand, and c) words per minute for each condition for each survey prompt, starting from the initial baseline prompt.
188
+
189
+ We also asked how autocorrection makes participants feel. Some expressed positive emotions such as good, happy, confident, comfortable, easy, safe, satisfied, less stressed, and "makes life easier." Yet, others mentioned negative emotions such as frustrated, irritated, aggravated, bothered, annoyed, lazy, and unsatisfied. Some were neutral and indifferent. Participants 11 and 18 mentioned that autocorrection "weirded them out" because autocorrect can present sensitive data, such as passwords or names, which should not be stored, or personal suggestions that they do not recall typing into their phone.
190
+
191
+ Additionally, we asked participants about the types of words that are hardest to correct after an incorrect autocorrect. Participants mentioned errors due to grammar, especially tenses, mistakes in longer-than-average, complex, or new words, and surnames. Many discussed mistakes due to a forgotten space, where Participant 1 talked about an unfortunate autocorrect that happened "when I pressed b instead of the space bar." Four participants indicated that mistakes at the beginning of a word are usually the hardest to autocorrect. Many mentioned mistakes that occur when they use multiple languages on the keyboard.
192
+
193
+ Finally, we asked participants for their design recommendations. Participant 3 said that system designers should "make it slightly more hidden and less distracting," while Participant 8 said, "I think if we made mistakes on typing there [should be] a sound like [an] alarm, it will be useful" and suggested that "Highlighting [the] background of suggestion[s]" might be helpful.
194
+
195
+ Many participants mentioned that they would prefer a button on the keyboard to quickly toggle autocorrection in a single click, instead of having to go into the settings dialog. Participant 18 added, "I think you should have a confidence score on the side of the screen so users could feel comfortable turning it off at times." Participants 5 and 19 indicated that they want to see synonyms, one of whom suggested, "Maybe keep the drop-down option or even add it to the screen while typing with various spellings or adding an option to [show] a meaning or similar words [thesaurus option]." Participant 10 suggested allowing the deletion of standard dictionary words: "I have no idea what a 'wyeth' is, but it's in my Android dictionary and can't be deleted." Others suggested sentence completion using artificial intelligence. Finally, Participant 9 expressed a desire for an option for autocorrections based on their location, as people communicate differently in different geographical locations.
196
+
197
+ After they completed the task, we asked participants about their text entry behavior during our tasks. A majority (70%) indicated that they typed as fast as possible, while 30% reported that they were as careful as possible. All participants entered text using (the thumbs or fingers of) both hands.
198
+
199
+ #### 5.0.4 Observations During Text Entry
200
+
201
+ We reviewed the videos from the experiment to further understand user behaviors. We saw that expressions of frustration were much more frequent in conditions where more autocorrection failures occurred. However, participants were less expressive about their frustration in the conditions with 10% and 20% failures, compared to no autocorrection. Participants who experienced no autocorrection freely expressed their frustration and let us know about their feelings. We also found that our experiment was quite sensitive to user behaviors. For instance, we identified two spikes in the reported frustration of a participant in the optimal condition. Going back to the videos, we observed that they had said "the word 'distraction' is really hard to type" and, in the other instance, that typing the word "rectangular" was time-consuming for them. Another participant in the optimal condition said, "I am not sure; I am confused about the autocorrection. I want to go back and fix a mistake, but it is fixed for me [pause], which is good by the way." Yet another optimal-condition participant mentioned that they did not know if the autocorrection was on, but their frustration level was low for the whole session. Two participants experiencing the condition with 10% autocorrection failures said that the "autocorrection feature here [is] very similar to what I have in my phone." At the beginning of the study, we also observed that the majority of our participants did not know how to turn their autocorrection off, i.e., we had to help them turn it off. This did not apply to those who used custom keyboards.
202
+
203
+ ## 6 Discussion
204
+
205
+ We see some evidence that perfect autocorrect is better than the other autocorrection alternatives. Furthermore, autocorrect that fails 10% of the time is by some measures better than 20% failures, which in turn is generally better than no autocorrect. Overall, as Figure 6 illustrates, lowering the percentage of autocorrect failures reduces frustration.
206
+
207
+ We observed that using autocorrection significantly increased typing speed compared to not using it, with the optimal option being the fastest. However, we did not find a statistically significant difference between the 10% and 20% failing options in terms of typing speed. When we compared typing speed over time for each condition, we noticed that participants' speed increased during the experiment, while without autocorrection it initially increased but then flattened out, see Figure 8.
208
+
209
+ Participants spent the least time verifying the phrases in the optimal and the 10% failing autocorrection conditions, see Figure 4. This matches our finding that these two conditions did not differ significantly in terms of mental demand and effort, see Figure 5.
210
+
211
+ The significantly higher number of keystrokes per character without autocorrection provides supporting evidence that the condition without autocorrection significantly decreased participants' typing speed compared to all other autocorrection options, see Figure 4. This also matches results from previous work, e.g., [5].
212
+
213
+ No autocorrection and autocorrection with 20% failures stood out as the most frustrating conditions. There is also a chance that the frustration stems from the participants' frustration with themselves for making errors, instead of frustration with the autocorrection itself. A participant noticed an autocorrection error and said, "I am [a] very bad typer, I never fix my mistakes [pause], maybe it is just me." This may be because frustration can also lead users to believe that they are failing at a task [7]. This raises the question of how small the percentage of autocorrection failures needs to be to remain acceptable, which is an interesting avenue for future, quantitative studies.
214
+
215
+ Despite occasional failures, participants felt that they experienced less mental demand with autocorrection, regardless of its accuracy, see Figure 5. In the post-session interview, we asked them about their behaviors and perceptions around autocorrection. Most of them said, in one way or another, that they accepted that autocorrection fails occasionally. As previous research has identified, lowered expectations can make emotional responses to subsequent failures less intense [44].
216
+
217
+ Physical demand increased significantly with decreasing autocorrection accuracy, see Figure 5, since more frequent mistakes require more editing, which increases physical demand. Not unexpectedly, physical demand peaked with no autocorrection. Mental and physical demand, as well as frustration, all exhibit similar patterns, with the optimal condition being the least demanding and no autocorrection being the most demanding condition, see Figure 7.
218
+
219
+ Participants felt that they needed to spend less effort to complete the task in the optimal condition, see Figure 5. Interestingly, and in contrast to the other conditions, effort in the optimal condition generally decreased over time, see Figure 7.
220
+
221
+ Participants indicated that autocorrection is overall a useful feature when used sensibly. However, they also felt that it can sometimes change the meaning of a sentence entirely if they do not pay sufficient attention. As mentioned above, when we asked for stories about positive or negative episodes that participants had experienced with autocorrection, participants said that autocorrect sometimes produces hilarious mistakes, such as "my friend was complaining about autocorrect in a text and it was changed to 'auto cucumber'." However, some indicated that autocorrection can lead to serious mistakes and social embarrassment (see Section 5.0.3). Thus, participants said that in certain scenarios, e.g., sending professional emails or texting parents, they have to verify the text a couple of times and be more cautious. With advances in algorithms and personalization, users are sometimes also exposed to side effects, where the system saves sensitive data that is then shown at inappropriate times, such as specific words that they use only in contexts unrelated to the current text message (e.g., passwords). Some participants mentioned that autocorrection "weirded them out" and that they were concerned about potential privacy issues.
222
+
223
+ Our participants generally indicated that the autocorrect mistakes that are hardest to correct are the ones that happen at the beginning of a sentence, likely because it takes longer to navigate to such positions in the text. There is substantial research on how to facilitate error correction, and many keyboards provide advanced techniques to tackle such issues, e.g., WiseType [1] or other work [2, 43]. However, most of these techniques have not yet been adopted in the built-in keyboards of most smartphones. Many participants indicated that mistakes occur when they use multiple languages on the keyboard, which was fairly prevalent in our participant pool. There is thus a need to reconsider how multiple dictionaries should be handled, as well as better language detection methods, within a keyboard's implementation. Also, our participants emphasized that they would like to see keyboards with a built-in grammar checker. Grammar checkers were not yet widely available on commercial mobile keyboards at the time of our work, but recent work found that adding a grammar checker helps improve text entry speed and accuracy [1].
224
+
225
+ Participants were split about how they prefer visual feedback for autocorrects that occurred in the text. Some wished to have slightly more hidden and less distracting feedback, while others wanted highlighting and more obvious feedback for autocorrects. This indicates the importance of giving users the ability to change the visualization settings for autocorrection instances, not just the option to turn it on/off.
226
+
227
+ Participants made several interesting design recommendations. Many indicated that they would prefer a button on the keyboard to quickly toggle autocorrection in a single step, instead of having to go into the settings dialog, see Figure 9. Some existing virtual keyboards have an option to turn off autocorrection. However, this always requires multiple interaction steps through settings dialogs and similar mechanisms. The majority of our participants indicated that they did not know how to turn autocorrection off and on. One mentioned the idea of having a confidence score on the side of the screen. Others indicated that they want to see synonyms as drop-down options, similar to some desktop text processing systems. Another recommendation is to have an option for autocorrections based on the current location, because people communicate differently in different areas, i.e., the requirement for correctness is typically higher at work. Many participants said that they did not know how to delete words from dictionaries, which demonstrates that there are more opportunities to improve the interaction with the dictionary supporting autocorrect (for more details see Section 5.0.3).
228
+
229
+ ![01963e62-b6c6-7651-b2b9-3e7d2cf9c4b0_8_216_152_588_752_0.jpg](images/01963e62-b6c6-7651-b2b9-3e7d2cf9c4b0_8_216_152_588_752_0.jpg)
230
+
231
+ Figure 9: A design recommendation from our participants for adding a button on the keyboard to quickly toggle autocorrection.
232
+
233
+ Even though we collected data from only five participants per condition, the significant differences in our results exhibit large or (at least) medium effect sizes, which we see as an indication that our results are unlikely to be spurious. Also, we point out that Kapoor et al.'s research on automatic prediction of frustration in an intelligent system relied similarly on only four participants per condition [29].
234
+
235
+ A potential limitation of our work is that our autocorrection implementation might have produced different outcomes than system-generated predictions, which are typically based on machine-learning approaches [51]. Yet, as autocorrect works differently on different platforms, we could not identify a simple way to perfectly match the behavior that users are used to across platforms, while still giving our software access to uncorrected input and/or allowing us to implement an optimal autocorrect condition. Also, two participants stated of their own volition, i.e., without prompting or questions from our side, that they perceived our 10% failing autocorrect implementation to closely match the one on their current smartphone. One reason for this is that many users use smartphone models that are a few years old, which means that their experience with autocorrect also lags behind the state of the art, especially on the Android platform, which many participants used. Thus, we believe that, at the time our study was performed, our implementation was ecologically valid for the study.
236
+
237
+ Additionally, we used our own implementation because we wanted to tightly control the percentage of autocorrect failures and to explore the best-case scenario with a "perfect" autocorrect condition, which is similar to the "100% accurate" autocorrect condition in [5] and closely resembles an oracle. Even with the use of advanced predictive autocorrection algorithms, it would be impossible to guarantee that a given number of failures would occur, especially since we cannot predict when or how the user enters any misspelled word. After all, wrong autocorrections can be due to participants entering unrecognized words with (potentially compounding) issues, such as spelling the word wrong, using the wrong touch locations, and/or missing a space. Interestingly, powerful autocorrect algorithms that predict corrections based on words, sentences, and user history can fail as well. Some of the participants with the newest phones indicated that the quick adaptability of these newer methods can create issues for them, such as the system memorizing slang or curse words and then ranking them highly, in situations where participants do not want the system to utilize such content for autocorrection.
238
+
239
+ ## 7 CONCLUSION
240
+
241
+ We assessed the effect of autocorrection failures on the user's mental and physical demand, performance, and effort during typing tasks using self-report measures, a think-aloud protocol, and interviews. We showed that the higher the frequency of autocorrection failures, the more likely participants are to become frustrated. We then listed several design recommendations for giving users the ability to temporarily adjust the behavior of autocorrection.
242
+
243
+ In the future, we will conduct a study to explore the effect of methods that are designed to ease users' frustration when autocorrection fails. We also want to identify behavioral patterns around user frustration and potentially conduct quantitative studies that pinpoint at which failure percentage the frustration associated with autocorrect disappears. Finally, we also plan to look further into how to better support autocorrection for bilingual users and the implications of autocorrect failures that occur when using multiple languages on a keyboard.
244
+
245
+ ## ACKNOWLEDGMENTS
246
+
247
+ We would like to thank our participants. This work was funded by King Saud University, to whom we are also grateful.
248
+
249
+ ## REFERENCES
250
+
251
+ [1] O. Alharbi, A. S. Arif, W. Stuerzlinger, M. D. Dunlop, and A. Komninos. WiseType: A tablet keyboard with color-coded visualization and various editing options for error correction. Proceedings of Graphics Interface, pp. 1-10, 2019. doi: 10.20380/GI2019.04
252
+
253
+ [2] A. S. Arif, S. Kim, W. Stuerzlinger, G. Lee, and A. Mazalek. Evaluation of a Smart-Restorable Backspace Technique to Facilitate Text Entry Error Correction. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 5151-5162, 2016. doi: 10.1145/2858036.2858407
254
+
255
+ [3] A. S. Arif and W. Stuerzlinger. Analysis of Text Entry Performance Metrics. Proceedings of the IEEE Toronto International Conference - Science and Technology for Humanity (TIC-STH '09), pp. 100-105, 2009. doi: 10.1109/TIC-STH.2009.5444533
256
+
257
+ [4] K. C. Arnold, K. Z. Gajos, and A. T. Kalai. On Suggesting Phrases vs. Predicting Words for Mobile Text Composition. Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16), pp. 603-608, 2016. doi: 10.1145/2984511.2984584
258
+
259
+ [5] N. Banovic, T. Sethapakdi, Y. Hari, A. K. Dey, and J. Mankoff. The Limits of Expert Text Entry Speed on Mobile Keyboards with Autocorrect. Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI 2019, pp. 1-12, 2019. doi: 10.1145/3338286.3340126
260
+
261
+ [6] X. Bi and S. Zhai. IJQwerty: What Difference Does One Key Change Make? Gesture Typing Keyboard Optimization Bounded by One Key Position Change from Qwerty. Proceedings of the Conference on Human Factors in Computing Systems (CHI '16), pp. 49-58, 2016. doi: 10.1145/2858036.2858421
262
+
263
+ [7] C. S. Carver and M. F. Scheier. Origins and Functions of Positive and Negative Affect: A Control-Process View. Psychological Review, pp. 19-35, 1990. doi: 10.1037/0033-295X.97.1.19
264
+
265
+ [8] I. Ceaparu, J. Lazar, K. Bessiere, J. Robinson, and B. Shneiderman. Determining Causes and Severity of End-user Frustration. International Journal of Human-Computer Interaction, pp. 333-356, 2004. doi: 10.1207/s15327590ijhc1703
266
+
267
+ [9] D. G. Cooper, I. Arroyo, B. P. Woolf, K. Muldner, W. Burleson, and R. Christopherson. Sensors Model Student Self Concept in the Classroom. International Conference on User Modeling, Adaptation, and Personalization, pp. 30-41, 2009. doi: 10.1007/978-3-642-02247-0
270
+
271
+ [10] W. Cui, S. Zhu, M. R. Zhang, H. A. Schwartz, J. O. Wobbrock, and X. Bi. JustCorrect: Intelligent post hoc text correction techniques on smartphones. Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (UIST '20), pp. 487-499, 2020.
272
+
273
+ [11] C. De Guzman, M. Chignell, J. Jiang, and L. Zucherman. Testing the Effects of Peak, End, and Linear Trend on Evaluations of Online Video Quality of Experience. Proceedings of the Human Factors and Ergonomics Society, pp. 813-817, 2017. doi: 10.1177/1541931213601696
274
+
275
+ [12] S. Droit-Volet, S. Monceau, M. Berthon, P. Trahanias, and M. Maniadakis. The explicit judgment of long durations of several minutes in everyday life: Conscious retrospective memory judgment and the role of affects? PLoS ONE, pp. 1-17, 2018. doi: 10.1371/journal.pone.0195397
276
+
277
+ [13] S. Droit-Volet, P. Trahanias, and M. Maniadakis. Passage of Time Judgments in Everyday Life are not Related to Duration Judgments Except for Long Duration of Several Minutes. Acta Psychologica, pp. 116-121, 2017. doi: 10.1016/j.actpsy.2016.12.010
278
+
279
+ [14] S. Droit-Volet and J. Wearden. Passage of Time Judgments Are Not Duration Judgments: Evidence from a Study Using Experience Sampling Methodology. Frontiers in Psychology, pp. 1-7, 2016. doi: 10.3389/fpsyg.2016.00176
280
+
281
+ [15] M. D. Dunlop and A. Crossan. Predictive Text Entry Methods for Mobile Phones. Personal Technologies, pp. 134-143, 2000. doi: 10.1007/BF01324120
282
+
283
+ [16] A. Fowler, K. Partridge, C. Chelba, X. Bi, T. Ouyang, and S. Zhai. Effects of Language Modeling and its Personalization on Touchscreen Typing Performance. Proceedings of the ACM CHI'15 Conference on Human Factors in Computing Systems, 1:649-658, 2015. doi: 10.1145/2702123.2702503
284
+
285
+ [17] V. A. Freedman, F. G. Conrad, J. C. Cornman, N. Schwarz, and F. P. Stafford. Does Time Fly When You are Having Fun? A Day Reconstruction Method Analysis. Journal of Happiness Studies, 2014. doi: 10.1007/s10902-013-9440-0
286
+
287
+ [18] K. Gelbrich. Anger, Frustration, and Helplessness After Service Failure: Coping Strategies and Effective Informational Support. Journal of the Academy of Marketing Science, pp. 567-585, 2010. doi: 10.1007/s11747-009-0169-6
288
+
289
+ [19] M. Goel, A. Jansen, T. Mandel, S. Patel, and J. Wobbrock. ContextType: Using Hand Posture Information to Improve Mobile Touch Screen Text Entry. Proceedings of CHI 2013, pp. 2795-2798, 2013. doi: 10.1145/2470654.2481386
290
+
291
+ [20] R. E. Goldsmith, B. A. Lafferty, and S. J. Newell. The Impact of Corporate Credibility and Celebrity Credibility on Consumer Reaction to Advertisements and Brands. Journal of Advertising, pp. 43-54, 2000. doi: 10.1080/00913367.2000.10673616
292
+
293
+ [21] A. Gunawardana, T. Paek, and C. Meek. Usability Guided Key-Target Resizing for Soft Keyboards. Proceedings of the 15th International Conference on Intelligent User Interfaces (IUI '10), pp. 111-118, 2010. doi: 10.1145/1719970.1719986
294
+
295
+ [22] N. Harrington. The frustration discomfort scale: Development and psychometric properties. Clinical Psychology & Psychotherapy: An International Journal of Theory & Practice, pp. 374-387, 2005. doi: 10.1002/cpp.465
296
+
297
+ [23] S. G. Hart and L. E. Staveland. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. Advances in Psychology, pp. 139-183, 1988. doi: 10.1016/S0166-4115(08)62386-9
298
+
299
+ [24] T. K. Hoppmann. Examining the 'Point of Frustration'. The Think-aloud Method Applied to Online Search Tasks. Quality and Quantity, pp. 211-224, 2009. doi: 10.1007/s11135-007-9116-0
300
+
301
+ [25] Y. Huang, T. Fei, M. P. Kwan, Y. Kang, J. Li, Y. Li, X. Li, and M. Bian. Gis-based emotional computing: A review of quantitative approaches to measure the emotion layer of human-environment relationships. ISPRS International Journal of Geo-Information, pp. 1-15, 2020. doi: 10.3390/ijgi9090551
302
+
303
+ [26] C. James and K. Reischel. Text Input for Mobile Devices: Comparing Model Prediction to Actual Performance. Proceedings of the 2001 CHI Conference on Human Factors in Computing Systems (CHI '01), pp. 365-371, 2001. doi: 10.1145/365024.365300
306
+
307
+ [27] A. Jameson and P. O. Kristensson. Understanding and supporting modality choices. In The Handbook of Multimodal-Multisensor Interfaces: Foundations, User Modeling, and Common Modality Combinations - Volume 1, pp. 201-238. ACM, 2017. doi: 10.1145/3015783.3015790
310
+
311
+ [28] J. P. Jokinen, S. Sarcar, A. Oulasvirta, C. Silpasuwanchai, Z. Wang, and X. Ren. Modelling Learning of New Keyboard Layouts. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems - (CHI '17), pp. 4203-4215, 2017. doi: 10.1145/3025453.3025580
312
+
313
+ [29] A. Kapoor, W. Burleson, and R. W. Picard. Automatic Prediction of Frustration. International Journal of Human-Computer Studies, pp. 724-736, 2007. doi: 10.1016/j.ijhcs.2007.02.003
314
+
315
+ [30] J. Klein, Y. Moon, and R. W. Picard. This Computer Responds to User Frustration. Extended abstracts on Human factors in computing systems (CHI '99), pp. 119-140, 1999. doi: 10.1145/632716.632866
316
+
317
+ [31] H. H. Koester and S. P. Levine. Effect of a Word Prediction Feature on User Performance. Augmentative and Alternative Communication (AAC), pp. 155-168, 1996. doi: 10.1080/07434619612331277608
318
+
319
+ [32] J.-H. Lee and C. Spence. Assessing the benefits of multimodal feedback on dual-task performance under demanding conditions. Proceedings of the 22nd British HCI Group Annual Conference on People and Computers: Culture, Creativity, Interaction-Volume 1, pp. 185-192, 2008. doi: 10.1145/1531514.1531540
320
+
321
+ [33] V. I. Levenshtein. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10(8):707-710, 1966.
322
+
323
+ [34] I. S. MacKenzie and R. W. Soukoreff. Phrase sets for evaluating text entry techniques. Extended Abstracts on Human Factors in Computing Systems (CHI'03), pp. 754-755, 2003. doi: 10.1145/765968.765971
324
+
325
+ [35] J. R. McColl-Kennedy and B. A. Sparks. Application of fairness theory to service failures and service recovery. Journal of Service Research, 5(3):251-266, 2003. doi: 10.1177/1094670502238918
326
+
327
+ [36] D. T. Nguyen and J. R. McColl-Kennedy. Diffusing Customer Anger in Service Recovery: A Conceptual Framework. Australasian Marketing Journal, 2003. doi: 10.1016/S1441-3582(03)70128-1
328
+
329
+ [37] J. Nielsen, T. Clemmensen, and C. Yssing. Getting Access to What Goes on in People's Heads? - Reflections on the Think-aloud Technique. Proceedings of the second Nordic conference on Human-computer interaction, pp. 101-110, 2002. doi: 10.1145/572020.572033
330
+
331
+ [38] K. Palin, A. M. Feit, S. Kim, P. O. Kristensson, and A. Oulasvirta. How Do People Type on Mobile Devices? Observations from a Study with 37,000 Volunteers. Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI 2019, pp. 1-12, 2019. doi: 10.1145/3338286.3340120
332
+
333
+ [39] P. Quinn and A. Cockburn. Loss Aversion and Preferences in Interaction. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI '18, pp. 1-48, 2018. doi: 10.1080/07370024.2018.1433040
334
+
335
+ [40] P. Quinn and S. Zhai. A Cost-Benefit Study of Text Entry Suggestion Interaction. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems - CHI '16, pp. 83-88, 2016. doi: 10.1145/2858036.2858305
336
+
337
+ [41] M. L. Richins. Measuring Emotions in the Consumption Experience. Journal of Consumer Research, 24(2):127-146, 1997. doi: 10.1086/209499
338
+
339
+ [42] G. M. Rose, M. L. Meuter, and J. M. Curran. On-line waiting: The role of download time and other important predictors on attitude toward e-retailers, 2005. doi: 10.1002/mar.20051
340
+
341
+ [43] S. Sindhwani, C. Lutteroth, and G. Weber. ReType: Quick Text Editing with Keyboard and Gaze. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI '19, pp. 1-13, 2019. doi: 10.1145/3290605.3300433
342
+
343
+ [44] M. R. Solomon, K. White, D. W. Dahl, J. L. Zaichkowsky, and R. Polegato. Consumer behavior: Buying, having, and being. Pearson Boston, MA, 2017.
344
+
345
+ [45] R. W. Soukoreff and I. S. MacKenzie. Metrics for text entry research: an evaluation of MSD and KSPC, and a new unified error metric. Proceedings of the Conference on Human Factors in Computing Systems - CHI '03, pp. 113-120, 2003. doi: 10.1145/642611.642632
348
+
349
+ [46] J. Tipples. Increased Frustration Predicts the Experience of Time Slowing-Down: Evidence from an Experience Sampling Study, 2018. doi: 10.1163/22134468-20181134
350
+
351
+ [47] E. Van Steenburg, N. Spears, and R. O. Fabrize. Point of Purchase or Point of Frustration? Consumer Frustration Tendencies and Response in a Retail Setting. Journal of Consumer Behaviour, pp. 1-12, 2013. doi: 10.1002/cb.1440
352
+
353
+ [48] K. Vertanen and P. O. Kristensson. A versatile dataset for text entry evaluations based on genuine mobile emails. Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services (MobileHCI '11), pp. 295-298, 2011. doi: 10.1145/2037373.2037418
354
+
355
+ [49] K. Vertanen and P. O. Kristensson. Complementing text entry evaluations with a composition task. ACM Transactions on Computer-Human Interaction, 21(2):1-33, 2014. doi: 10.1145/2555691
356
+
357
+ [50] C. C. Wu and Y. H. Lo. Customer Reactions to Encountering Consecutive Service Failures. Journal of Consumer Behaviour, pp. 217-224, 2012. doi: 10.1002/cb.1376
358
+
359
+ [51] X. Wu, Z. Liang, and J. Wang. FedMed: A Federated Learning Framework for Language Modeling. Sensors, 20(14):4048, 2020. doi: 10.3390/s20144048
360
+
361
+ [52] M. S. Young, K. A. Brookhuis, C. D. Wickens, and P. A. Hancock. State of science: Mental workload in ergonomics. Ergonomics, 58(1):1-17, 2015. doi: 10.1080/00140139.2014.956151
362
+
363
+ [53] M. Zhang, H. Wen, and J. O. Wobbrock. Type, then correct: Intelligent text correction techniques for mobile text entry using neural networks. Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST 2019), pp. 843-855, 2019. doi: 10.1145/3332165.3347924
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/dcbsb4qTmnt/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,239 @@
1
+ § AUTO-CUCUMBER: THE IMPACT OF AUTOCORRECTION FAILURES ON USERS’ FRUSTRATION
2
+
3
+ Ohoud Alharbi*
4
+
5
+ King Saud University
6
+
7
+ Wolfgang Stuerzlinger†
8
+
9
+ Simon Fraser University
10
+
11
+ § ABSTRACT
12
+
13
+ Many mobile users rely on autocorrection mechanisms during text entry on their smartphones. Previous studies investigated the effects of autocorrection on typing speed and accuracy, but did not explore the frustration and perceived mental workload often associated with it. Through a mixed-methods user study, we investigate the effect of autocorrection failures on users' frustration, perceived mental and physical demand, performance, and effort. We identified that perceived mental and physical demand, as well as frustration, are directly affected by autocorrection.
14
+
15
+ Index Terms: Human-centered computing—Interaction design and evaluation methods—Keyboards
16
+
17
+ § 1 INTRODUCTION
18
+
19
+ Empowered by the growth of text-based social media, many people prefer writing text messages or social media posts over making phone calls. To keep up with this growth, text entry methods have been improved with features that enable users to type as fast as possible and that correct typing errors as they go. Yet, being fast and accurate can be a challenge on touchscreen keyboards, due to various issues, including misspelling a word, using the wrong touch locations, missing a space, and compounded versions of these.
20
+
21
+ Still, a frustrating interaction with a computing device, resulting from typing errors or a wrong autocorrection, can cause users to experience negative emotions toward the system and to potentially abandon some of its functionality [30]. In that moment of frustration, users might not be aware of how much autocorrect has already improved and keeps improving with continuous use and upgrades to algorithms. To better understand the origins of current user reactions, this paper focuses on an analysis of the behaviors people exhibit in text entry with respect to autocorrect and its failures, and the associated costs in terms of perceived mental and physical demand, and user frustration.
22
+
23
+ Text entry research typically collects data to evaluate the speed and accuracy of a new interaction technique, such as Drag-n-Drop, Drag-n-Throw, and Magic Key [53]. Studies have examined the effect of keyboard layouts on typing behavior, e.g., [6, 19, 21, 28, 49], while other studies have investigated the time users spent interacting with autocorrections and the prediction panel while entering text, including when prediction and autocorrect approaches fail, e.g., [1, 2, 10]. However, there are no studies that investigate the effect of failing autocorrections on the user's emotions and their level of frustration. Yet, cognitive theory research has shown that system failures can activate negative emotions such as anger, annoyance, and frustration [35].
24
+
25
+ This paper presents a user study that investigates the effect of various degrees of failing autocorrection on the user's frustration and perceived mental workload. We analyze the results through metrics related to individual keystrokes, but also use qualitative methods, such as survey questions, observations, and interviews. After a discussion of related work, we present the results of our study (N = 20) on the effect of failing autocorrection on users' mental workload. Results show that perceived mental and physical demand, and frustration levels, are affected by autocorrection. There is a need to further investigate ways to give users the ability to temporarily adjust the behavior of autocorrection without turning this generally beneficial feature permanently off. Based on user feedback, we propose mechanisms such as adding a (single-step) button on the keyboard to quickly toggle autocorrection, or displaying a confidence score at the side of the screen.
26
+
27
+ § 2 RELATED WORK
28
+
29
+ Frustration can lead users to believe that they are failing a task [7]. Further, a frustrating interaction with a computing device can cause users to feel negatively toward the system and then encourage them to potentially turn off some aspects of its functionality, such as autocorrect [30]. If feelings of frustration are strong, they may even make a user abort or re-consider an action [46]. For instance, excessive download delays might have a negative impact on the brand perceived to be responsible for the delay [42]. Feelings of frustration are linked to the perceived duration of activities [8, 17]. There is much potential negative impact when users are frustrated and unable to respond to failures or give feedback [35].
30
+
31
+ Nevertheless, it is not always the case that negative emotions will increase as failures occur more frequently. While there will generally be a negative emotional response to failure, there may also be a lowering of expectations, which will tend to make emotional responses to subsequent failures less intense [36,44].
32
+
33
+ § 2.1 PREDICTIVE FEATURES
34
+
35
+ As errors contribute substantially to slow real-life text entry speed, facilitating error correction is a key challenge for text entry [26]. Errors are costly in time and effort, and can negatively affect user perception of text entry quality. Yet, the visibility of errors and suggestions for error correction can also increase both perception and interaction costs, which might even reduce text entry speed, e.g., [27, 32, 39, 40], and in some cases decrease writing accuracy [4]. Previous work has identified that word correction and completion features on mobile keyboards could save up to 45% of keystrokes [16], but this promise rarely results in a corresponding increase in typing speed [15].
36
+
37
+ If an appropriate language model is used, predictive algorithms can support effective error correction and completion [16]. However, many other factors play a role in the effectiveness of the use of predictive features [31], including the experience of the user [38]. To enable us to study the effect of failures in a systematic manner and how users experience such failures, we strategically caused the autocorrection to fail with controlled frequencies in our study.
38
+
39
+ § 2.2 FRUSTRATION AND MENTAL WORKLOAD ASSESSMENT
40
+
41
+ Workload is a term used to characterize the effort associated with a job and refers to the amount of work that needs to be performed ('the work'), usually within a fixed period of time ('the load'). Mental workload is the level of measurable mental effort put forth by an individual in response to one or more cognitive tasks [52]. We can assess mental workload using physiological or self-report measures.
42
+
43
+ *e-mail: omalharbi@ksu.edu.sa
44
+
45
+ ${}^{ \dagger }$ e-mail: w.s@sfu.ca
46
+
47
+ Physiological measures used to measure mental workload can include frustration, since these feelings are accompanied by physiological changes. Ceaparu et al. [8] measured the physiological response associated with workload by simulating frustrating experiences that someone might have when they play a game. At specific intervals the mouse would fail, leading to frustration. Yet, emotional experiences may be influenced by many factors such as individuals' memory, life history, culture, age, and gender [25]. More research is thus needed to identify how different physiological methods, e.g., skin conductance and heart rate variability, can be combined to develop more objective measures of frustration that are both effective and reliable. Still, we believe that physiological measures are currently not yet reliable enough to be used as a main measure of frustration.
48
+
49
+ Alternatively, self-reports are a subjective assessment that rate perceived workload to assess a task, system, or other aspects of performance. With this approach, researchers ask participants to rate their response after an intervention or interruption.
50
+
51
+ To compare self-reports with physiological measures, Cooper et al. [9] evaluated four sensors in terms of utility for frustration research: a camera focused on the participant's face, a skin conductance bracelet, a pressure-sensitive mouse, and a chair seat capable of detecting posture. Participants were presented with questions such as "how [interested/excited/confident/frustrated] do you feel right now?" and rated their current state on a scale of 1 to 5. The authors found that the most accurate results came from the self-reported assessment.
52
+
53
+ Further, the NASA TLX is a popular and well-validated self-report questionnaire to measure the experienced workload and was initially developed to measure workload in the military [23]. It has been applied in a variety of settings in human-computer interaction research [11]. The NASA TLX combines six scales, including mental demand, physical demand, effort, and frustration.
54
+
55
+ Frustration is an important component of mental workload. Many researchers developed questionnaires to specifically measure this emotion. Ceaparu et al. [8] forced a frustrating situation and asked participants to subjectively report on each frustrating experience, once it occurred during the session. Van Steenburg et al. [47] and Gelbrich [18] developed questionnaires that measure frustration in an imagined frustrating situation. Goldsmith et al. [20] developed an online questionnaire including scales that measure attitude and frustration tolerance [22]. Richins [41] used a method based on ratings of seven frustration-related adjectives (frustrated, uncomfortable, anxious, stressed, strained, annoyed, and awkward). Similarly, Wu and Lo [50] developed ten items aimed at measuring how a telecommunications service is performing relative to customer expectations. Droit-Volet and Wearden [14] measured the mood of participants throughout the day using an experience-sampling method or short survey.
56
+
57
+ The approach of repeatedly measuring mood states has been used in a number of further studies [12-14]. Since repeatedly using self-report measures is a standard method in human-computer interaction research, we decided to adopt this approach by repeatedly measuring workload and frustration states through a short survey based on the NASA TLX questions.
58
+
59
+ Finally, the complementary combination of self-reported measures together with qualitative analysis can yield an even better representation of a user's mental state [24]. For instance, an exploratory study [24] employed questionnaires, think-aloud protocols, and in-depth interviews to determine the primary points of critique and satisfaction with the information provided on a website, by examining the properties of the website, the search process, and the mood alterations of the participants in combination. Using the think-aloud method often provides good explanations of the users' thought processes and reveals changes of mood [37].
60
+
61
+ [Figure 1 content: the task prompt "Copy the phrase below, while ignoring capitalization", a presented phrase (e.g., "i would suggest you and mark address this together"), a "Next" button with a progress counter, and an error-rate display.]
68
+
69
+ Figure 1: The webpage that participants saw during the experiment.
70
+
71
+ § 2.2.1 MOTIVATION AND EXPERIMENTAL APPROACH
72
+
73
+ Our work aims to highlight the potential side-effects of "smart" techniques that are automatically applied, such as autocorrection. We investigate the effect of failing autocorrections on the user's level of frustration and perceived mental workload. Based on the above review of methods to measure frustration and mental workload, we decided to combine different methods to arrive at a more complete picture of the outcome. Following previous work [24], we combine self-report questions with qualitative protocols, more specifically think-aloud and interviews, to better understand the reactions of our participants. According to previous work, this approach currently still yields a better representation of users' mental states than using physiological measures [24]. We also follow Ceaparu et al.'s [8] approach by forcing a frustrating event (autocorrection failures) and asking participants to subjectively report on their experience during the session.
74
+
75
+ § 3 APPARATUS
76
+
77
+ We used a web application for data collection. We implemented the system using HTML, CSS, JavaScript, and PHP. We then used Amazon Web Services (AWS) to host our web application. The application includes a custom autocorrection method that works independently of the various operating system implementations. The system presents prompts with text for the user to enter and logs all occurring events at the keystroke level (Figure 1).
78
+
79
+ § 3.1 INSTRUCTIONS
80
+
81
+ Participants initially needed to acknowledge that they had read the instructions and to also give their consent for data collection. These initial instructions asked participants to temporarily disable the predictive features on their phones. Once participants agreed to participate, they were instructed on the procedure and then started the English language text entry tasks. The main part of the experiment showed only a single line of instruction, a presented phrase, and a textbox to input that phrase, see Figure 1, as well as the user's own keyboard, which they used for text entry. We asked participants to use their own device and their own keyboard layout, because we wanted to eliminate the associated learning factor and any potential influence of such learning on their frustration. Users needed to tap on the "Next" button to move to the next phrase, where they then also saw an up-to-date average for their text entry speed and error rate. In between blocks of 5 phrases, participants were presented with questions about how much mental demand/physical demand/effort/frustration they felt at that moment, rated on a scale of 1 to 7, see Figure 2. We purposely removed the questions regarding temporal demand and performance from the NASA TLX, since in our instructions we asked participants to type as fast as possible and to maintain a low error rate. These questions appeared before the task and were then shown each time after the users had entered 5 phrases. Following previous work, we asked the users to answer the questions repeatedly to better understand the contingencies of their behavior [12-14]. We used transcription typing to measure participants' typing speed, as this approach enables us to study motor performance while excluding cognitive aspects related to the process of text generation [38].
82
+
83
+
84
+
85
+ Figure 2: Our short survey to probe frustration, effort, and mental and physical demand.
86
+
87
+ § 3.2 CUSTOM AUTOCORRECTION
88
+
89
+ To ensure that we could correctly log every text entry action, we asked participants to disable their own predictive system, including their prediction panel and autocorrection. Another reason for this decision was that we needed to manipulate some internals of the autocorrection mechanisms in our study, something that current system APIs do not permit. We thus used a custom autocorrection algorithm that gets triggered when an inputted word does not match the word in the presented text.
90
+
91
+ For autocorrection we exposed participants to four different conditions: optimal, failures 10% of the time, failures 20% of the time, and no autocorrection. In the optimal condition, if the misspelled word is close enough to the intended one, our system autocorrects it to match the presented word. This condition always produces perfect autocorrections, which is similar to the "100% accurate" autocorrect condition in [5]. This condition closely resembles an oracle.
92
+
93
+ For autocorrection that fails 10% (20%) of the time, we adjust the system to produce a correct autocorrection 90% (80%) of the time (using the optimal method), but produce only a "close-enough" result in the remaining 10% (20%) of the time. To create such an almost-correct result, our implementation searches for similar words using the Levenshtein distance [33] and then chooses the one with the lowest editing distance, i.e., a word that looks like a plausible autocorrect. We used a dictionary with the 40,000 most frequent words from Project Gutenberg¹. We verified that our prediction algorithm matches commercial systems reasonably well. For this, we randomly chose phrases and compared the output of our system with that of an Android 9 keyboard using the same input text. We found that the output matches 94% of the time, which is reasonably high and likely at a level that is not easily perceived to be different by naive users.
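To make the mechanism concrete, here is a minimal sketch of such a failure-injecting autocorrector, assuming a plain word-list dictionary and uniform random failure injection; the function and variable names are ours, not the authors' (whose system was written in JavaScript/PHP):

```python
import random

def levenshtein(a: str, b: str) -> int:
    """Edit distance via dynamic programming [33]."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete from a
                           cur[j - 1] + 1,              # insert into a
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

def autocorrect(typed: str, intended: str, dictionary: list,
                fail_rate: float = 0.10) -> str:
    """Return the intended word (the "optimal" correction), except that with
    probability fail_rate a plausible failure is produced instead: the
    dictionary word closest to the typed input, other than the intended one.
    Assumes the dictionary contains at least one word besides `intended`."""
    if typed == intended or random.random() >= fail_rate:
        return intended
    candidates = (w for w in dictionary if w != intended)
    return min(candidates, key=lambda w: levenshtein(typed, w))
```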
94
+
95
+ § 3.3 DATA LOGGING
96
+
97
+ Through our web-based system, we recorded each text change or touch event, which fairly closely corresponds to the keystroke logging level, with a corresponding timestamp. For each phrase, we recorded the following data: device orientation (portrait/landscape), presented text, typed text, the complete input stream, keystrokes per character, words per minute, and total time per phrase. Moreover, we also logged all autocorrections, cursor movements, and error messages that were triggered during text entry. This comprehensive logging enables us to fully replay the input of each phrase.
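For concreteness, a single logged phrase might look like the record below. This is a hypothetical shape with field names of our choosing, since the paper lists the logged quantities but not its exact schema; the numeric values are placeholders:

```python
# One per-phrase log record (hypothetical schema; values are placeholders).
log_record = {
    "orientation": "portrait",       # portrait / landscape
    "presented": "i would suggest you and mark address this together",
    "transcribed": "i would suggest you and mark address this together",
    "input_stream": [                # one entry per text-change or touch event
        {"t_ms": 0,    "event": "keystroke",   "text_after": "i"},
        {"t_ms": 4210, "event": "autocorrect", "from": "suggset", "to": "suggest"},
        # ... every event is kept, so the phrase's input can be fully replayed
    ],
    "kspc": 1.08,                    # keystrokes per character
    "wpm": 38.2,                     # words per minute
    "total_time_ms": 10480,
}
```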
98
+
99
+ § 3.4 PHRASE SET
100
+
101
+ We used 30 phrases randomly selected from the Enron mobile email phrase set [48]. We removed all non-alphabetic characters, including punctuation, and made sure that the selected phrases contained at least three words. We decided to exclude non-alphabetic characters and punctuation in the study, as such characters introduce a potential confounding source of variation in the dependent measures and threaten internal validity [34]. The phrases in the set (774 sentences) were generally of short to medium length, averaging 6.1 words (SD 1.68, ranging from 3 to 12), and contained on average 29.9 characters (SD 10.13, ranging from 14 to 67).
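A sketch of this filtering step, under the assumption that phrases are lower-cased before cleaning (the helper name is ours):

```python
import re
from typing import Optional

def eligible(phrase: str) -> Optional[str]:
    """Drop non-alphabetic characters (incl. punctuation) and keep the
    phrase only if at least three words remain; returns the cleaned phrase."""
    cleaned = re.sub(r"[^a-z ]", "", phrase.lower()).strip()
    cleaned = re.sub(r" +", " ", cleaned)   # collapse runs of spaces
    return cleaned if len(cleaned.split()) >= 3 else None
```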
102
+
103
+ § 4 USER STUDY
104
+
105
+ The purpose of this study was to compare 4 conditions of autocorrection (optimal, failing 10%, failing 20%, and none) and to measure the associated perceived mental and physical workload of the user. Previous work identified that the largest error rate at which typists would attempt to type before autocorrect corrects errors ranges between approximately 15% and 25% [5]. In our pilot studies, we initially experimented with conditions that exaggerated the number of failures (up to 40% failures on autocorrects). Yet, we observed that high error-rate conditions (larger than 25%) were extremely confusing for participants. Thus, we decided to exclude such conditions from our main study and to examine only the 10% and 20% options. With similar conditions, we also ran a pilot study with a within-subject design and found indications for a substantial carryover effect that influenced participants' answers, based on the sequence in which the conditions appeared.
106
+
107
+ § 4.1 DESIGN
108
+
109
+ We used a between-subjects design. Each participant entered 30 phrases with one of the 4 conditions (no, 20% failing, 10% failing, and optimal autocorrection), excluding two practice phrases. In total we collected (20 participants × 30 phrases) = 600 phrases.
110
+
111
+ § 4.2 PROCEDURE
112
+
113
+ Before starting this study, participants were asked to complete a background questionnaire about their age, gender, English proficiency, and their experience with their current touchscreen device keyboard, including what they thought about the performance of their current autocorrection system. We also gave them a full demonstration of our system and let them experience text entry using it for entering a few training phrases (using the chosen condition for that participant, i.e., if their assigned condition was optimal auto-correction, they experienced this already in the training). During the study, participants were asked to enter 30 English phrases using our system and to answer questions about how much mental and physical demand, effort, and frustration they felt at the moment, see Figure 2. Each participant answered the questions seven times, once before the typing task started and the remaining six times after entering each block of five phrases. Additionally, we asked them to use the think-aloud method, which we explained to them during the training phase.
114
+
115
+ ¹ https://en.wiktionary.org/wiki/Wiktionary:Frequency_lists
116
+
117
+ At the end of the session, we conducted a semi-structured interview targeting behaviors we had observed or comments users had made during the text entry sessions. Further, we also asked participants about their own stories around autocorrection, i.e., positive or negative episodes that they had encountered in the past. We also asked them about how they believed that autocorrection influenced their typing speed and correctness, and how autocorrection made them feel. Other questions inquired about the type of words that they find hardest to get correct with current autocorrect systems and finally if they had any design recommendations around autocorrection.
118
+
119
+ Including signing consent forms, filling questionnaires, the main typing tasks, and the interview, the session lasted about 45 minutes on average. We used two cameras and tripods, as well as voice recording to assist observation. Figure 3 shows the setting of the experiment. One camera was directed at the mobile screen and the second at the participants' face to record their expressions. The user study was approved by the research ethics board of the local university.
120
+
121
+ § 4.3 PARTICIPANTS
122
+
123
+ We recruited twenty participants (10 females, 10 males) for the study through advertising to a student participant pool at Simon Fraser University. Of these participants, 14 were between 18 and 24 years old and 6 between 25 and 34. Half of the participants indicated that they use a mobile keyboard with Latin characters, i.e., the modern English alphabet, constantly during the day, 30% more than once per hour, and 20% more than once a day.
124
+
125
+ Even though our task did not require high English proficiency, we created a quick English quiz using material from http://iteslj.org towards an objective assessment of English skills. The "overall success rate" was the final score participants achieved in our language proficiency quiz that consisted of six grammar questions: two easy, two medium, and two hard. Results show that the success rate for the overall English proficiency quiz was 92% (SD = 13), which corresponds to reasonably high English proficiency, as is to be expected for a university environment. Given this level of proficiency, we did not follow up on this data.
126
+
127
+ Among our participants, 65% used Android or variants (Samsung, OxygenOS, etc.), while 35% used Apple iOS. Most (90%) indicated that they normally have autocorrection activated on their devices. When we asked them to rate predictive features in their mobile devices on a 5-point Likert scale (very good, good, acceptable, poor, and very poor), 5% chose very good, 55% indicated good, 35% acceptable, and 5% very poor.
128
+
129
+ § 5 RESULTS
130
+
131
+ We used one-way ANOVA with alpha of 0.05 for all analyses. A Shapiro-Wilk test identified that the assumption of a normal distribution was satisfied, and all other preconditions of ANOVA were also met. We used Tukey's Honest Significant Difference (HSD) test for post-hoc analyses. To characterize effect sizes we used the partial eta squared measure.
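As an illustration of this analysis pipeline (with synthetic stand-in data, not the paper's measurements), a sketch in Python using scipy and statsmodels; group sizes are chosen so the degrees of freedom match the reported F(3, 136):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Synthetic scores: 35 observations per condition (4 x 35 = 140 total).
groups = {
    "none":    rng.normal(30, 5, 35),
    "fail20":  rng.normal(33, 5, 35),
    "fail10":  rng.normal(35, 5, 35),
    "optimal": rng.normal(37, 5, 35),
}

for name, g in groups.items():            # normality precondition
    w, p_norm = stats.shapiro(g)
    print(name, "Shapiro-Wilk p =", p_norm)

f, p = stats.f_oneway(*groups.values())   # one-way ANOVA, alpha = .05

# Partial eta squared from F and its degrees of freedom:
#   eta_p^2 = F * df_between / (F * df_between + df_within)
df_b = len(groups) - 1
df_w = sum(map(len, groups.values())) - len(groups)
eta_p2 = f * df_b / (f * df_b + df_w)
print(f"F({df_b},{df_w}) = {f:.3f}, p = {p:.4f}, eta_p2 = {eta_p2:.2f}")

# Tukey HSD post-hoc pairwise comparisons.
scores = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups), [len(g) for g in groups.values()])
print(pairwise_tukeyhsd(scores, labels, alpha=0.05))
```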
132
+
133
+
134
+
135
+ Figure 3: The experiment setting.
136
+
137
+ § 5.0.1 PERFORMANCE
138
+
139
+ In line with common text entry study protocols, we used the words per minute (WPM) metric to measure entry speed [3, 45]. Time was measured from the first keystroke to the last. We observed a statistically significant effect on entry speeds for the four conditions, $F(3, 136) = 3.491$, $p < .018$, with optimal being the fastest option, with a medium effect size $\eta_p^2 = .07$, see Figure 4.
140
+
141
+ We also measured the verification time, i.e., the "reviewing time", which is the time participants took to review a phrase before moving to the next. For this, we measured the time from the last keystroke until the time participants pressed the "next" button. Verification times were statistically significantly different, $F(3, 136) = 3.51$, $p = .04$, with a large effect size $\eta_p^2 = .4$. Optimal and 10% autocorrection required less verification time, see Figure 4.
142
+
143
+ The difference in terms of the number of keystrokes per character (KSPC) [3, 45] for each condition was statistically significant, $F(3, 136) = 4.97$, $p = .013$, with a large effect size $\eta_p^2 = .48$. No autocorrection had a higher KSPC, as shown in Figure 4, corresponding to more keystrokes spent on error correction.
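Both metrics reduce to one-line computations given the logged data; a sketch under the standard definitions [3, 45], with function names of our choosing:

```python
def wpm(transcribed: str, seconds: float) -> float:
    """Words per minute: (|T| - 1) characters over the entry time,
    scaled to minutes, with five characters per nominal word [3, 45]."""
    return (len(transcribed) - 1) / seconds * 60 / 5

def kspc(keystrokes: int, transcribed: str) -> float:
    """Keystrokes per character: total keystrokes in the input stream
    divided by the length of the final transcribed text [45]."""
    return keystrokes / len(transcribed)
```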
144
+
145
+ We analyzed the average Error Rate (ER) of the final submitted text, and found it was not significantly different across conditions, $F(3, 136) = 2.256$, $p = .085$.
146
+
147
+ We further investigated the use of error correction methods, such as the number of backspaces and cursor movements. We found the use of backspaces to be statistically significant, $F(3, 136) = 5.39$, $p = .009$, with a large effect size $\eta_p^2 = .5$, but the use of cursor movements was not significant, $F(3, 136) = 2.36$, $p = .11$. Optimal and 10% autocorrection required fewer backspaces.
148
+
149
+ The average rate of autocorrection events that occurred due to participants making typing errors was M = 12.60% (SD = 18.29) for the 20% failing condition, M = 10.51% (SD = 5.47) for the 10% failing condition, and M = 17.99% (SD = 14.70) for the optimal condition. Of those recorded events, the average percentage of forced failures, i.e., where the system simulated a failure, was 19.66% (SD = 14.44) for the 20% failing condition, 7.5% (SD = 5.01) for the 10% failing condition, and 0% for the optimal condition.
150
+
151
+ § 5.0.2 THE NASA TASK LOAD INDEX
152
+
153
+ We observed a statistically significant effect on frustration, as measured by the corresponding question from the NASA TLX, $F(3, 136) = 12.686$, $p < .001$, with a large effect size $\eta_p^2 = .22$. Optimal stood out by being the least frustrating. There was also a statistically significant effect on mental demand across conditions, $F(3, 136) = 15.361$, $p < .001$, with a large effect size $\eta_p^2 = .25$. No autocorrection was significantly more mentally demanding. Additionally, we observed a statistically significant effect on physical demand across conditions, $F(3, 136) = 19.51$, $p < .001$, with a large effect size $\eta_p^2 = .30$. Here, no autocorrection, followed by 20% failing autocorrection, were the two most physically demanding conditions. Finally, we observed a statistically significant effect on effort across the conditions, $F(3, 136) = 8.55$, $p < .001$, with a large effect size $\eta_p^2 = .16$. No autocorrection and 20% failing autocorrection required the most effort. The means and results from the post-hoc analyses are presented in Figure 5. As we had prompted participants with our survey seven times during the study to investigate changes in frustration and workload over time, we illustrate the fluctuations of the answers in Fig. 7.
154
+
155
+
156
+
157
+ Figure 4: a) Average words per minute (WPM), b) average keystrokes per character (KSPC), and c) average verification time (seconds) for each condition. The three asterisks (***) illustrate a significant difference with $p \leq 0.001$.
158
+
159
+ Figure 5: a) Average effort, b) average mental demand, and c) average physical demand for each condition. The three asterisks (***) illustrate a significant difference with $p \leq 0.001$.
160
+
161
+
162
+
163
+ Figure 6: Average frustration for each condition. The three asterisks (***) illustrate a significant difference with $p \leq 0.001$.
164
+
165
+ § 5.0.3 INTERVIEWS
166
+
167
+ At the end of the session, we conducted a semi-structured interview with each participant, focusing on any observed behaviors or comments users made during text entry. We analyzed what people told us by first coding our interview data in a systematic manner and then identifying larger themes from that data.
168
+
169
+ When we asked participants about their experience with autocorrection, Participant 4 mentioned that it offers easy help to accelerate typing. Participant 6 stated, "It helps me type so much faster than all my friends because they don't use it. So, I would say almost all of the time, [it] is a good experience," and Participant 2 said, "it's a mini helper." Still, Participant 11 indicated that it can slow them down, disturb, and hinder communication. Participant 13 had a more balanced view and said, "It can be helpful, but also detrimental."
170
+
171
+ We also asked for stories about (positive or negative) episodes that participants had encountered with autocorrection. Participant 5 said, "my friend was complaining about autocorrect in a text and it was changed to 'auto cucumber'.", which was humorous enough to make it into the title of this paper. However, Participant 7 said, "a friend of mine sent an entirely different text to his wife because of autocorrection. She was so mad. He had to [provide] a lot of explanation to calm [her] down." Autocorrection also can lead to social embarrassment, as Participant 11 said, "due to autocorrection [I] typed a slang [word] instead of a person's surname. This was on a WhatsApp group chat. Later people mention this personally and I was so embarrassed," while Participant 13 said, "while messaging in a family group, autocorrection changed my wishes from 'dear' to 'dead'." Participants 2, 8, and 14 indicated that they sent a professional email to their employer and autocorrect changed some words to common slang terms. Participant 2 indicated that he sent his boss a curse word by accident because he used his phone to send an email. On the positive side, autocorrection can also lead to unexpected pleasant outcomes, including for Participant 4, who indicated that his friend got married because of an autocorrection changing "have" to "love," in a situation where the recipient seems to already have been in love with his friend.
172
+
173
+
174
+
175
+ Figure 7: Average a) effort, b) physical demand, and c) mental demand for each condition for each survey prompt starting from the initial baseline prompt.
176
+
177
+
178
+
179
+ Figure 8: Average a) frustration, b) mental demand, and c) words per minute for each condition for each survey prompt, starting from the initial baseline prompt.
180
+
181
+ We also asked how autocorrection makes participants feel. Some expressed positive emotions such as good, happy, confident, comfortable, easy, safe, satisfied, less stress, and "makes life easier." Yet, others mentioned negative emotions such as frustrated, irritated, aggravated, bothersome, annoying, lazy, and unsatisfied. Some were neutral and indifferent. Participants 11 and 18 mentioned that autocorrection "weirded them out" because autocorrect can present sensitive data, such as passwords or names, which should not be stored, or personal suggestions that they do not recall typing into their phone.
182
+
183
+ Additionally, we asked participants about the type of words that are hardest to correct after an incorrect autocorrect. Participants mentioned errors due to grammar, especially tenses, mistakes in longer-than-average, complex, or new words, and surnames. Many discussed mistakes due to a forgotten space, where Participant 1 talked about an unfortunate autocorrect that happened "when I pressed b instead of the space bar." Four participants indicated that mistakes at the beginning of a word are usually the hardest to autocorrect. Many mentioned mistakes that occur when they use multiple languages on the keyboard.
184
+
185
+ Finally, we asked participants about their design recommendations. Participant 3 said that systems designers should "make it slightly more hidden and less distracting," while Participant 8 said, "I think if we made mistakes on typing there [should be] a sound like [an] alarm, it will be useful" and suggested that "Highlighting [the] background of suggestion[s]" might be helpful.
186
+
187
+ Many participants mentioned that they would prefer if there were a button on the keyboard to quickly toggle autocorrection in a single click, instead of having to go into the settings dialog. Participant 18 added, "I think you should have a confidence score on the side of the screen so users could feel comfortable turning it off at times." Participants 5 and 19 indicated that they want to see synonyms, one of which suggested, "Maybe keep the drop-down option or even add it to the screen while typing with various spellings or adding an option to [show] a meaning or similar words [thesaurus option]." Participant 10 suggested allowing deletion of standard dictionary words: "I have no idea what a 'wyeth' is, but it's in my Android dictionary and can't be deleted." Others suggested sentence completion using artificial intelligence. Finally, Participant 9 expressed a desire for an option for autocorrections based on their location, as people communicate differently in different geographical locations.
188
+
189
+ After they completed the task, we asked participants about their text entry behavior during our tasks. A majority (70%) indicated that they typed as fast as possible, while 30% reported that they were as careful as possible. All participants entered text using (the thumbs or fingers of) both hands.
190
+
191
+ § 5.0.4 OBSERVATIONS DURING TEXT ENTRY
192
+
193
+ We reviewed the videos from the experiment to further understand user behaviors. We saw that expressions of frustration were much more frequent in conditions where more autocorrection failures occurred; however, participants were less expressive about their frustration in the conditions with 10% and 20% failures, compared to no autocorrection. Participants who experienced no autocorrection freely expressed their frustration and let us know about their feelings. We also found that our experiment was quite sensitive to user behaviors. For instance, we identified two spikes in the reported frustration for an optimal-condition participant. Going back to the videos, we observed that they had said "the word 'distraction' is really hard to type" and, in the other instance, they mentioned that typing the word "rectangular" was time-consuming for them. Another participant with the optimal condition said, "I am not sure; I am confused about the autocorrection. I want to go back and fix a mistake, but it is fixed for me [pause], which is good by the way." Yet another optimal-condition participant mentioned that they did not know if the autocorrection was on, but their frustration level was low for the whole session. Two participants experiencing the condition of autocorrection with 10% mistakes said that the "autocorrection feature here [is] very similar to what I have in my phone." At the beginning of the study, we also observed that the majority of our participants did not know how to turn their autocorrection off, i.e., we had to help them turn it off. This did not apply to those who used custom keyboards.
194
+
195
+ § 6 DISCUSSION
196
+
197
+ We see some evidence that perfect autocorrect is better than the other autocorrection alternatives. Furthermore, autocorrect that fails 10% of the time is in some measures better than 20% failures, which in turn is also generally better than no autocorrect. Overall, as Figure 6 illustrates, lowering the percentage of autocorrect failures will reduce frustration.
198
+
199
+ We observed that using autocorrection significantly increases typing speed compared to not using it, with the optimal option being the fastest. However, we did not find a statistically significant difference between the failing 10% and 20% options in terms of typing speed. When we compared the typing speed for each condition, we noticed that participants' speed increased over time during our experiment, while without autocorrection it initially increased but then flattened out, see Figure 8.
200
+
201
+ Participants spent the least time verifying the phrases in both the optimal and the 10% failing autocorrection conditions, see Figure 4. This is explained by our finding that these two conditions were not significantly different in terms of both mental demand and effort, see Figure 5.
202
+
203
+ The significantly higher number of keystrokes per character without autocorrection provides supporting evidence that the condition without autocorrection significantly decreased participants' typing speed, compared to all other options for autocorrection, see Figure 4. This also matches results from previous work (e.g., [5]).
204
+
205
+ No autocorrection and autocorrection with 20% failures stood out as the most frustrating conditions. There is also a chance that the frustration stems from the participants' frustration with themselves for making errors, instead of frustration with the autocorrection itself. A participant noticed an autocorrection error and said "I am very bad typer, I never fix my mistakes [pause], maybe it is just me." This may be due to the fact that frustration can also lead users to believe that they are failing at a task [7]. This raises the question of how small the acceptable percentage of autocorrection failures should be. This is an interesting avenue for future quantitative studies.
206
+
207
+ Despite occasional failures, participants felt that they had less mental demand with autocorrection, regardless of its accuracy, see Figure 5. In the post-session interview, we asked them about their behaviors and perceptions around autocorrection. Most of them said, in one way or another, that they accepted that autocorrection fails occasionally. As previous research has identified, lowering expectations can make emotional responses to subsequent failures less intense [44].
208
+
209
+ The physical demand significantly increased based on autocorrection accuracy, see Figure 5, since more frequent mistakes require more editing, which increases physical demand. Also unexpectedly, physical demand peaked with no autocorrection. Mental and physical demand, as well as frustration, all exhibit similar patterns, with the optimal condition being the least demanding and no autocorrection being the most demanding condition, see Figure 7.
210
+
211
+ Participants felt that they needed to spend less effort on completing the task in the optimal condition, see Figure 5. Interestingly, and in contrast to the other conditions, effort generally decreases over time.
212
+
213
+ Participants indicated that autocorrection is overall a useful feature, when used sensibly. However, they also felt that it can sometimes change the meaning of a sentence entirely if they do not pay sufficient attention. As mentioned in the description of the experiment, when we asked for stories about positive or negative episodes that participants had experienced with autocorrection, participants said that autocorrect sometimes produces hilarious mistakes such as "my friend was complaining about autocorrect in a text and it was changed to 'auto cucumber'." However, some indicated that autocorrection can lead to serious mistakes and social embarrassment (see Section 5.0.3). Thus, participants said that in certain scenarios, e.g., sending professional emails or texting parents, they have to verify the text a couple of times and be more cautious. With the advance in algorithms and personalization, users are sometimes exposed to side-effects, which can save sensitive data that is then shown at inappropriate times, such as specific words that they use only in contexts unrelated to the current text message (e.g., passwords). Some participants mentioned that autocorrection "weirded them out" and that they were concerned about potential privacy issues.
214
+
215
+ Our participants generally indicated that the autocorrect mistakes that are hardest to correct are the ones that happen at the beginning of a sentence, likely because it takes longer to navigate to such positions in the text. There is substantial research on how to facilitate error correction, and many keyboards provide advanced techniques to tackle such issues, e.g., WiseType [1] or other work [2, 43]. However, most of these techniques have not yet been adopted in the built-in keyboards of most smartphones. Many participants indicated that mistakes occur when they use multiple languages on the keyboard, which was fairly prevalent in our participant pool. There is thus a need to re-consider how multiple dictionaries should be handled, as well as better language detection methods within a keyboard's implementation. Also, our participants emphasized that they would like to see keyboards with a built-in grammar checker. Grammar checkers were not yet widely available on commercial mobile keyboards at the time of our work, but recent work found that adding a grammar checker helps improve text entry speed and accuracy [1].
216
+
217
+ Participants were split about how they prefer visual feedback for autocorrects that occurred in the text. Some wished to have slightly more hidden and less distracting feedback, while others wanted highlighting and more obvious feedback for autocorrects. This indicates the importance of giving users the ability to change the visualization settings for autocorrection instances, not just the option to turn it on/off.
218
+
219
+ Participants made several interesting design recommendations. Many participants indicated that they would prefer if there were a button on the keyboard to quickly toggle autocorrection using a single step, instead of having to go into the settings dialog, see Figure 9. Some existing virtual keyboards have an option to turn off the autocorrection. However, this always requires multiple interaction steps through settings dialogues and similar mechanisms. The majority of our participants indicated that they did not know how to turn autocorrection off and on. One mentioned the idea of having a confidence score on the side of the screen. Others indicated that they want to see synonyms as drop-down options, similar to some desktop text processing systems. Another recommendation is to have an option for autocorrections based on the current location, because people communicate differently in different areas, i.e., the requirement for correctness is typically higher at work. Many participants said that they did not know how to delete words from dictionaries, which demonstrates that there are more opportunities to improve the interaction with the dictionary supporting autocorrect (for more details, see Section 5.0.3).
220
+
221
+
222
+
223
+ Figure 9: A design recommendation from our participants for adding a button on the keyboard to quickly toggle autocorrection.
224
+
225
+ Even though we collected data from only five participants per condition, the significant differences in our results exhibit large or (at least) medium effect sizes, which we see as an indication that our results are unlikely to be spurious. Also, we point out that Kapoor et al.'s research on the automatic prediction of frustration in an intelligent system similarly relied on only four participants per condition [29].
226
+
227
+ A potential limitation of our work is that our autocorrection implementation might have produced different outcomes relative to system-generated predictions, which are typically based on machine-learning-based approaches [51]. Yet, as autocorrect works differently on different platforms, we could not identify a simple way to perfectly match the behaviour that users are used to across platforms, while still giving our software access to uncorrected input and/or allowing us to implement an optimal autocorrect condition. Also, two participants stated of their own volition, i.e., without prompting or questions from our side, that they perceived our 10% failing autocorrect implementation to match closely the one on their current smartphone. One reason behind this is that many users use smartphone models that are a few years old, which means that their experience with autocorrect also lags behind the state of the art, especially on the Android platform, which many participants used. Thus, we believe that we can still state that at the time our study was performed, our implementation was ecologically valid for the study.
228
+
229
+ Additionally, we used our own implementation because we wanted to tightly control the percentage of autocorrect failures and to explore the best-case scenario with "perfect" autocorrect conditions, which is similar to the "100% accurate" autocorrect condition in [5]. This condition closely resembles an oracle. Even with the use of advanced predictive autocorrection algorithms, it would be impossible to guarantee that a given number of failures would occur, especially since we cannot predict when or how the user enters any misspelled word. After all, wrong autocorrections can be due to participants entering unrecognized words with (potentially compounding) issues, such as spelling the word wrong, using the wrong touch locations, and/or missing a space. Interestingly, powerful autocorrect algorithms that predict corrections based on words, sentences, and user history can fail as well. Some participants with the newest phones indicated that the quick adaptability of these newer methods can create issues for them, such as the system memorizing slang or curse words and then ranking them highly, in situations where participants do not want the system to utilize such content for autocorrection.
230
+
231
+ § 7 CONCLUSION
232
+
233
+ We assessed the effect of autocorrection failures on the user's mental and physical demand, performance, and effort during typing tasks using self-report measures, a think-aloud protocol, and interviews. We showed that the higher the frequency of autocorrection failures, the more likely it is that participants become frustrated. Then we listed several design recommendations for giving users the ability to temporarily adjust the behavior of autocorrection.
234
+
235
+ In the future, we will conduct a study to explore the effect of methods that are designed to ease users' frustration when autocorrection fails. We also want to identify behavioural patterns around user frustration and potentially conduct quantitative studies that pinpoint at which failure percentage the frustration associated with autocorrect disappears. Finally, we also plan to look further into how to better support autocorrection for bilingual users and the implications of autocorrect failures that occur when using multiple languages on a keyboard.
236
+
237
+ § ACKNOWLEDGMENTS
238
+
239
+ We would like to thank the participants. The work was funded by King Saud University to whom we are also grateful.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/eK2ZbaaJvd/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,431 @@
1
+ Blocks: Creating Rich Tables with Drag-and-Drop Interaction
2
+
3
+ Category: Research
4
+
5
+ ![01963e73-f90e-70cf-86fd-7721b334ff9e_0_222_331_1351_623_0.jpg](images/01963e73-f90e-70cf-86fd-7721b334ff9e_0_222_331_1351_623_0.jpg)
6
+
7
+ Figure 1: A rich table showing data about Airbnb listings in Seattle, created with Blocks. The table shows a variety of mark types and measures at several levels of detail combined into a single visualization. Each column of the table is defined by a Block with its own set of encoding and field mappings. The columns, from left to right, show: rows for each neighborhood group, sorted by average listing price; a labeled bar chart showing average price, colored by availability; rows for each neighbourhood within each neighborhood group; the same labeled bar chart, but showing average price for each neighborhood; and a sparkline showing average price over time.
8
+
9
+ ## Abstract
10
+
11
+ We present Blocks, a formalism that enables the building of visualizations by specifying layout, data relationships, and level of detail (LOD) for specific portions of the visualization. Users can create and manipulate Blocks on a canvas interface through drag-and-drop interaction, controlling the LOD of the data attributes for tabular style visualizations. We conducted a user study to compare how 24 participants employ Blocks and Tableau in their analytical workflows to complete a target visualization task. We also ran a subsequent longitudinal diary study with eight participants to better understand both the usability and utility of Blocks in their own analytical inquiries. Findings from the study suggest that Blocks is a useful mechanism for creating visualizations with embedded microcharts, conditional formatting, and custom layouts. We finally describe how the Blocks formalism can be extended to support additional composite visualizations and Sankey charts, along with future implications for designing visual analysis interfaces that can handle creating more complex charts through drag-and-drop interaction.
12
+
13
+ Keywords: Formalism, level of detail, nesting, layout, conditional formatting, rich tables, drag-and-drop interaction.
14
+
15
+ ## 1 INTRODUCTION
16
+
17
+ Visual analysis tools [15, 23] help support the user in data exploration and iterative view refinement. Some of these tools are more expressive, giving expert users more control, while others are easier to learn and make it faster to create visualizations. These tools are often driven by underlying grammars of graphics [27, 43] that provide various formalisms to concisely describe the components of a visualization. High-level formalisms such as VizQL [40] and ggplot2 [42] are set up to support partial specifications of the visualization and hence provide the convenience of concise representations. Reasonable defaults are subsequently applied to infer missing information to generate a valid graphic. The downside of these concise representations is that the support for expressiveness in visualization generation is either limited or difficult for a user to learn.
18
+
19
+ Drag-and-drop is one paradigm for addressing the limitations of expressivity by supporting task expression through user interaction, where the visibility of the object of interest replaces complex language syntax. VizQL is one such formalism that supports the expression of chart creation through direct manipulation in Tableau [23]. While the language enables users to create charts through its underlying compositional algebra, there is still tight coupling between the query, the visualization structure, and the layout. As a result, users often spend significant time generating complex visualizations when they have a specific structure and layout in mind. The other paradigm for promoting expressiveness in chart creation is the use of declarative specification grammars [29, 38, 39] that can programmatically express the developer's intentions.
20
+
21
+ Despite the prevalence of these tools, creating expressive data visualizations still remains a challenging task. Beyond having a good insight about how the data can be best visualized, users need to have sufficient knowledge to generate these visualizations. So, how can we support users in their analytical workflows by enabling a greater degree of flexibility and control over nesting relationships, layout, and encodings, yet providing the intuitiveness of a user interface? In this paper, we address this dichotomy between expressibility and ease of use for the user by extending VizQL to provide greater flexibility in creating expressive charts through direct manipulation.
22
+
23
+ ### 1.1 Contributions
24
+
25
+ Specifically, our contributions are as follows:
26
+
27
+ - We introduce Blocks, a formalism that builds upon VizQL by supporting the nested relationships between attributes in a visualization using a drag-and-drop interaction. Every component of the visualization is an analytical entity to which different nesting and encoding properties can be applied.
28
+
29
+ - We implement a Blocks System that provides a user increased flexibility with layout and formatting options through the direct manipulation of Block objects in the interface.
30
+
31
+ - We evaluated Blocks with 24 participants when performing tasks involving the creation of rich tables using both Tableau and Blocks. Eight of these users recorded their explorations using Blocks in their own workflows for an additional two-week diary study. Findings from the studies indicate that Blocks is a promising paradigm for the creation of complex charts. We identify research directions to pursue to better support users' mental models when using the system.
32
+
33
+ Figure 1 shows how a user can create a rich table using Blocks with a Seattle Airbnb dataset. The assembly of Blocks in the interface results in columns with different mark types, such as bar charts and sparklines. The query for each Block inherits the dimensions from its parent Blocks. The first price column inherits the field neighbourhood_group as its dimension, computing the average price for each neighbourhood group. The second price column inherits both neighbourhood_group and the field neighbourhood, showing price at the more granular level of individual neighbourhoods.
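A minimal sketch of this inheritance rule, with hypothetical names of our choosing (the paper does not spell out the Blocks system's actual data model):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Block:
    mark: str                           # e.g. "text", "bar", "sparkline"
    measure: Optional[str] = None       # e.g. "avg(price)"
    dimension: Optional[str] = None     # e.g. "neighbourhood_group"
    parent: Optional["Block"] = None

    def dimensions(self) -> list:
        """A Block's query groups by its own dimension plus every
        dimension inherited from its chain of parent Blocks."""
        inherited = self.parent.dimensions() if self.parent else []
        return inherited + ([self.dimension] if self.dimension else [])

# The two price columns of Figure 1: same measure, different level of detail.
groups = Block("text", dimension="neighbourhood_group")
price_by_group = Block("bar", measure="avg(price)", parent=groups)
hoods = Block("text", dimension="neighbourhood", parent=groups)
price_by_hood = Block("bar", measure="avg(price)", parent=hoods)

print(price_by_group.dimensions())   # ['neighbourhood_group']
print(price_by_hood.dimensions())    # ['neighbourhood_group', 'neighbourhood']
```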
34
+
35
+ ## 2 RELATED WORK
36
+
37
+ Visual analysis techniques can be broadly classified into two main categories: (1) declarative specification grammars that provide high-level language abstractions and (2) visual analysis interfaces that facilitate chart generation through interaction modalities.
38
+
39
+ ### 2.1 Declarative specification grammars
40
+
41
+ Declarative visualization languages address the problem of expressiveness by allowing developers to concisely express how they would like to render a visualization. Vega [39] and Vega-Lite [38] support the authoring of interactivity in the visualizations. While these specification languages provide a great degree of flexibility in how charts can be programmatically generated, they provide limited support for displaying different levels of granularity within a field in a visualization. Further, they require programming experience, making it challenging for non-developers to quickly develop advanced charts in their flow of analysis. Viser [41] addresses this gap by automatically synthesizing visualization scripts from simple visual sketches provided by the user. Specifically, given an input data set and a visual sketch that demonstrates how to visualize a very small subset of this data, their technique automatically generates a program that can be used to visualize the entire data set. Ivy [35] proposes parameterized declarative templates, an abstraction mechanism over JSON-based visualization grammars. A related effort by Harper and Agrawala [32] converts D3 charts into reusable Vega-Lite templates for a limited subset of D3 charts. While our work is similar to that of declarative grammars and template specifications in the sense of abstracting low-level implementation details from the user, we focus on supporting non-developer analysts in creating expressive charts through drag-and-drop interaction. We specifically extend the formalism of VizQL for supporting nested queries, layout, and encoding flexibility through drag-and-drop interaction in the Blocks interface.
42
+
43
+ ### 2.2 Visual analysis interfaces
44
+
45
+ Visual analysis tools over the years have developed ways to help novice users get started in a UI context. The most basic form of chart generation in these tools is the chart picker, prevalent in various visual analysis systems [26]. Commercial visual analysis tools such as Tableau and PowerBI, along with systems like Charticulator [36], are built on a visualization framework that enables users to map fields to visual attributes using drag-and-drop interaction. As more analytical capabilities are enabled in these tools, there is a disconnect from the underlying abstraction, leading to calculation editors and dialog menus that add both complexity and friction to the analytical workflow.
46
+
47
+ Prior work has explored combinations of interaction modalities for creating visualizations. Liger [37] combines shelf-based chart specification and visualization by demonstration. Hanpuku [28], Data-Driven Guides [33], and Data Illustrator [34] combine visual editor-style manipulation with chart specification. However, none of these systems specifically focus on a visually expressive way of handling nested relationships during chart generation; a common and important aspect of analytical workflows. Our work specifically addresses this gap and focuses on supporting analysts in a visual analysis interface for creating more expressive charts with nestings by using drag-and-drop as an interaction paradigm.
48
+
49
+ Domino [31] is a system where users can arrange and manipulate subsets, visualize data, and explicitly represent the relationships between these subsets. Our work is similar in concept in that direct manipulation is employed to visually build relationships in charts, but there are differences. Domino has limited nesting and inheritance capabilities, as it does not define parent-child relationships between blocks to support dependent relationships (e.g., a column depending on rows). Its expressiveness for complex visualizations, such as rich tables with repeated cells containing sparklines, text, and shapes, is limited.
50
+
51
+ ## 3 TABLEAU USER EXPERIENCE
52
+
53
+ The core user experience of Tableau is placing Pills (data fields) onto Shelves (specific drop targets in the interface). This controls the data used as well as the structure and layout of the final visualization. Fields without an aggregation are called Dimensions. Measures are fields that are aggregated within groups defined by the set of all dimensions, i.e., the Level of Detail (LOD).
54
+
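+ To make the LOD concept concrete, here is a minimal sketch using a toy Pandas DataFrame (the data and field names are illustrative, not Tableau code):
+
+ ```python
+ import pandas as pd
+
+ # Toy data: two dimensions (Region, Category) and one measure (Sales).
+ df = pd.DataFrame({
+     "Region":   ["East", "East", "West", "West"],
+     "Category": ["A", "B", "A", "B"],
+     "Sales":    [100, 150, 200, 250],
+ })
+
+ # SUM(Sales) at the LOD of {Region}: one aggregate per region.
+ print(df.groupby("Region")["Sales"].sum())
+
+ # SUM(Sales) at the LOD of {Region, Category}: one aggregate per combination.
+ print(df.groupby(["Region", "Category"])["Sales"].sum())
+ ```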
55
+ The key shelves are the Rows Shelf, the Columns Shelf, and the visual encoding shelves that are grouped into the Marks Card. Fields on the Rows and Columns Shelves define "headers" if discrete or "axes" if continuous. The Marks Card specifies a mark type and visual encodings such as size, shape, and color. If there is more than one Marks Card, the group of visualizations defined by the Marks Cards forms the innermost part of the chart structure, repeated across the grid defined by the Rows and Columns Shelves.
56
+
57
+ The Blocks system attempts to address three limitations inherent to the Tableau experience:
58
+
59
+ - The separation between "headers" and "marks" concepts. The headers define the layout of the visualization and cannot be visually encoded. Only fields on the Marks Card participate in creating marks, but the marks must be arranged within the grid formed by the headers. For example, it is not possible to have a hierarchical table where the top level of the hierarchy is denoted by a symbol rather than text.
60
+
61
+ - The Rows and Columns Shelves are global. As per their names, a field on the Rows Shelf defines a horizontal band, and a field on the Columns Shelf a vertical band, across the entire visualization. For example, it is not possible to place a y-axis next to a simple text value, as one does for sparklines.
62
+
63
+ - Queries are always defined using both the Rows and Columns Shelves, along with the Marks Card. For example, it is not possible to get the value of a measure at an LOD of only dimensions from the Rows Shelf, without those on the Columns Shelf.
64
+
65
+ Users have found ways to work around these limitations to build complex visualizations, such as rich tables with sparklines or visualizations with encodings at different LODs. These methods include composing multiple visualizations on a dashboard so they appear as one [2]; writing complex calculations to control the layout or formatting of elements [3-5, 7, 11-13]; and creating axes with only a single value [1, 20], among others. Tableau introduced LOD expressions to help answer questions involving multiple levels of granularity in a single visualization [14]. The concept of LOD expressions sits outside of Tableau's core UI paradigm of direct manipulation: users need to define LOD calculated fields via a calculation editor and understand the syntax structure of Tableau formulae.
66
+
67
+ ## 4 DESIGN GOALS
68
+
69
+ To better understand the limitations of Tableau for creating more expressive visualizations, we interviewed 19 customers, analyzed 7 internal dashboards, and reviewed 10 discussions on the Tableau Community Forums [18] in which users shared various workarounds to accomplish their analytical needs. Each customer interview had one facilitator and one notetaker. The customers we interviewed came from medium- and large-sized companies that employ Tableau in their work. Each interview consisted of an hour-long discussion where we probed these customers to better understand their use cases. We conducted a thematic analysis through open-coding of interview notes and the Tableau workbooks the customers created and maintained. Finally, we reviewed the top ideas in the Tableau Community Forums to locate needs for more expressive visualizations. These ideas included extensive discussions among customers, which helped us better understand the use cases as well as the ways customers work around limitations today. We reviewed our findings, summarized what we learned, and identified common patterns from our research. This analysis is codified into the following design goals:
70
+
71
+ ## DG1. Support drag-and-drop interaction
72
+
73
+ Tableau employs a drag-and-drop interface to support visual analysis exploration. We learned through discussions with an internal analyst how important table visualizations were for her initial exploration of her data. Her first analytic step was to view her data in a table at multiple LODs and confirm that the numbers matched her expectations based on domain knowledge. We also noticed that many customers used tables to check the accuracy of their calculations throughout their analysis. These discussions indicated that tables are not just an end goal of analysis, but play a key part in the exploratory drag-and-drop process. Our goal is to maintain the ease of use provided by the drag-and-drop interface and data-driven flow when creating visualizations.
74
+
75
+ ## DG2. Better control over visualization components and layout
76
+
77
+ Tableau employs defaults to help users manage the large space of possibilities that a compositional language creates [40]. When users have specific ideas of what they want to create, their workflows often conflict with the system defaults. A customer at a large apparel company described the challenges they ran into when replicating an existing report in Tableau. In order to match all of the desired formatting and layout, they had to delicately align multiple sheets together on a single dashboard. Not only did the customer find this frustrating to maintain, but they often ran into issues with alignment and responsive layout. Our goal is to support users with increased layout flexibility as they generate charts for their analytical needs.
78
+
79
+ ## DG3. Aggregate and encode at any LOD in a visualization
80
+
81
+ As users strive to build richer visualizations, the need arises for more control over showing information at multiple LODs. While Tableau supports calculations to control the LOD a measure aggregates to, creating these calculations does not provide the ability to visually encode at any LOD and takes users out of their analytic workflow. For example, one customer at a large technology company had a table visualization that listed projects and the teams who worked on each of the projects. Some of the measures needed to show information at the project level (such as total cost), while other measures were at the team level (amount of effort required per team). Building this visualization in Tableau required the customer to write many LOD calculations. Our goal is to provide the ability to use visual encodings and a drag-and-drop experience to evaluate measures at any LOD from any component of the visualization.
82
+
83
+ ## 5 THE BLOCKS FORMALISM
84
+
85
+ The Blocks formalism uses an arbitrary number of connected local expressions (i.e., Blocks) instead of global Rows and Columns expressions. Each Block represents a single query of a data source at a single LOD, resulting in a component of the final visualization. Parent-child relationships between the Blocks form a directed acyclic graph (DAG).
86
+
87
+ A block-name (see the grammar below) is a unique identifier for the Block. The valid values of field-name and aggregation depend on the fields in the data source and the aggregation functions supported by that data source for each field. Any field-instance with an aggregation is used as a measure; all others are used as dimensions.
88
+
89
+ The local LOD of the Block is the set of all dimensions used by any encoding within the Block. The full LOD of the Block is the union of its local LOD and the local LODs of all of its ancestors. All of the measures used by the Block are evaluated at the full LOD of the Block. In addition to defining the LOD, the encodings map the query results to visual and spatial encodings. Except for "sort-asc", "sort-desc", and "detail", each encoding-type must occur at most once within each Block. The sort encodings control the order of the query result and ultimately the rendering order; their priority is determined by the order in which they appear. By providing a means to encode the x-axis and y-axis at the visualization component level instead of as part of a global table expression as in Tableau, Blocks addresses DG3 with respect to sparklines and other micro charts within a table visualization.
90
+
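+ As a sketch of these rules, the full LOD can be computed as a union over the parent DAG (the class layout and field names here are our own simplification, not the system's actual code):
+
+ ```python
+ from dataclasses import dataclass, field
+
+ @dataclass
+ class Block:
+     name: str
+     parents: list = field(default_factory=list)    # parent Blocks in the DAG
+     encodings: list = field(default_factory=list)  # (encoding-type, (aggregation, field-name))
+
+     def local_lod(self):
+         # Dimensions are field instances without an aggregation.
+         return {f for (_, (agg, f)) in self.encodings if agg is None}
+
+     def full_lod(self):
+         # Union of the local LOD with the local LODs of all ancestors.
+         lod = set(self.local_lod())
+         for parent in self.parents:
+             lod |= parent.full_lod()
+         return lod
+
+ # Figure 4a-style example: Class on rows, SUM(NumSurvived) on the x-axis.
+ c = Block("C", encodings=[("text", (None, "Class"))])
+ n = Block("N", parents=[c], encodings=[("x-axis", ("SUM", "NumSurvived"))])
+ print(n.full_lod())  # {'Class'}: NumSurvived is evaluated per Class
+ ```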
91
+ ---
+
+ block := (block-name, layout-type, mark-type, encoding, children)
+
+ children := { child-group }
+
+ child-group := { block-name }
+
+ layout-type := "rows" | "columns" | "inline"
+
+ mark-type := "text" | "shape" | "circle" | "line" | "bar"
+
+ encoding := ({ encoding-type }, field-instance)
+
+ encoding-type := "color" | "size" | "shape" | "text" | "x-axis" | "y-axis" | "sort-asc" | "sort-desc" | "detail"
+
+ field-instance := ([aggregation], field-name)
+
+ ---
124
+
125
+ Each Block renders one mark of its mark-type per tuple in its query result. The layout-type determines how each of the Block's rendered marks is laid out in space. A Block with the rows layout type creates a row for each value in its domain, with each row containing a single mark. For example, a Block with the rows layout type and text mark type generates a row displaying a text string for each value in the Block's domain. A Block with the columns layout type creates a column for each value, with each column containing a single mark. To facilitate the creation of scatter plots, line graphs, area charts, and maps, a Block with the inline layout type renders all of its marks in a single shared space.
126
+
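+ For instance, a rich table with a sparkline column could be captured by a rows Block nested with an inline child; a sketch in the grammar above, written as Python dictionaries (the concrete values are illustrative, not taken from the system):
+
+ ```python
+ # A rows Block: one row (and one text mark) per county.
+ county_rows = {
+     "block-name": "County", "layout-type": "rows", "mark-type": "text",
+     "encoding": [(("text",), (None, "County"))],
+     "children": [["Deaths"]],  # one child-group sharing the county's row
+ }
+
+ # An inline child Block: all marks share one space per cell, so the line
+ # marks over Date form a sparkline within each county's row.
+ deaths_sparkline = {
+     "block-name": "Deaths", "layout-type": "inline", "mark-type": "line",
+     "encoding": [(("x-axis",), (None, "Date")),                  # dimension: local LOD
+                  (("y-axis",), ("SUM", "Total Count Deaths"))],  # measure at full LOD
+     "children": [],
+ }
+ ```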
127
+ ![01963e73-f90e-70cf-86fd-7721b334ff9e_3_148_155_1500_556_0.jpg](images/01963e73-f90e-70cf-86fd-7721b334ff9e_3_148_155_1500_556_0.jpg)
128
+
129
+ Figure 2: Blocks system overview. Users create Block GUI Cards that can define multiple field encodings at a single LOD. The Block GUI card is translated into a Block specification. This specification consists of some number of dimensions, some number of measures aggregated to the LOD of the cross product of the dimensions, a layout, the visual encodings, a mark type, some number of filters, and a sort order. From this Block specification, a Block query is issued to the source data source. The output of a Block query is a Block result set which returns the tuples and corresponding encoding results. This is finally rendered as an output visualization.
130
+
131
+ Child Blocks are laid out in relation to their parents' positioning. A child-group is a set of children that share the same row (for a rows parent) or column (for a columns parent). E.g., in Figure 4d, the children of Block R are ((Block B, Block C), Block G); B and C are on the same row and so form a child-group. To ensure the layout can be calculated, the DAG must simplify to a single tree when considering only the children of rows Blocks or only the children of columns Blocks. This layout system enables Blocks to address DG2 by defining labels, axes, and marks all using the single Block concept. Figure 4a shows how Blocks can be expressed with the formalism.
132
+
133
+ ## 6 THE BLOCKS SYSTEM
134
+
135
+ The Blocks system provides an interface for creating Blocks and viewing the resulting visualizations. Figure 2 illustrates the architecture. The Blocks Interface (1) and Output Visualization (4) are React-based [17] TypeScript [25] modules that run in a web browser. The interface communicates over HTTPS with a Python back-end that implements the Query Execution (2) and Row and Column Assignment (3) processes. The system can use either of two query execution systems: a simple one built on Pandas [16] and local text files, or a connection to a Tableau Server Data Source [22], which provides access to Tableau's rich data model [19]. The back-end returns the visual data needed by the front end for rendering the output visualization.
136
+
137
+ ### 6.1 Blocks interface
138
+
139
+ The Blocks interface provides a visual, drag-and-drop technique to encode fields, consistent with DG1. Like Tableau, pills represent fields, and a schema pane contains the list of fields from the connected data source. Instead of a fixed number of shelves, the Blocks interface provides a canvas that supports an arbitrary number of Blocks. Dragging a pill to a blank spot on the canvas creates a new Block, defaulting the Block's encoding, mark type, and layout type based on the metadata of the field that the pill represents.
140
+
141
+ ![01963e73-f90e-70cf-86fd-7721b334ff9e_3_925_938_717_227_0.jpg](images/01963e73-f90e-70cf-86fd-7721b334ff9e_3_925_938_717_227_0.jpg)
142
+
143
+ Figure 3: Possible drop targets are shown to the user just-in-time as they drag pills to the Blocks canvas.
144
+
145
+ For example, dragging out a pill that represents a discrete string field will create a Block with the rows layout type, the text mark type, and the field encoded on text. The layout type and mark type are displayed at the top of the Block. Encodings are displayed as a list inside the Block. Additional pills can be dragged to blank space on the canvas to create a new, unrelated Block, added as an additional encoding to an existing Block, or dropped adjacent to an existing Block to create a new related Block.
146
+
147
+ As seen in Figure 3, when a pill is dragged over an existing Block, drop targets appear that represent any unused encodings that the system provides for that Block. When a pill is dragged over an area adjacent to an existing Block, drop targets appear to assist in creating a new related Block. If the pill being dragged represents a dimension field, the system provides options to create a new Block with either the rows layout type or the columns layout type; the dimension field is encoded on text by default. If the pill being dragged represents a measure field, the system provides the option to encode the measure on the x-axis, y-axis, or text of a Block that defaults to the inline layout type. Once the new, related Block is created, the layout type, mark type, and encoding can all be customized.
148
+
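+ The defaulting behavior might be approximated as follows (a sketch of the rules described above, not the actual implementation; the measure's default mark type in particular is our assumption):
+
+ ```python
+ def default_block_for_pill(field_name, is_measure):
+     """Build a default Block spec when a pill is dropped on blank canvas
+     or adjacent to an existing Block."""
+     if is_measure:
+         # Measures default to an inline Block; while dragging, the user
+         # chooses x-axis, y-axis, or text as the target encoding.
+         return {"layout-type": "inline", "mark-type": "bar",
+                 "encoding": [(("x-axis",), ("SUM", field_name))]}
+     # Discrete dimension fields default to a rows Block with text marks.
+     return {"layout-type": "rows", "mark-type": "text",
+             "encoding": [(("text",), (None, field_name))]}
+ ```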
149
+ There are two implicitly-created root Blocks that are invisible in the interface, a Rows root and a Columns root. Any Block that has no parents is the child of the Rows root Block and Columns root Block. These root Blocks are used as the starting point for calculating Row and Column indexes, as described in Section 6.3.
150
+
151
+ Blocks placed to the right of or below related Blocks are automatically determined to be child Blocks. A chevron icon (>) displayed between the Blocks denotes the direction of the nested relationship between the Blocks. The layout of Blocks also directly determines the layout of components in the visualization. Block A placed above Block B will draw visualization component A above visualization component B.
152
+
153
+ ![01963e73-f90e-70cf-86fd-7721b334ff9e_4_138_148_1521_1074_0.jpg](images/01963e73-f90e-70cf-86fd-7721b334ff9e_4_138_148_1521_1074_0.jpg)
154
+
155
+ Figure 4: Example configurations of Blocks
156
+
157
+ Every Block must have both a Rows and a Columns parent to determine its position in the visualization; more than two parents are not permitted. If a Rows or Columns parent is not explicit in the interface, that parent is added implicitly by the system, implied by the relationships of the defined parent. A Block that does not have a Columns parent defined in the interface uses the Columns parent of its Rows parent. Similarly, a Block that does not have a Rows parent defined in the interface uses the Rows parent of its Columns parent. Inline Blocks do not have children: if the interface defines a Block as a child of an Inline Block, it uses the Rows and Columns parents of the Inline Block. Figure 5 shows the graph implied by the interface for Figure 4d.
158
+
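+ A sketch of this parent resolution, assuming each Block records the parents that are explicit in the interface (class and helper names are ours):
+
+ ```python
+ class UiBlock:
+     def __init__(self, name, rows_parent=None, cols_parent=None):
+         self.name = name
+         self.explicit_rows_parent = rows_parent
+         self.explicit_cols_parent = cols_parent
+
+ def effective_rows_parent(block, rows_root):
+     """Resolve the Rows parent, adding the implicit link when needed."""
+     if block.explicit_rows_parent is not None:
+         return block.explicit_rows_parent
+     if block.explicit_cols_parent is not None:
+         # No explicit Rows parent: inherit the Rows parent of the
+         # Columns parent.
+         return effective_rows_parent(block.explicit_cols_parent, rows_root)
+     return rows_root  # parentless Blocks hang off the implicit Rows root
+
+ # Columns resolution is symmetric, with the two parent kinds swapped.
+ ```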
159
+ In Figure 4a, the field Class is encoded on text in Block C with the rows layout type. As there are three values in the domain of Class, three rows are created in the visualization, with a text mark for each value of the field. An Inline Block N is nested as a child Block with NumSurvived encoded on the x-axis. The system creates a bar chart for each row defined by the first Block. Since no additional dimensions are added to Block N, the measure NumSurvived is aggregated to the LOD of Class and a single bar is rendered per row.
160
+
161
+ ![01963e73-f90e-70cf-86fd-7721b334ff9e_4_938_1400_701_489_0.jpg](images/01963e73-f90e-70cf-86fd-7721b334ff9e_4_938_1400_701_489_0.jpg)
162
+
163
+ Figure 5: Implicit and explicit links for Figure 4d. Links explicitly shown in the interface are solid black arrows. The link from an Inline Block, which is treated as a link from the parent block, is shown in red. Links added implicitly are shown as dashed arrows.
164
+
165
+ Figure 4b expands the example, showing how multiple dimensions can be added to a visualization. Due to the parent-child relationship of the four Blocks, NumSurvived inherits the dimensions from the parent Blocks and aggregates at the combined LOD of Class, FamilyAboard, and Sex. In contrast, Age is encoded in Block C, which has no parent Block. Therefore, Age aggregates to the LOD of Class, the only dimension encoded on the same Block.
166
+
167
+ To specify a crosstab, the formalism requires a Block to have two parents: a Rows Block parent and a Columns Block parent. The user interface supports the specification of two parent Blocks, one directly to the left and the other directly above a Block. Figure 4c shows Block N with Block C and Block S as parent Blocks. The measure NumSurvived in Block N is aggregated to the combined LOD of Class and Sex, the dimensions from its parents, Block C and Block S.
168
+
169
+ ### 6.2 Query execution
170
+
171
+ Each Block executes a single query at the LOD of all the dimensions for the Block, including those inherited from parent Blocks. In Figure 4b, the query for Block S includes not only Sex but also FamilyAboard, inherited directly from Block F, and Class, inherited indirectly from Block C. This enables layout of the Block's marks relative to its parents and avoids making the user repeat themselves in the interface, in support of DG1. The query includes only the measures for the current Block, not those of any other Block, because measures are aggregated at a specific LOD, in support of DG3. Every query is deterministically sorted, either by a user-requested sort or by a default sort based on the order of the encoding fields within the Block.
172
+
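+ With the Pandas-based back-end mentioned above, a Block's query could reduce to a single grouped aggregation over the inherited dimensions; a sketch under that assumption (helper and column names are illustrative):
+
+ ```python
+ import pandas as pd
+
+ def run_block_query(df, full_lod_dims, measures, sort_by=None):
+     """Evaluate one Block: aggregate its measures at the full LOD
+     (all inherited dimensions), then sort deterministically."""
+     # measures maps an output column to (source column, aggregation).
+     agg_spec = {out: pd.NamedAgg(column=src, aggfunc=fn)
+                 for out, (src, fn) in measures.items()}
+     result = df.groupby(list(full_lod_dims), as_index=False).agg(**agg_spec)
+     # Default sort follows the dimension order; a user sort takes priority.
+     return result.sort_values(sort_by or list(full_lod_dims))
+
+ # e.g., Block N from Figure 4b: SUM(NumSurvived) at the inherited LOD.
+ # run_block_query(titanic, ["Class", "FamilyAboard", "Sex"],
+ #                 {"NumSurvived": ("NumSurvived", "sum")})
+ ```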
173
+ ### 6.3 Row and Column Assignment
174
+
175
+ Query execution results in multiple tables with different schemas. The system needs to assign Row and Column indexes from a single grid to tuples from all of these tables. This section describes the process for Rows; it is repeated for Columns.
176
+
177
+ 1. Produce a Block tree from the Blocks DAG by only considering links from Rows Blocks to their children, excluding any other links. The Blocks Interface ensures that this tree exists, is connected, and has a single root at the implicit Rows root Block.
178
+
179
+ 2. Produce a tuples tree by treating each tuple as a node. Its parent is the tuple from its parent Block with matching dimension values.
180
+
181
+ 3. Sort the children of each tuple, first in the order their Blocks appear as children in the Blocks tree, and then in the order of the Rows dimensions and user-specified sorts, if any, for each Block.
182
+
183
+ 4. Assign row indexes to each tuple by walking the tuple tree in depth-first order. Leaf tuples get a single row index; interior nodes record the minimum and maximum row indexes of all their leaves into the tuple.
184
+
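+ A compact sketch of steps 2-4 (the tuple tree and the depth-first index assignment; the data structures are our own stand-ins for the system's query result sets):
+
+ ```python
+ class TupleNode:
+     """One tuple from a Block's result, as a node in the tuples tree."""
+     def __init__(self, children=None):
+         self.children = children or []   # already sorted per step 3
+         self.min_row = self.max_row = None
+
+ def assign_row_indexes(node, next_row=0):
+     """Step 4: walk the tuple tree depth-first, assigning row indexes."""
+     if not node.children:
+         # Leaf tuples get a single row index.
+         node.min_row = node.max_row = next_row
+         return next_row + 1
+     for child in node.children:
+         next_row = assign_row_indexes(child, next_row)
+     # Interior tuples record the span of all their leaves.
+     node.min_row = node.children[0].min_row
+     node.max_row = node.children[-1].max_row
+     return next_row
+
+ # Three parent tuples with 2, 1, and 3 child tuples yield rows 0..5.
+ root = TupleNode([TupleNode([TupleNode(), TupleNode()]),
+                   TupleNode([TupleNode()]),
+                   TupleNode([TupleNode(), TupleNode(), TupleNode()])])
+ print(assign_row_indexes(root))  # 6 rows in total
+ ```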
185
+ ### 6.4 Output visualization
186
+
187
+ Each tuple from a Rows or Columns Block forms a single cell containing a single mark. All of the tuples from an Inline Block with the same Row and Column parent tuples form a single cell. The values of visual encoding fields that are dimensions, if any, differentiate between marks within that cell. Those marks may comprise a bar chart, scatter plot, or other visualization depending on the mark type and visual encodings of the Block. The system uses a CSS Grid [9] and the computed minimum and maximum row and column indexes to define the position of each cell. Within each cell, simple text marks are rendered using HTML; an SVG-based renderer is used for all other marks.
188
+
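+ The recorded index spans map directly onto CSS Grid placement; as a sketch (CSS Grid lines are 1-based and the end line is exclusive, hence the offsets):
+
+ ```python
+ def cell_style(min_row, max_row, min_col, max_col):
+     """Translate computed tuple indexes into CSS Grid placement."""
+     return {
+         "grid-row":    f"{min_row + 1} / {max_row + 2}",
+         "grid-column": f"{min_col + 1} / {max_col + 2}",
+     }
+
+ # A header tuple spanning rows 0-2 in column 0:
+ print(cell_style(0, 2, 0, 0))  # {'grid-row': '1 / 4', 'grid-column': '1 / 2'}
+ ```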
189
+ ## 7 COMPARATIVE STUDY OF BLOCKS WITH TABLEAU
190
+
191
+ We conducted a user study of Blocks with the goal of answering two research questions: RQ1: How do users orient and familiarize themselves with the Blocks paradigm? and RQ2: What are the differences in how users create visualizations across Tableau and Blocks? This information would provide insights as to how Blocks could be useful to users and how the paradigm could potentially integrate into a more comprehensive visual analysis system. The study had two parts: Part 1 was an exploratory warm-up exercise to observe how people would familiarize themselves with the Blocks interface in an open-ended way. Part 2 was a comparative study where participants completed an assigned visual analysis task of creating a visualization using both Tableau and Blocks. The study focused on various rich table creation tasks as they were found to be a prevalent type of visualization as described in Section 4. Comparing Blocks with Tableau would help highlight the differences in the participants' analytical workflows when performing the same task.
192
+
193
+ ### 7.1 Method
194
+
195
+ #### 7.1.1 Participants
196
+
197
+ A total of 24 volunteer participants (6 female, 18 male) took part in the studies, and none of them participated more than once. All participants were fluent in English and recruited from a visual analytics organization without any monetary incentives. The participants had a variety of job backgrounds: user researcher, sales consultant, engineering leader, data analyst, product manager, technical program manager, and marketing manager. Based on self-reporting, eight were experienced users of the Tableau product, eight had moderate experience, and eight had limited proficiency. During Part 2 of the study, each participant was randomly assigned the order in which to use Blocks and Tableau when completing their assigned task.
198
+
199
+ #### 7.1.2 Procedure and Apparatus
200
+
201
+ Two of the authors supported each session, one as the facilitator and the other as the notetaker. All study trials were conducted remotely over a shared-screen video conference to conform with social distancing protocols due to COVID-19. All sessions took approximately 50 minutes and were recorded. We began the study with the facilitator reading from an instructions script, followed by a short (under two minutes) tutorial video of the Blocks interface explaining the possible interactions. Participants were then provided a URL to the Blocks prototype, where they completed Part 1 of the study using the Superstore dataset [24]. During this part, they were instructed to think aloud and to tell us whenever the system did something unexpected. Halfway through the study session, participants transitioned to Part 2 of the study. They were provided instructions for the task to perform, with a Tableau Online [21] workbook pre-populated with the dataset and the Blocks prototype. We discussed reactions to system behavior throughout the session and concluded with a semi-structured interview. The experimenter script, task instructions, and tutorial video are included in the supplementary material.
202
+
203
+ #### 7.1.3 Tasks
204
+
205
+ There were two main parts to the study: Open-ended exploration and closed-ended tasks.
206
+
207
+ Part 1: Open-ended exploration. This task enabled us to observe how people explore and familiarize themselves with the Blocks interface. The instructions were: "Based on what you saw in the tutorial video, we would like you to explore this data in the Blocks prototype. As you work, please let us know what questions or hypotheses you're trying to answer as well as any insights you have while using the interface."
208
+
209
+ ## Part 2: Closed-ended tasks
210
+
211
+ The closed-ended tasks were intended to provide some consistent objectives for task comparison across both Tableau and Blocks systems. Participants completed one of three randomly assigned closed-ended tasks that involved the creation of a rich table as shown in Figure 6. Expected visualization result images were shown as visual guidance along with the instructions to indicate what was generally expected as part of task completion. Here are the tasks along with their corresponding instructions that were provided to the participants:
212
+
213
+ ![01963e73-f90e-70cf-86fd-7721b334ff9e_6_148_150_1497_327_0.jpg](images/01963e73-f90e-70cf-86fd-7721b334ff9e_6_148_150_1497_327_0.jpg)
214
+
215
+ Figure 6: Three study tasks. Task 1: Cross tab with bar charts, Task 2: Table with sorted dimensions, and Task 3: Table with sparklines
216
+
217
+ - Task 1: Create a crosstab with barcharts "Using the Titanic dataset [10], create a crosstab for SUM(NumberSurvived) by Sex (on Rows) and FamilyAboard (on Columns). Now, switch to show bar charts for NumberSurvived with AVG(Age) on color."
218
+
219
+ - Task 2: Create a sorted table "Using the Gapminder dataset [30], create a table that shows SUM(GDP) for each Region and Country. Now, using the table from the previous step, sort both Region and Country by SUM(GDP)."
220
+
221
+ - Task 3: Create a table with sparklines "Using the COVID-19 dataset [8], create a table that shows New Count Confirmed, Total Count Confirmed, and Total Count Deaths for each County in California. Now, given the table from the previous task, add a column with the time attribute Date to generate sparklines to show the Total Count Deaths over time."
222
+
223
+ #### 7.1.4 Analysis Approach
224
+
225
+ The primary focus of our work was a qualitative analysis of how Blocks influenced people's analytical workflows, comparing those workflows with Tableau's. We conducted a thematic analysis through open-coding of session videos, focusing on the strategies participants took. Given the remote nature of the study setup, we did not measure time to task completion. We use the notation P<ID> to refer to the study participants.
226
+
227
+ ## 8 STUDY FINDINGS
228
+
229
+ ### 8.1 RQ1: How do users orient and familiarize themselves with the Blocks paradigm?
230
+
231
+ To understand how intuitive the Blocks paradigm is for users, we first examine the strategies participants adopted for sense-making as they oriented themselves with the workings of the interface during Part 1 of the study. We observed various assumptions, expectations, and disconnections users faced as they drew from their past experiences while developing their own mental models when exploring Blocks.
232
+
233
+ #### 8.1.1 Expectations with drag-and-drop interaction
234
+
235
+ When asked to explore the Blocks interface, all participants immediately dragged attribute pills from the data pane onto the canvas, a paradigm many of them were familiar with from using Tableau and PowerBI. P4 remarked while using the Superstore dataset: "I'm going to drag Category on the canvas and let it go and I see that it created a Block showing the various category values." When subsequent attributes were dragged onto the canvas, several participants were initially uncertain what the various drop targets were and how dropping a pill onto a new Block would affect the other Blocks currently on the canvas (P15). P4: "I'm dragging out a new pill and I see these various drop targets, but do not know the difference between these." They eventually discovered that there are multiple drop targets within each Block for the various encodings, as well as drop targets above, below, and to the left and right of each Block. Participants often dragged Blocks around the canvas to change the structure of the generated visualization. Some participants (P4, P5, P6, P9, P11) wanted to modify the current Blocks on the canvas by dragging pills from one Block to another. When they realized that the interface does not currently support that functionality, they deleted the pill in one Block and dragged out the same pill onto another Block to replicate their intention.
236
+
237
+ #### 8.1.2 Understanding the concept of a Block
238
+
239
+ While the Blocks interface shares some commonalities with Tableau's interface around marks and encodings, there are differences that participants took some time to understand. In particular, the Blocks interface moves away from the shelves paradigm in Tableau. It relies on users to set encoding properties within each Block for each mark type, and the layout is defined by the relative positions of Blocks on the canvas. P4 tried to externalize her mental model of the interface, reconciling it against that of Tableau: "I'm just trying to wrap my head around this. Looks like we are not constrained here [Blocks] by the rows, columns, marks paradigm from Tableau. I created rows with Category and I kept trying to drop Sales on Rows too, and now I notice these little arrows to drop on x or y."
240
+
241
+ Participants were initially unclear what effects the x-axis and y-axis encodings had on data values within a Block. P2, for example, set the mark of the Block to 'bar' and expected SUM(Sales), which was set on the text encoding, to be displayed as a bar chart. After being guided by the experimenter to change the field from the text encoding to the x-axis encoding, the semantics of the encoding properties became clearer. Other participants thought that the way encoding properties are set in the Block had a direct relationship to what they saw in the corresponding chart that was generated. P9 said, "This [Blocks] is much more literal. If I want to affect the Profit bar, I need to literally put the color on the Profit bar. In Tableau, I think of coloring the Category by Profit."
242
+
243
+ #### 8.1.3 Direct manipulation behavior
244
+
245
+ The visual drop targets around a Block in the interface piqued participants' curiosity in exploring what would happen when they dragged out pills to these targets. P8 remarked, "I'd like to get an intuitive sense as to what happens when I drop it here [pointing below the Block] or there [pointing to the right]." Participants were able to understand the relationship between adding Blocks horizontally and the effects on the generated chart. Placing Blocks below one another took further exploration to better understand the system behavior. P11 said, "Going across seems straightforward. I'm trying to figure out what going down meant" and followed up his inquiry by adding Blocks below an existing Block using the various layout options. P19 adopted a strategy of updating a Block with all the desired encodings - "I'm building out one definition for the first column of rows and then do the next." Participants also found it useful to be able to modify the existing chart by dragging pills into new Blocks in the middle of or adjacent to other Blocks, breaking down attributes into targeted levels of detail immediately. They found the visual layout of the Blocks to directly inform the structure of the generated chart - "The LOD of what is to the right is defined by what is to the left [P2]" and "You build out the viz literally the way you think about it [P6]." For some participants, the system did not match their expectations of how a dimension would be broken down by a measure. P8 said, "I put SUM(Sales) below Category and I expected Category to be broken down by Sales, but it showed me a single aggregated bar instead."
246
+
247
+ ### 8.2 RQ2: What are the differences in how users create visualizations across Tableau and Blocks?
248
+
249
+ #### 8.2.1 Task 1: Create a crosstab with barcharts
250
+
251
+ All eight participants were able to complete the task in both Blocks and Tableau. Here, we describe the workflows for both Blocks and Tableau.
252
+
253
+ Blocks: Adding text values for NumberSurvived in the table was relatively easy for all the participants. Participants took some time to figure out how to get the headers to appear in the expected spots (P2, P6). Putting Sex to the left of the current Block helped orient participants to Block placement for generating the headers. All participants found it straightforward to then add bar charts by changing the encoding of NumberSurvived to the x-axis and adding AVG(Age) on color in the Block. P9 realized that the placement of Blocks translates literally to the placement of headers in the visualization and was able to add the headers using the provided visual as a reference.
254
+
255
+ Tableau: For participants fluent with Tableau, creating the crosstab was a quick task. Participants first built the rows and columns in the crosstab and then added a measure. This workflow conflicted with the way participants (P12) created a crosstab in Blocks, where they started by adding the measure first. P2 said, "In Tableau, the fact that the headers are inside Columns and Rows than being in some separate place like in Blocks, makes it easier to generate." P9 struggled a bit to add bar charts to the crosstab and mentioned that it was not very intuitive to place SUM(NumberSurvived) on Columns.
256
+
257
+ #### 8.2.2 Task 2: Create a sorted table
258
+
259
+ All eight participants were able to complete the task in Blocks. Two participants (P8 and P14) needed guidance to complete the task in Tableau. Here, we describe the workflows for both Blocks and Tableau.
260
+
261
+ Blocks: All the participants dragged out the pills in the order of the columns in the table (Region, Country, and GDP) with the encoding set to text. They were able to complete the task quickly and appreciated that they did not have to write a calculated field and that the LOD was computed automatically based on the relative positions of the Blocks. P11 said, "That's cool. The LOD did what I would've expected if I wasn't used to using Tableau." P3 commented, "It seems like we need new Blocks for each partition aggregation." It was not immediately intuitive for a few participants how Region and Country needed to be sorted by GDP. Eventually, when they dragged the GDP pill to the Region and Country Blocks, they noticed a sort icon appear and realized that sorting of a dimension is performed per Block.
262
+
263
+ Tableau: A prevalent technique that participants employed was using a calculated field (P3, P5, P11, P17, P20). Participants first added the Region and Country dimensions to Rows with the GDP measure added as text. They then created a calculated field for GDP per Region at the level of Region and converted it into a discrete pill in order to add it between two dimensions, Region and Country in the table. All participants took advantage of Tableau's contextual menu by right-clicking on the table's headers to sort the values in descending order.
264
+
265
+ #### 8.2.3 Task 3: Create a table with sparklines
266
+
267
+ All eight participants were able to complete the task in Blocks. One participant (P7) was unable to add sparklines to the table in Tableau. Here, we describe the workflows for both Blocks and Tableau.
+
+ Blocks: All participants dragged out County, New Count Confirmed, Total Count Confirmed, and Total Count Deaths into separate Blocks laid out horizontally. Generating a column of sparklines in the table was easy for all participants; they intuitively dragged Date onto the x-axis encoding and Total Count Deaths onto the y-axis encoding in a new Block.
268
+
269
+ Tableau: All participants created the initial table with Tableau's Measure Names ${}^{1}$ and Measure Values ${}^{2}$ fields using County as the dimension to group the data by. Adding a column containing sparklines was more challenging for all participants. P4, P10, P16, and P22 created LOD calculations for each of the three measures New Count Confirmed, Total Count Confirmed, and Total Count Deaths, making each calculated field discrete so that the values could be broken down by County. Line charts were added to the table using Total Count Deaths over Date. P13 and P19 were unsure how to add sparklines to the existing table; they used a different approach by creating a separate worksheet containing a column of line charts and placed it adjacent to the initial table in a Tableau dashboard.
270
+
271
+ ### 8.3 Discussion
272
+
273
+ General feedback from the participants was positive and suggested that Blocks is a promising paradigm for gaining more control over the layout and the LOD in the structure of the created visualization. Participants identified certain tasks that would take longer in a tool like Tableau but were easier in Blocks. P12 remarked, "This is ridiculously awesome. I'm not going to lie, but I have this horrific cross tab bookmarked to do in Tableau. I can see doing it in Blocks in a minute and a half." Participants appreciated the flexibility of being able to apply conditional formatting to various parts of a visualization and not just to the measures. P19 commented, "That's cool. I've never been able to do conditional color dimensions before." Having more control over LOD was a consistent feature that participants found useful. P6 said, "You can do all these subdivisions that are hard to do in Tableau." and "Aha! I can get sparklines so easily." P2 said, "The fact that I can put all these encodings in Blocks makes it a heck of a lot more expressive." Participants also used the canvas to create different visualizations by laying out arrangements of Blocks in space, akin to a computational notebook. The layout helped them compare arrangements with one another as they reasoned about the effects of visual arrangement on chart structure. P15 commented, "In Tableau, I am forced to create a single visualization in each worksheet and then need to assemble them together into a dashboard. In Blocks, it feels like a canvas where I can create how many ever things I want."
274
+
275
+ There were some limitations that the participants brought up with the Blocks prototype.
276
+
277
+ #### 8.3.1 Need for better defaults and previews
278
+
279
+ The flexibility that the Blocks interface affords also comes with an inherent downside: a vast set of drop-target options. P10 was overwhelmed by the choices when he initially started exploring and remarked, "There are so many arrows to choose from. It would be helpful if I can get a hint as to where I should drop my pill based on what attribute I selected." Others wanted to see chart recommendations based on the pills they were interested in, similar to Show Me ${}^{3}$ in Tableau: "Would be nice to get a simple chart like Show Me by clicking on the attributes [P4]." P6 commented, "It would be nice if Blocks could just do the right things when I drop pills onto the [Blocks] canvas."
280
+
281
+ ---
282
+
283
+ ${}^{1}$ The Measure Names field contains the names of all measures in the data, collected into a single field with discrete values.
284
+
285
+ ${}^{2}$ The Measure Values field contains all the measures in the data, collected into a single field with continuous values.
286
+
287
+ ---
288
+
289
+ Showing previews and feedback in the interface when users drag pills to the various encoding options within a Block, or when new Blocks are created, could better orient the user to the workings of the interface. P12 suggested, "It would be really cool if there are actions associated with the visual indicators of the drop targets so the interface does not feel too free form." For example, dragging Age to a Block could highlight the particular column or cell in the visualization that would be affected by that change. P5 added, "I tend to experiment around and having previews show up as I drag pills to drop targets, would be helpful." Providing reasonable defaults, such as suggesting a y-axis encoding for a pill when the Block already has an x-axis encoding, could help guide the user towards useful encoding choices.
290
+
291
+ #### 8.3.2 More control over chart customization
292
+
293
+ Participants wanted additional customization in the interface. P3 said, "It would be nice if I could center the sparklines to the text in the table. I would also like to add a dot on the maximum values in the sparklines." Showing hierarchical data in a table requires Blocks to be added for each level, which can take up significant screen real estate for large hierarchies. One suggested workaround was incorporating a Tableau UI feature to drill down into a hierarchical field within a Block (P13). The Blocks prototype also currently lacks templating actions, such as adding borders and formatting text in headers, that participants were accustomed to in Tableau (P12).
294
+
295
+ #### 8.3.3 Support for additional analytical capabilities
296
+
297
+ Participants wanted more advanced analytical capabilities, such as calculated fields, to add additional computations to the visual panes in the charts. P3 remarked, "I'd like to use a table calculation ${}^{4}$ to add a max sales values or running totals for that block." Others wanted the prototype to support additional chart types such as maps (P19, P20).
298
+
299
+ ## 9 LONGITUDINAL DIARY STUDIES
300
+
301
+ One of the limitations of the comparative studies was that participants had more experience with using Tableau than with Blocks. Our previous study focused on how Blocks were used in the short term during a single lab session. We offered an option to our study participants to take part in a two-week diary study. The goal of the diary study was to better understand users' behavioral patterns over a longer period of time and how they would use Blocks in their own exploratory analyses. In total, eight participants (seven male and one female) took part in the study where they documented their experiences using the Blocks prototype in Google Docs, spending at least 20 minutes a day for two weeks. Similar to the analysis approach in the previous user study, we conducted a thematic analysis through open-coding of the diary notes. The actual diaries are included as part of supplementary material.
302
+
303
+ ### 9.1 Diary Study Observations
304
+
305
+ Participants appreciated the ease of creating more complex rich tables. P3 found that this task was easier to do in Blocks than in Tableau: "Now I want to add more measures in this small multiples, which is super hard when you want to do this with >2 measures in Tableau. With Blocks I can easily add as many as I want within the partition I'm interested in." P20 commented, "There is something to be said for how easy this type of thing is. Multi sparklines alongside totals shown in multiple perspectives." The extended period of time with the prototype also helped participants reflect upon their understanding of how Blocks worked. P9 summarized, "It seems like the mental model in Blocks is 'Which number are you interested in?' You start with that, then you start breaking it down dimensionally to the left/right/top/bottom. In Tableau, I go to the dimensions first and then drop in my measure later. Both of these make sense, but I would like to get to a point where I can use my old mental model (dimensions first, then measures) and still be successful in Blocks. Sometimes I know my dimensionality first - voting by age/gender/precinct - I want to drop that in and then look at the measures."
306
+
307
+ There were also aspects of the prototype that limited participants' exploratory analyses. Suggesting smart defaults in the Blocks interface continued to be a theme in the participants' feedback. P1 documented, "It would be helpful if Blocks can guide me towards building useful views. For example, I'm using the Superstore data source, and when I drag out Category and Profit, it would be useful to suggest the x-axis, showing horizontal bar charts that combine the headers and the bars nicely." P3 had a suggestion about better encoding defaults: "I first dropped a measure to create a block, I got a text mark type by default. But it would have been nice to pick up Circle or something similar to make the size encoding meaningful."
308
+
309
+ Some participants wanted interaction behaviors from Tableau in the prototype, such as double-clicking to get a default chart similar to Show Me. P2 said, "I wanted to double-click to start adding fields instead of drag and drop. Especially for the first field when I'm just exploring the data. I'd also like to be able to scroll the chart area independently of the Blocks." Participants (P2, P18, P20) tried to create other chart types, such as stacked bar charts, treemaps, and Sankey charts, that Blocks did not support at the time of the study.
310
+
311
+ ## 10 BEYOND TABLES: OTHER USE CASES & FUTURE WORK
312
+
313
+ In this paper, we demonstrate how the Blocks formalism can be used to create complex rich tables. Blocks can be extended to support other visualizations such as treemaps, bubble charts and non-rectangular charts with additional layout algorithms. Blocks does not currently support layering or juxtaposed views that are prevalent in composite visualizations. Future work could explore how to support the creation of these visualizations in the Blocks interface. The ability to define rich tables at multiple LODs could be applied to support other visualization types such as Sankey diagrams and composite maps.
314
+
315
+ Sankey diagrams are a common type of chart created in Tableau, but their creation is a multi-step process involving partitioning the view, densifying the data, indexing the values across different visual dimensions, and several table calculations [6]. With the Blocks system, an $n$-level Sankey diagram could be built with $2n - 1$ Blocks, as shown in Figure 7: the Row Blocks represent the nodes of the Sankey for the Region, Category, and Segment attributes, while the Link Blocks represent the connections between levels. The Link Blocks inherit their LOD from the neighboring Blocks and render the curves between pairs of marks. The links are encoded by color and size based on SUM(Sales).
316
+
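+ A sketch of how those $2n - 1$ Blocks might be enumerated (the mark types and encodings here are our reading of Figure 7, not a verified specification):
+
+ ```python
+ def sankey_blocks(level_fields, measure=("SUM", "Sales")):
+     """n node Blocks interleaved with n - 1 link Blocks: 2n - 1 total."""
+     blocks = []
+     for i, fld in enumerate(level_fields):
+         blocks.append({"block-name": f"node-{fld}", "layout-type": "rows",
+                        "mark-type": "bar",
+                        "encoding": [(("text",), (None, fld))]})
+         if i + 1 < len(level_fields):
+             # Link Blocks inherit their LOD from both neighboring node
+             # Blocks and encode the flow by color and size.
+             blocks.append({"block-name": f"link-{i}", "layout-type": "inline",
+                            "mark-type": "line",
+                            "encoding": [(("color", "size"), measure)]})
+     return blocks
+
+ assert len(sankey_blocks(["Region", "Category", "Segment"])) == 5  # 2*3 - 1
+ ```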
317
+ The composite map visualization in Figure 8 shows State polygons as parent Blocks and nested sparkline charts containing Sales by Order Date. The visualization is constructed using an Inline Block for the map with the sparkline Block as its child.
318
+
319
+ While Blocks employs direct manipulation for supporting the creation of expressive charts, there is an opportunity to add scaffolds through thoughtful defaults and previews to better support users and their mental models when learning the workings of the new interface. We would like to explore how visual interaction during chart generation can be better supported by bridging the user's intentions with the facilities afforded by the interface. The Blocks interface shows promise in supporting analytical workflows that are currently challenging to perform in Tableau, but additional analytical capabilities such as new chart types, support for reference lines, and better formatting options need to be incorporated for it to be truly useful. Exploring the balance between providing comprehensive analytical capabilities and reducing friction in accomplishing users' goals is an important research direction to pursue.
320
+
321
+ ---
322
+
323
+ ${}^{3}$ Show Me creates a view based on the fields in the view and any fields you've selected in the data pane.
324
+
325
+ ${}^{4}$ A type of calculated field in Tableau that computes based on what is currently in the visualization and does not consider any measures or dimensions that are filtered out of the visualization.
326
+
327
+ ---
328
+
329
+ ![01963e73-f90e-70cf-86fd-7721b334ff9e_9_153_152_715_409_0.jpg](images/01963e73-f90e-70cf-86fd-7721b334ff9e_9_153_152_715_409_0.jpg)
330
+
331
+ Figure 7: A two-level Sankey Diagram
332
+
333
+ ![01963e73-f90e-70cf-86fd-7721b334ff9e_9_220_638_580_575_0.jpg](images/01963e73-f90e-70cf-86fd-7721b334ff9e_9_220_638_580_575_0.jpg)
334
+
335
+ Figure 8: Map with nested sparkline charts
336
+
337
+ We evaluated Blocks with users who had varied degrees of familiarity with Tableau. The study findings indicate that their mental models when exploring the Blocks interface were influenced in part by their prior experience with the Tableau interface. While Blocks and Tableau share some common paradigms, they do have differences. As we continue to evolve Blocks, we would like to further evaluate how expectations and actual system behavior interact for users who have no experience using Tableau, compared to their counterparts who use Tableau frequently. Understanding how users create new mental models or update existing ones would help inform ways to support effective onboarding to the Blocks paradigm.
338
+
339
+ ## 11 CONCLUSION
340
+
341
+ We present Blocks, a new formalism that builds upon VizQL by supporting the handling of nesting relationships between attributes through direct manipulation. By treating each component of the visualization as an analytical entity, users can set different LOD and encoding properties through drag-and-drop interactions in the Blocks interface. An evaluation of the Blocks interface comparing users' analytical workflows with Tableau indicates that Blocks is a useful paradigm for supporting the creation of rich tables with embedded charts. We further demonstrate how Blocks generalizes to express more complex nested visualizations. Future research directions will explore additional analytical and interaction capabilities in the system, along with useful scaffolds for supporting users during visual analysis. We hope that insights from our work can identify interesting research directions to help strike a balance between expressivity, ease of use, and analytical richness in visual analysis tools.
342
+
343
+ ## REFERENCES
344
+
345
+ [1] Conditional Formatting v4. https://public.tableau.com/profile/jonathan.drummey#!/vizhome/conditionalformattingv4/Introduction, 2012.
346
+
347
+ [2] KPIs and Floating Dashboards. http://drawingwithnumbers.artisart.org/kpis-and-floating-dashboards/#more-1041, 2013.
348
+
349
+ [3] Parallel Coordinates via Pivot and LOD Expressions. https://public.tableau.com/profile/jonathan.drummey#!/vizhome/parallelcoordinatesviapivotandLODexpressions/dashboard, 2016.
350
+
351
+ [4] Simple Slope for Slope Graph. https://public.tableau.com/profile/jonathan.drummey#!/vizhome/simpleslopeforslopegraph/slope, 2016.
352
+
353
+ [5] Coloring Column Headers. https://public.tableau.com/profile/jonathan.drummey#!/vizhome/coloringcolumnheaders/CustomGT, 2017.
354
+
355
+ [6] How to build a Sankey diagram in Tableau without any data prep beforehand. https://www.theinformationlab.co.uk/2018/03/09/build-sankey-diagram-tableau-without-data-prep-beforehand, 2018.
356
+
357
+ [7] Trellis Charts and Color Highlighting. https://vizzendata.com/2019/04/25/trellis-charts-and-color-highlighting/, 2019.
358
+
359
+ [8] Covid-19 Dataset, 2020. CC-BY Dataset: https://covid19.ca.gov.
360
+
361
+ [9] CSS Grid Layout Module Level 1. https://www.w3.org/TR/2020/CRD-css-grid-1-20201218/, 2020.
362
+
363
+ [10] Encyclopedia Titanica, 2020. CC-BY Dataset: https://www.encyclopedia-titanica.org.
364
+
365
+ [11] How to Create a Population Pyramid Chart in Tableau. https://www.rigordatasolutions.com/post/how-to-create-a-population-pyramid-chart-in-tableau, 2020.
366
+
367
+ [12] How to make Trellis, Tile, Small Multiple Maps in Tableau. https://playfairdata.com/how-to-make-trellis-tile-small-multiple-maps-in-tableau/, 2020.
370
+
371
+ [13] Trellis Charts in Tableau. https://tessellationtech.io/trellis-chart/, 2020.
372
+
373
+ [14] Create Level of Detail Expressions in Tableau. https://help.tableau.com/current/pro/desktop/en-us/calculations_calculatedfields_lod_overview.htm, 2021.
374
+
375
+ [15] Microsoft Q&A. https://powerbi.microsoft.com/en-us/ documentation/powerbi-service-q-and-a/,2021.
376
+
377
+ [16] Pandas. https://pandas.pydata.org, 2021.
378
+
379
+ [17] React: A JavaScript library for building user interfaces. https:// reactjs.org/, 2021.
380
+
381
+ [18] Tableau Community Forum. https://community.tableau.com, 2021.
382
+
383
+ [19] The Tableau Data Model. https://help.tableau.com/current/ pro/desktop/en-us/datasource_datamodel.htm, 2021.
384
+
385
+ [20] Tableau Hack: How to conditionally format individual rows or columns. https://evolytics.com/blog/ tableau-hack-conditionally-format-individual-rows-columns/, 2021.
386
+
387
+ [21] Tableau Online, 2021. https://online.tableau.com.
388
+
389
+ [22] Tableau Server Data Sources. https://help.tableau.com/ current/server/en-us/datasource.htm, 2021.
390
+
391
+ [23] Tableau Software. https://tableau.com, 2021.
392
+
393
+ [24] Tableau Superstore, 2021. CC-BY Dataset: https://help.tableau.com/current/guides/get-started-tutorial/ en-us/get-started-tutorial-connect.htm.
394
+
395
+ [25] Typescript. https://www.typescriptlang.org/, 2021.
396
+
397
+ [26] C. Ahlberg. Spotfire: An information exploration environment. SIG-MOD Rec., 25(4):25-29, Dec. 1996. doi: 10.1145/245882.245893
398
+
399
+ [27] J. Bertin. Semiology of Graphics: Diagrams, Networks, Maps. Esri Press, Redlands, 2011.
400
+
401
+ [28] A. Bigelow, S. Drucker, D. Fisher, and M. Meyer. Iterating between tools to create and edit visualizations. IEEE Transactions on Visualization and Computer Graphics, 23(1):481-490, 2017. doi: 10.1109/ TVCG.2016.2598609
402
+
403
+ [29] M. Bostock. D3.js - data-driven documents. 2012. http://d3js.org/.
404
+
405
+ [30] Gapminder. World development indicators, 2020. CC-BY Dataset: https://gapminder.org/data.
406
+
407
+ [31] S. Gratzl, N. Gehlenborg, A. Lex, H. Pfister, and M. Streit. Domino: Extracting, comparing, and manipulating subsets across multiple tabular datasets. IEEE Transactions on Visualization and Computer Graphics (InfoVis), 20(12):2023-2032, 2014. doi: 10.1109/TVCG.2014. 2346260
408
+
409
+ [32] J. Harper and M. Agrawala. Converting basic d3 charts into reusable style templates. IEEE Transactions on Visualization and Computer Graphics, PP, 09 2016. doi: 10.1109/TVCG.2017.2659744
410
+
411
+ [33] N. W. Kim, E. Schweickart, Z. Liu, M. Dontcheva, W. Li, J. Popovic, and H. Pfister. Data-driven guides: Supporting expressive design for information graphics. IEEE Transactions on Visualization and Computer Graphics, 23(1):491-500, 2017. doi: 10.1109/TVCG.2016. 2598620
412
+
413
+ [34] Z. Liu, J. Thompson, A. Wilson, M. Dontcheva, J. Delorey, S. Grigg, B. Kerr, and J. Stasko. Data illustrator: Augmenting vector design tools with lazy data binding for expressive visualization authoring. p. 1-13, 2018.
414
+
415
+ [35] A. M. McNutt and R. Chugh. Integrated visualization editing via parameterized declarative templates. ArXiv, abs/2101.07902, 2021.
416
+
417
+ [36] D. Ren, B. Lee, and M. Brehmer. Charticulator: Interactive construction of bespoke chart layouts. IEEE Transactions on Visualization and Computer Graphics, 25(1):789-799, Jan. 2019. doi: 10.1109/TVCG. 2018.2865158
418
+
419
+ [37] B. Saket, L. Jiang, C. Perin, and A. Endert. Liger: Combining interaction paradigms for visual analysis, 2019.
420
+
421
+ [38] A. Satyanarayan, D. Moritz, K. Wongsuphasawat, and J. Heer. Vega-Lite: A Grammar of Interactive Graphics. IEEE Transactions on Visualization and Computer Graphics, 23(1):341-350, Jan. 2017. doi: 10.1109/TVCG.2016.2599030
422
+
423
+ [39] A. Satyanarayan, R. Russell, J. Hoffswell, and J. Heer. Reactive vega: A streaming dataflow architecture for declarative interactive visualization. IEEE Trans. Visualization & Comp. Graphics (Proc. InfoVis), 2016.
424
+
425
+ [40] C. Stolte, D. Tang, and P. Hanrahan. Polaris: A system for query, analysis, and visualization of multidimensional relational databases. IEEE Transactions on Visualization and Computer Graphics, 8(1):52-65, Jan. 2002. doi: 10.1109/2945.981851
426
+
427
+ [41] C. Wang, Y. Feng, R. Bodik, A. Cheung, and I. Dillig. Visualization by example. Proc. ACM Program. Lang., 4(POPL), Dec. 2019. doi: 10 .1145/3371117
428
+
429
+ [42] H. Wickham. ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York, 2016.
430
+
431
+ [43] L. Wilkinson. The Grammar of Graphics (Statistics and Computing). Springer-Verlag, Berlin, Heidelberg, 2005.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/eK2ZbaaJvd/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,329 @@
Blocks: Creating Rich Tables with Drag-and-Drop Interaction

Category: Research

Figure 1: A rich table showing data about Airbnb listings in Seattle, created with Blocks. The table combines a variety of mark types and measures at several levels of detail into a single visualization. Each column of the table is defined by a Block with its own set of encoding and field mappings. The columns, from left to right, show: rows for each neighbourhood group, sorted by average listing price; a labeled bar chart showing average price, colored by availability; rows for each neighbourhood within each neighbourhood group; the same labeled bar chart, but showing average price for each neighbourhood; and a sparkline showing average price over time.
§ ABSTRACT

We present Blocks, a formalism for building visualizations by specifying layout, data relationships, and level of detail (LOD) for specific portions of the visualization. Users create and manipulate Blocks on a canvas through drag-and-drop interaction, controlling the LOD of the data attributes for tabular-style visualizations. We conducted a user study comparing how 24 participants employed Blocks and Tableau in their analytical workflows to complete a target visualization task. We also ran a subsequent longitudinal diary study with eight participants to better understand both the usability and utility of Blocks in their own analytical inquiries. Findings from the studies suggest that Blocks is a useful mechanism for creating visualizations with embedded microcharts, conditional formatting, and custom layouts. We finally describe how the Blocks formalism can be extended to express additional composite visualizations and Sankey charts, along with implications for designing visual analysis interfaces that support creating more complex charts through drag-and-drop interaction.

Keywords: Formalism, level of detail, nesting, layout, conditional formatting, rich tables, drag-and-drop interaction.
§ 1 INTRODUCTION

Visual analysis tools [15, 23] help support the user in data exploration and iterative view refinement. Some of these tools are more expressive, giving expert users more control, while others are easier to learn and make creating visualizations faster. These tools are often driven by underlying grammars of graphics [27, 43] that provide formalisms to concisely describe the components of a visualization. High-level formalisms such as VizQL [40] and ggplot2 [42] support partial specifications of the visualization and hence provide the convenience of concise representations. Reasonable defaults are subsequently applied to infer missing information and generate a valid graphic. The downside of these concise representations is that expressive visualization generation in these tools is either limited or difficult for a user to learn.

Drag-and-drop is one paradigm for addressing these limits on expressivity: tasks are expressed through user interaction, where the visibility of the object of interest replaces complex language syntax. VizQL is one such formalism, supporting chart creation through direct manipulation in Tableau [23]. While the language enables users to create charts through its underlying compositional algebra, there is still tight coupling between the query, the visualization structure, and the layout. As a result, users often spend significant time generating complex visualizations when they have a specific structure and layout in mind. The other paradigm for promoting expressiveness in chart creation is the use of declarative specification grammars [29, 38, 39] that can programmatically express the developer's intentions.

Despite the prevalence of these tools, creating expressive data visualizations remains challenging. Beyond having good insight into how the data can best be visualized, users need sufficient knowledge to generate these visualizations. So, how can we support users in their analytical workflows by enabling a greater degree of flexibility and control over nesting relationships, layout, and encodings, while providing the intuitiveness of a user interface? In this paper, we address this dichotomy between expressiveness and ease of use by extending VizQL to provide greater flexibility in creating expressive charts through direct manipulation.
§ 1.1 CONTRIBUTIONS

Specifically, our contributions are as follows:

* We introduce Blocks, a formalism that builds upon VizQL by supporting nested relationships between attributes in a visualization using drag-and-drop interaction. Every component of the visualization is an analytical entity to which different nesting and encoding properties can be applied.

* We implement a Blocks system that gives users increased flexibility with layout and formatting options through direct manipulation of Block objects in the interface.

* We evaluated Blocks with 24 participants performing tasks that involved creating rich tables in both Tableau and Blocks. Eight of these users recorded their explorations with Blocks in their own workflows during an additional two-week diary study. Findings from the studies indicate that Blocks is a promising paradigm for creating complex charts. We identify research directions for better supporting users' mental models when using the system.

Figure 1 shows how a user can create a rich table using Blocks with a Seattle Airbnb dataset. The assembly of Blocks in the interface results in columns with different mark types such as bar charts and sparklines. The query for each Block inherits the dimensions of its parent Blocks. The first price column inherits the field neighbourhood_group as its dimension, computing price for each neighbourhood group. The second price column inherits both neighbourhood_group and the field neighbourhood, showing price at the more granular level of each neighbourhood.
§ 2 RELATED WORK

Visual analysis techniques can be broadly classified into two main categories: (1) declarative specification grammars that provide high-level language abstractions and (2) visual analysis interfaces that facilitate chart generation through interaction modalities.

§ 2.1 DECLARATIVE SPECIFICATION GRAMMARS

Declarative visualization languages address the problem of expressiveness by allowing developers to concisely express how they would like to render a visualization. Vega [39] and Vega-Lite [38] support authoring interactivity in visualizations. While these specification languages provide a great degree of flexibility in how charts can be programmatically generated, they provide limited support for displaying different levels of granularity of a field within a visualization. Further, they require programming experience, making it challenging for non-developers to quickly develop advanced charts in their flow of analysis. Viser [41] addresses this gap by automatically synthesizing visualization scripts from simple visual sketches provided by the user. Specifically, given an input data set and a visual sketch that demonstrates how to visualize a very small subset of this data, their technique automatically generates a program that can visualize the entire data set. Ivy [35] proposes parameterized declarative templates, an abstraction mechanism over JSON-based visualization grammars. A related effort by Harper and Agrawala [32] converts a limited subset of D3 charts into reusable Vega-Lite templates. While our work is similar to declarative grammars and template specifications in abstracting low-level implementation details from the user, we focus on supporting non-developer analysts in creating expressive charts through drag-and-drop interaction. We specifically extend the VizQL formalism to support nested queries, layout, and encoding flexibility through drag-and-drop interaction in the Blocks interface.

§ 2.2 VISUAL ANALYSIS INTERFACES

Visual analysis tools have, over the years, developed ways to help novice users get started in a UI context. The most basic form of chart generation in these tools is the chart picker, which is prevalent in various visual analysis systems [26]. Commercial visual analysis tools such as Tableau and PowerBI, along with systems like Charticulator [36], are built on visualization frameworks that let users map fields to visual attributes using drag-and-drop interaction. As more analytical capabilities are enabled in these tools, there is a disconnect from the underlying abstraction, leading to calculation editors and dialog menus that add both complexity and friction to the analytical workflow.

Prior work has explored combinations of interaction modalities for creating visualizations. Liger [37] combines shelf-based chart specification and visualization by demonstration. Hanpuku [28], Data-Driven Guides [33], and Data Illustrator [34] combine visual editor-style manipulation with chart specification. However, none of these systems specifically focus on a visually expressive way of handling nested relationships during chart generation, a common and important aspect of analytical workflows. Our work addresses this gap, supporting analysts in a visual analysis interface that creates more expressive charts with nestings using drag-and-drop as the interaction paradigm.

Domino [31] is a system in which users can arrange and manipulate subsets, visualize data, and explicitly represent the relationships between these subsets. Our work is similar in concept, employing direct manipulation to visually build relationships in charts, but there are differences. Domino has limited nesting and inheritance capabilities, as it does not define parent-child relationships between blocks to support dependent relationships (e.g., a column depending on rows). Its expressiveness for complex visualizations, such as rich tables with repeated cells containing sparklines, text, and shapes, is limited.
§ 3 TABLEAU USER EXPERIENCE

The core user experience of Tableau is placing Pills (data fields) onto Shelves (specific drop targets in the interface). This controls both the data used and the structure, along with the layout of the final visualization. Fields without an aggregation are called Dimensions. Measures are fields that are aggregated within groups defined by the set of all dimensions, i.e., the Level of Detail (LOD).

The key shelves are the Rows Shelf, the Columns Shelf, and the visual encoding shelves that are grouped into the Marks Card. Fields on the Rows and Columns Shelves define "headers" if discrete or "axes" if continuous. The Marks Card specifies a mark type and visual encodings such as size, shape, and color. If there is more than one Marks Card, the group of visualizations defined by the Marks Cards forms the innermost part of the chart structure, repeated across the grid defined by the Rows and Columns Shelves.

The Blocks system attempts to address three limitations inherent to the Tableau experience:

* The separation between the "headers" and "marks" concepts. The headers define the layout of the visualization and cannot be visually encoded. Only fields on the Marks Card participate in creating marks, but the marks must be arranged within the grid formed by the headers. For example, it is not possible to have a hierarchical table where the top level of the hierarchy is denoted by a symbol rather than text.

* The Rows and Columns Shelves are global. As per their names, a field on the Rows Shelf defines a horizontal band, and a field on the Columns Shelf a vertical band, across the entire visualization. For example, it is not possible to place a y-axis next to a simple text value, as one does for sparklines.

* Queries are always defined using both the Rows and Columns Shelves, along with the Marks Card. For example, it is not possible to get the value of a measure at an LOD of only the dimensions from the Rows Shelf, without those on the Columns Shelf.

Users have found ways to work around these limitations to build complex visualizations such as rich tables with sparklines or visualizations with encodings at different LOD. These methods include composing multiple visualizations on a dashboard so they appear as one [2]; writing complex calculations to control the layout or formatting of elements [3-5, 7, 11-13]; and creating axes with only a single value [1, 20], among others. Tableau introduced LOD expressions to help answer questions involving multiple levels of granularity in a single visualization [14]. The concept of LOD expressions sits outside of Tableau's core UI paradigm of direct manipulation: users need to define LOD calculated fields via a calculation editor and understand the syntax structure of Tableau formulae.
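
As a concrete illustration of that calculation-editor syntax, a FIXED LOD expression pins a measure's aggregation to an explicit set of dimensions; the field names below are illustrative, borrowed from the Gapminder task used later in our study:

```
// Tableau calculated field (illustrative): SUM(GDP) at the Region level,
// regardless of the other dimensions present in the visualization.
{ FIXED [Region] : SUM([GDP]) }
```
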
§ 4 DESIGN GOALS

To better understand the limitations of Tableau for creating more expressive visualizations, we interviewed 19 customers, analyzed 7 internal dashboards, and reviewed 10 discussions on the Tableau Community Forums [18] that used various workarounds to accomplish analytical needs. Each customer interview had one facilitator and one notetaker. The customers we interviewed came from medium- and large-sized companies that employ Tableau in their work. Each interview was an hour-long discussion in which we probed the customers to better understand their use cases. We conducted a thematic analysis through open-coding of interview notes and of the Tableau workbooks the customers created and maintained. Finally, we reviewed the top ideas in the Tableau Community Forums to locate needs for more expressive visualizations. These ideas included extensive discussions among customers, which helped us better understand the use cases as well as the ways customers work around limitations today. We reviewed our findings, summarized what we learned, and identified common patterns from our research. This analysis is codified into the following design goals:

§ DG1. SUPPORT DRAG-AND-DROP INTERACTION

Tableau employs a drag-and-drop interface to support visual analysis exploration. We learned through discussions with an internal analyst how important table visualizations were for her initial exploration of her data. Her first analytic step was to view her data in a table at multiple LOD and confirm that the numbers matched her expectations based on domain knowledge. We also noticed that many customers used tables to check the accuracy of their calculations throughout their analysis. These discussions indicated that tables are not just an end goal of analysis, but play a key part in the exploratory drag-and-drop process. Our goal is to maintain the ease of use provided by the drag-and-drop interface and the data-driven flow when creating visualizations.

§ DG2. BETTER CONTROL OVER VISUALIZATION COMPONENTS AND LAYOUT

Tableau employs defaults to help users manage the large space of possibilities that a compositional language creates [40]. When users have specific ideas of what they want to create, their workflows often conflict with the system defaults. A customer at a large apparel company described the challenges they ran into when replicating an existing report in Tableau. In order to match all of the desired formatting and layout, they had to delicately align multiple sheets together on a single dashboard. Not only did the customer find this frustrating to maintain, they often ran into issues with alignment and responsive layout. Our goal is to support users with increased layout flexibility as they generate charts for their analytical needs.

§ DG3. AGGREGATE AND ENCODE AT ANY LOD IN A VISUALIZATION

As users strive to build richer visualizations, they need more control over showing information at multiple LOD. While Tableau supports calculations to control the LOD a measure aggregates to, creating these calculations does not provide the ability to visually encode at any LOD, and takes users out of their analytic workflow. For example, one customer at a large technology company had a table visualization that listed projects and the teams who worked on each project. Some of the measures needed to show information at the project level (such as total cost), while other measures were at the team level (amount of effort required per team). Building this visualization in Tableau required the customer to write many LOD calculations. Our goal is to provide the ability to use visual encodings and a drag-and-drop experience to evaluate measures at any LOD from any component of the visualization.
§ 5 THE BLOCKS FORMALISM

The Blocks formalism uses an arbitrary number of connected local expressions (i.e., Blocks) instead of global Rows and Columns expressions. Each Block represents a single query of a data source at a single LOD, resulting in a component of the final visualization. Parent-child relationships between the Blocks form a directed acyclic graph (DAG).

A block-name is a unique identifier for the Block. The valid values of field-name and aggregation depend on the fields in the data source and the aggregation functions supported by that data source for each field. Any field-instance with an aggregation is used as a measure; all others are used as dimensions.

The local LOD of the Block is the set of all dimensions used by any encoding within the Block. The full LOD of the Block is the union of its local LOD and the local LODs of all of its ancestors. All of the measures used by the Block are evaluated at the full LOD of the Block. In addition to defining the LOD, the encodings map the query results to visual and spatial encodings. Except for sort-asc, sort-desc, and detail, each encoding-type must occur at most once within each Block. The sort encodings control the order of the query result and ultimately the rendering order; their priority is determined by the order in which they appear. By providing a means to encode the x-axis and y-axis at the level of individual visualization components instead of as part of a global table expression as in Tableau, Blocks addresses DG3 with respect to sparklines and other microcharts within a table visualization. The full grammar is as follows (in the interface, each encoding-type is represented by an icon):
block := (block-name, layout-type, mark-type, encoding, children)

children := { child-group }

child-group := { block-name }

layout-type := "rows" | "columns" | "inline"

mark-type := "text" | "shape" | "circle" | "line" | "bar"

encoding := ({ encoding-type }, field-instance)

encoding-type := "color" | "size" | "shape" | "text" | "x-axis" | "y-axis" | "sort-asc" | "sort-desc" | "detail"

field-instance := ([aggregation], field-name)
Each Block renders one mark of its mark-type per tuple in its query result. The layout-type determines how the Block's rendered marks are laid out in space. A Block with the rows layout type creates a row for each value in its domain, with each row containing a single mark. A common example: a Block with the rows layout type and text mark type generates a row displaying a text string for each value in the Block's domain. A Block with the columns layout type creates a column for each value, with each column containing a single mark. To facilitate the creation of scatter plots, line graphs, area charts, and maps, a Block with the inline layout type renders all of its marks in a single shared space.
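
The grammar above maps naturally onto a small set of data types. The following TypeScript sketch is illustrative only (the type and field names are ours, not the system's actual API); it encodes the grammar and expresses the two-Block bar chart of Figure 4a as data:

```typescript
// A minimal sketch of the Blocks grammar; names are illustrative.
type LayoutType = "rows" | "columns" | "inline";
type MarkType = "text" | "shape" | "circle" | "line" | "bar";
type EncodingType =
  | "color" | "size" | "shape" | "text" | "x-axis"
  | "y-axis" | "sort-asc" | "sort-desc" | "detail";

interface FieldInstance {
  fieldName: string;
  aggregation?: "sum" | "avg" | "min" | "max" | "count"; // present => measure, absent => dimension
}

interface Encoding {
  encodingType: EncodingType;
  field: FieldInstance;
}

interface Block {
  blockName: string;     // unique identifier
  layoutType: LayoutType;
  markType: MarkType;
  encodings: Encoding[];
  children: string[][];  // child-groups of block-names
}

// Figure 4a as data: rows of Class headers, each with a child bar chart
// showing SUM(NumSurvived) at the LOD of Class.
const blockC: Block = {
  blockName: "C",
  layoutType: "rows",
  markType: "text",
  encodings: [{ encodingType: "text", field: { fieldName: "Class" } }],
  children: [["N"]],
};

const blockN: Block = {
  blockName: "N",
  layoutType: "inline",
  markType: "bar",
  encodings: [
    { encodingType: "x-axis",
      field: { fieldName: "NumSurvived", aggregation: "sum" } },
  ],
  children: [],
};
```
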
Figure 2: Blocks system overview. Users create Block GUI Cards that can define multiple field encodings at a single LOD. The Block GUI Card is translated into a Block specification. This specification consists of some number of dimensions, some number of measures aggregated to the LOD of the cross product of the dimensions, a layout, the visual encodings, a mark type, some number of filters, and a sort order. From the Block specification, a Block query is issued to the source data source. The output of a Block query is a Block result set, which returns the tuples and corresponding encoding results. This is finally rendered as an output visualization.

Child Blocks are laid out in relation to their parents' positioning. A child-group is a set of children that share the same row (for a rows parent) or column (for a columns parent). For example, in Figure 4d, the children of Block R are ((Block B, Block C), Block G); B and C are on the same row and so form a child-group. To ensure the layout can be calculated, the DAG must simplify to a single tree when considering only the children of rows Blocks or only the children of columns Blocks. This layout system enables Blocks to address DG2 by defining labels, axes, and marks all using the single Block concept. Figure 4a shows how Blocks can be expressed with the formalism.
§ 6 THE BLOCKS SYSTEM

The Blocks system provides an interface for creating Blocks and viewing the resulting visualizations. Figure 2 illustrates the architecture. The Blocks Interface (1) and Output Visualization (4) are React-based [17] TypeScript [25] modules that run in a web browser. The interface communicates over HTTPS with a Python back-end that implements the Query Execution (2) and Row and Column Assignment (3) processes. The system can use either of two query execution systems: a simple one built on Pandas [16] and local text files, or a connection to a Tableau Server Data Source [22], which provides access to Tableau's rich data model [19]. The back-end returns the visual data needed by the front end for rendering the output visualization.

§ 6.1 BLOCKS INTERFACE

The Blocks interface provides a visual, drag-and-drop technique to encode fields, consistent with DG1. As in Tableau, pills represent fields, and a schema pane contains the list of fields from the connected data source. Instead of an interface with a fixed number of shelves, the Blocks interface provides a canvas that supports an arbitrary number of Blocks. Dragging a pill out to a blank spot on the canvas creates a new Block, defaulting the Block's encoding, mark type, and layout type based on the metadata of the field that the pill represents.
Figure 3: Possible drop targets are shown to the user just-in-time as they drag pills onto the Blocks canvas.

For example, dragging out a pill that represents a discrete string field P will create a Block with the rows layout type, the text mark type, and field P encoded on text. The layout type and mark type are displayed at the top of the Block. Encodings are displayed as a list inside the Block. Additional pills can be dragged to blank space on the canvas to create a new, unrelated Block, added as an additional encoding on an existing Block, or dropped adjacent to an existing Block to create a new related Block.
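
A sketch of this defaulting rule, reusing the illustrative Block type from the sketch in Section 5; the discrete-field case follows the text above, while the measure case is our assumption for illustration (a text mark showing the aggregate, as participants later observed in the diary study):

```typescript
// Sketch of Block defaulting when a pill is dropped on blank canvas.
interface Pill {
  fieldName: string;
  isDiscrete: boolean;
}

function defaultBlockFor(pill: Pill): Block {
  if (pill.isDiscrete) {
    return {
      blockName: pill.fieldName,
      layoutType: "rows",  // one row per domain value
      markType: "text",
      encodings: [{ encodingType: "text",
                    field: { fieldName: pill.fieldName } }],
      children: [],
    };
  }
  return {
    blockName: pill.fieldName,
    layoutType: "inline",  // assumed default for a measure
    markType: "text",      // text mark showing the aggregated value
    encodings: [{ encodingType: "text",
                  field: { fieldName: pill.fieldName, aggregation: "sum" } }],
    children: [],
  };
}
```
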
As seen in Figure 3, when a pill is dragged over an existing Block, drop targets appear representing any unused encodings that the system provides for that Block. When a pill is dragged over an area adjacent to an existing Block, drop targets appear to assist in creating a new related Block. If the pill being dragged represents a dimension field, the system provides options to create a new Block with either the rows layout type or the columns layout type; the dimension field of the pill is encoded on text by default. If the pill being dragged represents a measure field, the system provides the option to encode the measure on the x-axis, y-axis, or text of a Block that defaults to the inline layout type. Once the new, related Block is created, the layout type, mark type, and encoding can all be customized.

There are two implicitly created root Blocks that are invisible in the interface: a Rows root and a Columns root. Any Block that has no parents is the child of both the Rows root Block and the Columns root Block. These root Blocks are used as the starting point for calculating row and column indexes, as described in Section 6.3.

Blocks placed to the right of or below related Blocks are automatically determined to be child Blocks. A chevron icon (>) displayed between the Blocks denotes the direction of the nested relationship. The layout of Blocks also directly determines the layout of components in the visualization: Block A placed above Block B will draw visualization component A above visualization component B.

Figure 4: Example configurations of Blocks.

Every Block must have both a Rows and a Columns parent to determine its position in the visualization; more than two parents are not permitted. If a Rows or Columns parent is not explicit in the interface, that parent is added implicitly by the system, implied by the relationships of the defined parent. A Block that does not have a Columns parent defined in the interface uses the Columns parent of its Rows parent. Similarly, a Block that does not have a Rows parent defined in the interface uses the Rows parent of its Columns parent. Inline Blocks do not have children; if the interface defines a Block as a child of an Inline Block, it uses the Rows and Columns parents of the Inline Block. Figure 5 shows the graph implied by the interface for Figure 4d.
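
These implicit-parent rules can be summarized in a few lines. The following sketch uses hypothetical names, applies the single-step rules recursively until a defined parent or a root is reached, and elides the Inline-Block redirection described above:

```typescript
// Sketch of implicit parent resolution (names are illustrative).
// ROWS_ROOT and COLS_ROOT stand in for the two invisible root Blocks.
interface CanvasBlock {
  rowsParent?: CanvasBlock; // explicit Rows parent from the canvas
  colsParent?: CanvasBlock; // explicit Columns parent from the canvas
}

const ROWS_ROOT: CanvasBlock = {};
const COLS_ROOT: CanvasBlock = {};

function effectiveRowsParent(b: CanvasBlock): CanvasBlock {
  if (b.rowsParent) return b.rowsParent;
  if (b.colsParent) return effectiveRowsParent(b.colsParent); // implied via the Columns parent
  return ROWS_ROOT; // parentless Blocks hang off the roots
}

function effectiveColsParent(b: CanvasBlock): CanvasBlock {
  if (b.colsParent) return b.colsParent;
  if (b.rowsParent) return effectiveColsParent(b.rowsParent); // implied via the Rows parent
  return COLS_ROOT;
}
```
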
In Figure 4a, the field Class is encoded on text in Block C with the rows layout type. As there are three values in the domain of Class, three rows are created in the visualization, with a text mark for each value of the field. An Inline Block is nested as a child Block with NumSurvived encoded on the x-axis. The system creates a bar chart for each row defined by the first Block. Since no additional dimensions are added to Block N, the measure NumSurvived is aggregated to the LOD of Class and a single bar is rendered per row.

Figure 5: Implicit and explicit links for Figure 4d. Links explicitly shown in the interface are solid black arrows. The link from an Inline Block, which is treated as a link from the parent Block, is shown in red. Links added implicitly are shown as dashed arrows.

Figure 4b expands the example, showing how multiple dimensions can be added to a visualization. Due to the parent-child relationships of the four Blocks, NumSurvived inherits the dimensions from the parent Blocks and aggregates at the combined LOD of Class, FamilyAboard, and Sex. In contrast, Age is encoded in Block C, which has no parent Block. Therefore, Age aggregates to the LOD of Class, the only dimension encoded on the same Block.

To specify a crosstab, the formalism requires a Block to have two parents: a Rows Block parent and a Columns Block parent. The user interface supports the specification of two parent Blocks, one directly to the left and the other directly above a Block. Figure 4c shows Block N with Block C and Block S as parent Blocks. The measure NumSurvived in Block N is aggregated to the combined LOD of Class and Sex, the dimensions from its parents, Block C and Block S.
§ 6.2 QUERY EXECUTION

Each Block executes a single query at the LOD of all the dimensions for the Block, including those inherited from parent Blocks. In Figure 4b, the query for Block S includes not only Sex but also FamilyAboard, inherited directly from Block F, and Class, inherited indirectly from Block C. This enables layout of the Block's marks relative to its parents and avoids making the user repeat themselves in the user interface, in support of DG1. The query includes only the measures for the current Block, not those of any other Block, because measures are aggregates at a specific LOD, in support of DG3. Every query is deterministically sorted, either by a user-requested sort or by a default sort based on the order of the encoding fields within the Block.
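
Reusing the illustrative types from the Section 5 sketch, a Block's full LOD and its query can be sketched as follows; the actual back-end is Python, and the GROUP BY string is a simplification of whatever the Pandas or Tableau data-source back-end really executes:

```typescript
// Sketch: a Block queries at its full LOD (its own dimensions plus all
// ancestors' dimensions) and aggregates only its own measures.
function fullLOD(block: Block, parentsOf: (b: Block) => Block[]): Set<string> {
  const dims = new Set(
    block.encodings
      .filter((e) => e.field.aggregation === undefined)
      .map((e) => e.field.fieldName),
  );
  for (const parent of parentsOf(block)) {
    for (const dim of fullLOD(parent, parentsOf)) dims.add(dim); // the Set dedupes shared ancestors
  }
  return dims;
}

function blockQuery(block: Block, parentsOf: (b: Block) => Block[]): string {
  const dims = [...fullLOD(block, parentsOf)];
  const measures = block.encodings
    .filter((e) => e.field.aggregation !== undefined)
    .map((e) => `${e.field.aggregation!.toUpperCase()}(${e.field.fieldName})`);
  return (
    `SELECT ${[...dims, ...measures].join(", ")} FROM source` +
    (dims.length > 0 ? ` GROUP BY ${dims.join(", ")}` : "")
  );
}

// e.g., for Figure 4a: blockQuery(blockN, () => [blockC])
//   => "SELECT Class, SUM(NumSurvived) FROM source GROUP BY Class"
```
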
§ 6.3 ROW AND COLUMN ASSIGNMENT

Query execution results in multiple tables with different schemas. The system needs to assign row and column indexes from a single grid to the tuples from all of these tables. This section describes the process for rows; it is repeated for columns. A code sketch of the index-assignment steps follows the list.

1. Produce a Block tree from the Blocks DAG by considering only links from Rows Blocks to their children, excluding any other links. The Blocks interface ensures that this tree exists, is connected, and has a single root at the implicit Rows root Block.

2. Produce a tuple tree by treating each tuple as a node. Its parent is the tuple from its parent Block with matching dimension values.

3. Sort the children of each tuple, first in the order their Blocks appear as children in the Block tree, and then in the order of the Rows dimensions and user-specified sorts, if any, for each Block.

4. Assign row indexes to each tuple by walking the tuple tree in depth-first order. Leaf tuples get a single row index; interior nodes record the minimum and maximum row indexes of all their leaves.
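
A minimal sketch of step 4, assuming the tuple tree has already been built and sorted per steps 2 and 3 (node and field names are illustrative):

```typescript
// Sketch: depth-first row-index assignment over the tuple tree.
interface TupleNode {
  children: TupleNode[]; // already sorted per step 3
  minRow?: number;       // filled in by assignRows
  maxRow?: number;
}

function assignRows(node: TupleNode, counter = { next: 0 }): void {
  if (node.children.length === 0) {
    node.minRow = node.maxRow = counter.next++; // leaf: a single row index
    return;
  }
  for (const child of node.children) assignRows(child, counter);
  node.minRow = node.children[0].minRow;                        // interior node: the span
  node.maxRow = node.children[node.children.length - 1].maxRow; // of all descendant leaves
}
```
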
§ 6.4 OUTPUT VISUALIZATION

Each tuple from a Rows or Columns Block forms a single cell containing a single mark. All of the tuples from an Inline Block with the same Row and Column parent tuples form a single cell; the values of any visual encoding fields that are dimensions differentiate between marks within that cell. Those marks may comprise a bar chart, scatter plot, or other visualization depending on the mark type and visual encodings of the Block. The system uses a CSS Grid [9] and the computed minimum and maximum row and column indexes to define the position of each cell. Within each cell, simple text marks are rendered using HTML; an SVG-based renderer is used for all other marks.
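
For example, a cell spanning a computed index range can be positioned with CSS Grid line numbers. This sketch assumes 0-based indexes from Section 6.3 and uses CSS Grid's 1-based, end-exclusive line numbering:

```typescript
// Sketch: place a cell on the CSS Grid from its computed index ranges.
function cellGridStyle(
  minRow: number, maxRow: number,
  minCol: number, maxCol: number,
): { gridRow: string; gridColumn: string } {
  return {
    gridRow: `${minRow + 1} / ${maxRow + 2}`,    // grid lines are 1-based;
    gridColumn: `${minCol + 1} / ${maxCol + 2}`, // the end line is exclusive
  };
}

// A leaf cell at row 3, column 1 (0-based) spans one track:
// cellGridStyle(3, 3, 1, 1) => { gridRow: "4 / 5", gridColumn: "2 / 3" }
```
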
§ 7 COMPARATIVE STUDY OF BLOCKS WITH TABLEAU

We conducted a user study of Blocks with the goal of answering two research questions. RQ1: How do users orient and familiarize themselves with the Blocks paradigm? RQ2: What are the differences in how users create visualizations across Tableau and Blocks? This information would provide insights into how Blocks could be useful to users and how the paradigm could integrate into a more comprehensive visual analysis system. The study had two parts: Part 1 was an exploratory warm-up exercise to observe how people familiarize themselves with the Blocks interface in an open-ended way. Part 2 was a comparative study in which participants completed an assigned visual analysis task, creating a visualization in both Tableau and Blocks. The study focused on rich table creation tasks, as these were found to be a prevalent type of visualization, as described in Section 4. Comparing Blocks with Tableau helps highlight the differences in participants' analytical workflows when performing the same task.

§ 7.1 METHOD

§ 7.1.1 PARTICIPANTS

A total of 24 volunteer participants (6 female, 18 male) took part in the studies, and none of them participated more than once. All participants were fluent in English and were recruited from a visual analytics organization without monetary incentives. The participants had a variety of job backgrounds: user researcher, sales consultant, engineering leader, data analyst, product manager, technical program manager, and marketing manager. Based on self-reporting, eight were experienced users of the Tableau product, eight had moderate experience, and eight had limited proficiency. During Part 2 of the study, each participant was randomly assigned an order of whether to use Blocks or Tableau first when completing their assigned task.
§ 7.1.2 PROCEDURE AND APPARATUS

Two of the authors supported each session, one as the facilitator and the other as the notetaker. All study trials were conducted remotely over a shared-screen video conference to conform with social distancing protocols due to COVID-19. All sessions took approximately 50 minutes and were recorded. We began the study with the facilitator reading from an instructions script, followed by a short (under two minutes) tutorial video of the Blocks interface explaining the possible interactions. Participants were then provided a URL to the Blocks prototype, where they completed Part 1 of the study using the Superstore dataset [24]. During this part, they were instructed to think aloud and to tell us whenever the system did something unexpected. Halfway through the session, participants transitioned to Part 2 of the study. They were provided task instructions, a Tableau Online [21] workbook pre-populated with the dataset, and the Blocks prototype. We discussed reactions to system behavior throughout the session and concluded with a semi-structured interview. The experimenter script, task instructions, and tutorial video are included in the supplementary material.

§ 7.1.3 TASKS

There were two main parts to the study: open-ended exploration and closed-ended tasks.

Part 1: Open-ended exploration. This task enabled us to observe how people would explore and familiarize themselves with the Blocks interface. The instructions were: "Based on what you saw in the tutorial video, we would like you to explore this data in the Blocks prototype. As you work, please let us know what questions or hypotheses you're trying to answer as well as any insights you have while using the interface."

§ PART 2: CLOSED-ENDED TASKS
The closed-ended tasks were intended to provide consistent objectives for task comparison across the Tableau and Blocks systems. Participants completed one of three randomly assigned closed-ended tasks, each involving the creation of a rich table as shown in Figure 6. Images of the expected visualization result were shown as visual guidance along with the instructions to indicate what was generally expected for task completion. The tasks, with the instructions provided to the participants, were:

Figure 6: Three study tasks. Task 1: Crosstab with bar charts. Task 2: Table with sorted dimensions. Task 3: Table with sparklines.

* Task 1: Create a crosstab with bar charts. "Using the Titanic dataset [10], create a crosstab for SUM(NumberSurvived) by Sex (on Rows) and FamilyAboard (on Columns). Now, switch to show bar charts for NumberSurvived with AVG(Age) on color."

* Task 2: Create a sorted table. "Using the Gapminder dataset [30], create a table that shows SUM(GDP) for each Region and Country. Now, using the table from the previous step, sort both Region and Country by SUM(GDP)."

* Task 3: Create a table with sparklines. "Using the COVID-19 dataset [8], create a table that shows New Count Confirmed, Total Count Confirmed, and Total Count Deaths for each County in California. Now, given the table from the previous task, add a column with the time attribute Date to generate sparklines showing Total Count Deaths over time."
§ 7.1.4 ANALYSIS APPROACH

The primary focus of our work was a qualitative analysis of how Blocks influenced people's analytical workflows and how those workflows compared with Tableau. We conducted a thematic analysis through open-coding of session videos, focusing on the strategies participants took. Given the remote nature of the study setup, we did not measure the time taken for task completion. We use the notation P# to refer to the study participants.
§ 8 STUDY FINDINGS

§ 8.1 RQ1: HOW DO USERS ORIENT AND FAMILIARIZE THEMSELVES WITH THE BLOCKS PARADIGM?

To understand how intuitive the Blocks paradigm is for users, we first examine the sense-making strategies participants adopted as they oriented themselves to the workings of the interface during Part 1 of the study. We observed various assumptions, expectations, and disconnects as participants drew from their past experiences while developing their own mental models of Blocks.

§ 8.1.1 EXPECTATIONS WITH DRAG-AND-DROP INTERACTION

When asked to explore the Blocks interface, all participants immediately dragged attribute pills from the data pane onto the canvas, a paradigm many of them knew from using Tableau and PowerBI. P4 remarked while using the Superstore dataset, "I'm going to drag Category on the canvas and let it go and I see that it created a Block showing the various category values." When subsequent attributes were dragged onto the canvas, several participants were initially uncertain what the various drop targets were and how dropping a pill onto a new Block would affect the other Blocks currently on the canvas (P15). P4: "I'm dragging out a new pill and I see these various drop targets, but do not know the difference between these." They eventually discovered that there are multiple drop targets within each Block for the various encodings, as well as drop targets above, below, and to the left and right of each Block. Participants often dragged Blocks around the canvas to change the structure of the generated visualization. Some participants (P4, P5, P6, P9, P11) wanted to modify the current Blocks on the canvas by dragging pills from one Block to another. When they realized that the interface does not currently support that functionality, they deleted the pill in one Block and dragged out the same pill onto another Block to replicate their intention.

§ 8.1.2 UNDERSTANDING THE CONCEPT OF A BLOCK

While the Blocks interface has some commonalities with Tableau's interface around marks and encodings, there are differences that took participants some time to understand. In particular, the Blocks interface moves away from Tableau's shelves paradigm: it relies on users to set encoding properties within each Block for each mark type, and the layout is defined by the relative positions of Blocks on the canvas. P4 tried to externalize her mental model of the interface, reconciling it with Tableau's: "I'm just trying to wrap my head around this. Looks like we are not constrained here [Blocks] by the rows, columns, marks paradigm from Tableau. I created rows with Category and I kept trying to drop Sales on Rows too, and now I notice these little arrows to drop on x or y."

Participants were initially unclear what effect the x-axis and y-axis encodings had on data values within a Block. P2, for example, set the mark of the Block to 'bar' and expected SUM(Sales), which was set on the text encoding, to be displayed as a bar chart. After being guided by the experimenter to change from the text encoding to the x-axis encoding, the semantics of the encoding properties became clearer. Other participants thought the way encoding properties were set in the Block had a direct relationship to what they saw in the corresponding generated chart. P9 said, "This [Blocks] is much more literal. If I want to affect the Profit bar, I need to literally put the color on the Profit bar. In Tableau, I think of coloring the Category by Profit."

§ 8.1.3 DIRECT MANIPULATION BEHAVIOR

The visual drop targets around a Block piqued participants' curiosity about what would happen when they dragged pills to these targets. P8 remarked, "I'd like to get an intuitive sense as to what happens when I drop it here [pointing below the Block] or there [pointing to the right]." Participants were able to understand the relationship between adding Blocks horizontally and the effects on the generated chart. Placing Blocks below one another took further exploration to understand. P11 said, "Going across seems straightforward. I'm trying to figure out what going down meant," and followed up by adding Blocks below an existing Block using the various layout options. P19 adopted a strategy of updating a Block with all the desired encodings: "I'm building out one definition for the first column of rows and then do the next." Participants also found it useful to be able to modify the existing chart by dragging pills into new Blocks in the middle of or adjacent to other Blocks, immediately breaking down attributes into targeted levels of detail. They found that the visual layout of the Blocks directly informs the structure of the generated chart: "The LOD of what is to the right is defined by what is to the left" (P2) and "You build out the viz literally the way you think about it" (P6). For some participants, the system did not match their expectations of how a dimension would be broken down by a measure. P8 said, "I put SUM(Sales) below Category and I expected Category to be broken down by Sales, but it showed me a single aggregated bar instead."
§ 8.2 RQ2: WHAT ARE THE DIFFERENCES IN HOW USERS CREATE VISUALIZATIONS ACROSS TABLEAU AND BLOCKS?

§ 8.2.1 TASK 1: CREATE A CROSSTAB WITH BAR CHARTS

All eight participants were able to complete the task in both Blocks and Tableau. Here, we describe the workflows for both Blocks and Tableau.

Blocks: Adding text values for NumberSurvived to the table was relatively easy for all the participants. Participants took some time to figure out how to get the headers to appear in the expected spots (P2, P6). Putting Sex to the left of the current Block helped orient the participants with Block placement for generating the headers. All participants found it straightforward to then add bar charts by changing the encoding of NumberSurvived to the x-axis and adding AVG(Age) on color in the Block. P9 realized that the placement of Blocks is a literal translation to the placement of headers in the visualization and was able to add the headers by looking at the visual provided as a reference.

Tableau: For participants fluent with Tableau, creating the crosstab was a quick task. Participants first built the rows and columns of the crosstab and then added a measure. This workflow conflicted with the way participants (P12) created a crosstab in Blocks, where they started by adding the measure first. P2 said, "In Tableau, the fact that the headers are inside Columns and Rows than being in some separate place like in Blocks, makes it easier to generate." P9 struggled a bit to add bar charts to the crosstab and mentioned that it was not very intuitive to place SUM(NumberSurvived) on columns.
§ 8.2.2 TASK 2: CREATE A SORTED TABLE

All eight participants were able to complete the task in Blocks. Two participants (P8 and P14) needed guidance to complete the task in Tableau. Here, we describe the workflows for both Blocks and Tableau.

Blocks: All the participants dragged out the pills in the order of the columns in the table (Region, Country, and GDP) with the encoding set to text. They completed the task quickly and appreciated that they did not have to write a calculated field: the LOD was computed automatically based on the relative positions of the Blocks. P11 said, "That's cool. The LOD did what I would've expected if I wasn't used to using Tableau." P3 commented, "It seems like we need new Blocks for each partition aggregation." It was not immediately intuitive to a few participants how Region and Country should be sorted by GDP. Eventually, when they dragged the GDP pill onto the Region and Country Blocks, they noticed a sort icon appear and realized that sorting of a dimension is performed per Block.

Tableau: A prevalent technique among participants was using a calculated field (P3, P5, P11, P17, P20). Participants first added the Region and Country dimensions to Rows, with the GDP measure added as text. They then created a calculated field for GDP at the level of Region and converted it into a discrete pill in order to add it between the two dimensions, Region and Country, in the table. All participants took advantage of Tableau's contextual menu by right-clicking on the table's headers to sort the values in descending order.

§ 8.2.3 TASK 3: CREATE A TABLE WITH SPARKLINES

All eight participants were able to complete the task in Blocks. One participant (P7) was unable to add sparklines to the table in Tableau. Here, we describe the workflows for both Blocks and Tableau.

Blocks: All participants dragged out County, New Count Confirmed, Total Count Confirmed, and Total Count Deaths into separate Blocks laid out horizontally. Generating a column of sparklines in the table was easy for all participants; they intuitively dragged Date onto the x-axis encoding and Total Count Deaths onto the y-axis encoding in a new Block.

Tableau: All participants created the initial table with Tableau's Measure Names¹ and Measure Values² fields, using County as the dimension to group the data by. Adding a column containing sparklines was more challenging for all participants. P4, P10, P16, and P22 created LOD calculations for each of the three measures New Count Confirmed, Total Count Confirmed, and Total Count Deaths, making each calculated field discrete so that the values could be broken down by County. Line charts were added to the table using Total Count Deaths over Date. P13 and P19 were unsure how to add sparklines to the existing table; they took a different approach, creating a separate worksheet containing a column of line charts and placing it adjacent to the initial table in a Tableau dashboard.

¹ The Measure Names field contains the names of all measures in the data, collected into a single field with discrete values.

² The Measure Values field contains all the measures in the data, collected into a single field with continuous values.
§ 8.3 DISCUSSION

General feedback from the participants was positive and suggested that Blocks is a promising paradigm for gaining more control over the layout and for manipulating the LOD in the structure of the created visualization. Participants identified certain tasks that could take longer in a tool like Tableau but would be easier in Blocks. P12 remarked, "This is ridiculously awesome. I'm not going to lie, but I have this horrific cross tab bookmarked to do in Tableau. I can see doing it in Blocks in a minute and a half." Participants appreciated the flexibility of being able to apply conditional formatting to various parts of a visualization, not just to the measures. P19 commented, "That's cool. I've never been able to do conditional color dimensions before." Having more control over LOD was a consistent feature that participants found useful. P6 said, "You can do all these subdivisions that are hard to do in Tableau," and "Aha! I can get sparklines so easily." P2 said, "The fact that I can put all these encodings in Blocks makes it a heck of a lot more expressive." Participants also used the canvas to create different visualizations by laying out arrangements of Blocks in space, akin to a computational notebook. The layout helped them compare arrangements with one another as they reasoned about the effects of visual arrangement on chart structure. P15 commented, "In Tableau, I am forced to create a single visualization in each worksheet and then need to assemble them together into a dashboard. In Blocks, it feels like a canvas where I can create how many ever things I want."

There were also some limitations of the Blocks prototype that the participants brought up.

§ 8.3.1 NEED FOR BETTER DEFAULTS AND PREVIEWS

The flexibility that the Blocks interface affords comes with an inherent downside: a vast set of drop-target options. P10 was overwhelmed with the choices when he initially started exploring and remarked, "There are so many arrows to choose from. It would be helpful if I can get a hint as to where I should drop my pill based on what attribute I selected." Others wanted to see chart recommendations based on the pills they were interested in, similar to Show Me in Tableau: "Would be nice to get a simple chart like Show Me by clicking on the attributes" (P4). P6 commented, "It would be nice if Blocks could just do the right things when I drop pills onto the [Blocks] canvas."

Showing previews and feedback in the interface when users drag pills to the various encoding options within a Block, or when new Blocks are created, could better orient users to the workings of the interface. P12 suggested, "It would be really cool if there are actions associated with the visual indicators of the drop targets so the interface does not feel too free form." For example, dragging Age to a Block could highlight the particular column or cell in the visualization that would be affected by that change. P5 added, "I tend to experiment around and having previews show up as I drag pills to drop targets, would be helpful." Providing reasonable defaults, such as suggesting a y-axis encoding for a pill when the Block already has an x-axis encoding, could help guide the user toward useful encoding choices.
§ 8.3.2 MORE CONTROL OVER CHART CUSTOMIZATION

Participants wanted additional customization in the interface. P3 said, "It would be nice if I could center the sparklines to the text in the table. I would also like to add a dot on the maximum values in the sparklines." Showing hierarchical data in a table requires a Block for each level, which can take up significant screen real estate for large hierarchies. One suggested workaround was incorporating a Tableau UI feature to drill down into a hierarchical field within a Block (P13). The Blocks prototype also currently lacks templating actions, such as adding borders and formatting text in headers, that participants were accustomed to in Tableau (P12).

§ 8.3.3 SUPPORT FOR ADDITIONAL ANALYTICAL CAPABILITIES

Participants wanted more advanced analytical capabilities, such as calculated fields, to add additional computations to the visual panes in the charts. P3 remarked, "I'd like to use a table calculation to add a max sales values or running totals for that block." Others wanted the prototype to support additional chart types such as maps (P19, P20).
+
+ § 9 LONGITUDINAL DIARY STUDIES
+
+ One of the limitations of the comparative studies was that participants had more experience using Tableau than Blocks, and the studies only captured how Blocks was used in the short term during a single lab session. We therefore offered our study participants the option to take part in a two-week diary study. The goal of the diary study was to better understand users' behavioral patterns over a longer period of time and how they would use Blocks in their own exploratory analyses. In total, eight participants (seven male and one female) took part in the study, documenting their experiences with the Blocks prototype in Google Docs and spending at least 20 minutes a day for two weeks. Similar to the analysis approach in the previous user study, we conducted a thematic analysis through open-coding of the diary notes. The actual diaries are included as part of the supplementary material.
+
+ § 9.1 DIARY STUDY OBSERVATIONS
+
+ Participants appreciated the ease of creating more complex rich tables. P3 found that this task was easier to do in Blocks than in Tableau - "Now I want to add more measures in this small multiples, which is super hard when you want to do this with $> 2$ measures in Tableau. With Blocks I can easily add as many as I want within the partition I'm interested in." P20 commented, "There is something to be said for how easy this type of thing is. Multi sparklines alongside totals shown in multiple perspectives." The extended period of time to explore the prototype also helped participants reflect upon their understanding of how Blocks worked. P9 summarized by saying, "It seems like the mental model in Blocks is 'Which number are you interested in?' You start with that, then you start breaking it down dimensionally to the left/right/top/bottom. In Tableau, I go to the dimensions first and then drop in my measure later. Both of these make sense, but I would like to get to a point where I can use my old mental model (dimensions first, then measures) and still be successful in Blocks. Sometimes I know my dimensionality first - voting by age/gender/precinct - I want to drop that in and then look at the measures."
+
+ There were also aspects of the prototype that limited participants' exploratory analyses. Suggesting smart defaults in the Blocks interface continued to be a theme in the participants' feedback. P1 documented, "It would be helpful if Blocks can guide me towards building useful views. For example, I'm using the Super-store data source, and when I drag out Category and Profit, it would be useful to suggest the x-axis, showing horizontal bar charts that combine the headers and the bars nicely." P3 had a suggestion about better encoding defaults - "I first dropped a measure to create a block, I got a text mark type by default. But it would have been nice to pick up Circle or something similar to make the size encoding meaningful".
+
+ Some participants wanted interaction behaviors from Tableau in the prototype, such as double-clicking to get a default chart similar to Show Me. P2 said, "I wanted to double-click to start adding fields instead of drag and drop. Especially for the first field when I'm just exploring the data. I'd also like to be able to scroll the chart area independently of the Blocks". Participants (P2, P18, P20) tried to create other chart types such as stacked bar charts, treemaps, and Sankey charts that Blocks did not support at the time of the study.
+
+ § 10 BEYOND TABLES: OTHER USE CASES & FUTURE WORK
+
+ In this paper, we demonstrate how the Blocks formalism can be used to create complex rich tables. Blocks can be extended to support other visualizations such as treemaps, bubble charts, and non-rectangular charts with additional layout algorithms. Blocks does not currently support the layering or juxtaposed views that are prevalent in composite visualizations; future work could explore how to support the creation of these visualizations in the Blocks interface. The ability to define rich tables at multiple LODs could also be applied to other visualization types such as Sankey diagrams and composite maps.
+
+ Sankey diagrams are a common type of chart created in Tableau, but creating one is a multi-step process involving partitioning the view, densifying the data, indexing the values across different visual dimensions, and several table calculations [6]. With the Blocks system, an $n$ -level Sankey diagram could be built with ${2n} - 1$ Blocks as shown in Figure 7: the Row Blocks represent the nodes of the Sankey for the Region, Category, and Segment attributes, while the Link Blocks represent the connections between levels. The Link Blocks inherit their LOD from the neighboring Blocks and render the curves between pairs of marks. The links are encoded by color and size based on SUM(Sales).
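+
+ As a rough illustration of this structure, the sketch below models the alternating Row and Link Blocks of an $n$ -level Sankey in plain Python. The Block class and its fields are hypothetical stand-ins for the formalism, not the actual Blocks API.
+
+ ```python
+ # Minimal sketch of the 2n-1 Block structure for an n-level Sankey.
+ # "Block" and its fields are illustrative only, not the real Blocks API.
+ from dataclasses import dataclass, field
+
+ @dataclass
+ class Block:
+     kind: str                                     # "row" (nodes) or "link" (curves)
+     lod: list = field(default_factory=list)       # attributes defining the LOD
+     children: list = field(default_factory=list)  # nested child Blocks
+
+ def sankey_blocks(levels):
+     """Alternate Row and Link Blocks; Link Blocks span neighboring LODs."""
+     blocks = []
+     for i, attr in enumerate(levels):
+         blocks.append(Block("row", [attr]))
+         if i < len(levels) - 1:
+             # A Link Block inherits the LOD of the two adjacent Row Blocks.
+             blocks.append(Block("link", [attr, levels[i + 1]]))
+     return blocks
+
+ blocks = sankey_blocks(["Region", "Category", "Segment"])  # n = 3 levels
+ assert len(blocks) == 2 * 3 - 1                            # 2n - 1 Blocks
+ ```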
+
+ The composite map visualization in Figure 8 shows State polygons as parent Blocks with nested sparkline charts showing Sales by Order Date. The visualization is constructed using an Inline Block for the map with the sparkline Block as its child.
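+
+ In the same hypothetical notation as the Sankey sketch above, this composition is simply a parent Block nesting a child:
+
+ ```python
+ # Figure 8 as nested Blocks: an Inline Block for the map (State polygons)
+ # with a child sparkline Block (Sales by Order Date). Hypothetical notation.
+ map_block = Block("inline", ["State"])
+ map_block.children.append(Block("row", ["Order Date"]))  # sparkline per state
+ ```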
+
+ While Blocks employs direct manipulation to support the creation of expressive charts, there is an opportunity to add scaffolds through thoughtful defaults and previews to better support users and their mental models when learning the workings of the new interface. We would like to explore how visual interaction during chart generation can be better supported by bridging the user's intentions with the facilities afforded by the interface. The Blocks interface shows promise in supporting analytical workflows that are currently challenging to perform in Tableau, but additional analytical capabilities such as new chart types, support for reference lines, and better formatting options need to be incorporated for it to be truly useful. Exploring the balance between comprehensive analytical capabilities and reduced friction in accomplishing users' goals is an important research direction to pursue.
+
+ ${}^{3}$ Show Me creates a view based on the fields in the view and any fields you've selected in the data pane.
+
+ ${}^{4}$ A type of calculated field in Tableau that computes values based on what is currently in the visualization and does not consider any measures or dimensions that are filtered out of the visualization.
+
+ Figure 7: A two-level Sankey Diagram
+
+ Figure 8: Map with nested sparkline charts
+
+ We evaluated Blocks with users who had varied degrees of familiarity with Tableau. The study findings indicate that their mental models when exploring the Blocks interface were influenced in part by their prior experience with the Tableau interface. While Blocks and Tableau share some common paradigms, they do have differences. As we continue to evolve Blocks, we would like to further evaluate how expectations and reality interact for users who have no experience using Tableau compared to their counterparts who use Tableau frequently. Understanding how users create new mental models or update existing ones would help inform ways to support effective onboarding to the Blocks paradigm.
+
+ § 11 CONCLUSION
+
+ We present Blocks, a new formalism that builds upon VizQL by supporting the handling of nesting relationships between attributes through direct manipulation. By treating each component of the visualization as an analytical entity, users can set different LODs and encoding properties through drag-and-drop interactions in the Blocks interface. An evaluation of the Blocks interface comparing users' analytical workflows with those in Tableau indicates that Blocks is a useful paradigm for supporting the creation of rich tables with embedded charts. We further demonstrate how Blocks generalizes to express more complex nested visualizations. Future research directions will explore additional analytical and interaction capabilities in the system along with useful scaffolds for supporting users during visual analysis. We hope that insights learned from our work can identify interesting research directions to help strike a balance between expressivity, ease of use, and analytical richness in visual analysis tools.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/oyJfW3GmBGX/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,457 @@
+ # FossilSketch: A novel interactive web interface for teaching university-level micropaleontology
+
+ Category: Research
+
+ ## Abstract
+
+ Micropaleontology studies fossils that are very small and require the use of a microscope. Micropaleontologists use microfossils to analyze data critical for estimating future sea level rise, understanding the causes of past climate upheavals, and finding economically important resources like oil and gas. This subject is taught as part of some geology classes at the undergraduate and graduate university level, but training in this field is time-consuming and little classroom time is typically devoted to the topic. Although demand for geoscientists is projected to grow, fewer students are exposed to and trained in micropaleontology, and the geosciences currently need micropaleontologists as the population of experts is declining. While interactive math and engineering web interfaces have recently become more common, a similar system that provides students with a repository of knowledge and interactive exercises in micropaleontology was lacking. To address this problem of training students in micropaleontology, we developed FossilSketch: a web-based interactive learning tool that teaches, trains, and assesses students in the basics of micropaleontology. The interface contains various interactions, and a new template-based system for checking the accuracy of drawn shapes helps students learn the characteristic features of microfossils. Our evaluation included deploying this system to 32 students in an undergraduate geology class at our university. The accompanying user study results indicate that FossilSketch is an engaging educational tool that can be deployed alongside the classroom for in-class and at-home learning. Student feedback, together with our recorded submission data for various exercises, suggests that FossilSketch is an effective online learning tool that serves as a helpful reference for class activities, allows for remote learning, presents helpful and engaging interactive games, and encourages repeat submissions.
+
+ Index Terms: Human-centered computing-Interactive systems and tools; Information systems-Web applications; Applied computing-Interactive learning environments; Applied computing-Earth and atmospheric sciences
+
+ ## 1 INTRODUCTION
+
+ The fossil remains of micro-organisms preserved in modern and ancient sediments play key roles in determining the ages of geologic records, reconstructing ancient environments, and monitoring modern ecosystem health. However, training undergraduates to identify these microfossils is time-intensive and most students are not exposed to this tool in their courses. Core geoscience courses that reach all majors rarely include micropaleontology, the study of microfossils, because contact hours are not sufficient to train students at the necessary level of detail. Student training in micropaleontology has declined over the last several decades as the field of geology has broadened and micropaleontology has been replaced by other methods [6, 48]. Thus, although the geosciences currently need micropaleontologists because the population of experts is aging, few students are trained to use this tool [40].
+
+ To enable and enhance training of undergraduates in the basics of micropaleontology in remote, hybrid, and in-class conditions, we developed FossilSketch, an interactive digital tool that introduces students to micropaleontology through educational videos, sketch-based exercises, and mini-games focused on microfossils and their applications in the geosciences. FossilSketch, depicted in Figure 1, makes use of a modified version of the existing Hausdorff template matching technique to support automated grading of activities involving sketching microfossil outlines. This lightweight recognition technique calculates the cumulative distance between resampled points from the input sketch and only a single instructor-provided template. It effectively acts as a shape accuracy algorithm, returning the cumulative distance as an index of dissimilarity between a student-provided sketch and the instructor-provided template. This forms the basis of the system's recognition technique, which is used in the identification exercises designed for two microfossil groups, Foraminifera and Ostracoda. The paper also outlines the other interactive games and assessments that underlie the FossilSketch system.
+
+ ![01963e5c-86b4-77e4-b670-55fba153041f_0_924_408_721_414_0.jpg](images/01963e5c-86b4-77e4-b670-55fba153041f_0_924_408_721_414_0.jpg)
+
+ Figure 1: A participant using the FossilSketch educational web app.
+
+ ## 2 BACKGROUND INFORMATION
+
+ ### 2.1 Micropaleontology
+
+ Micropaleontology is a critical tool for determining the ages of sedimentary rocks for both industrial (e.g., oil exploration) and scientific applications [28]. Microfossil species are also sensitive to specific environmental parameters and are often used to reconstruct past changes in ocean temperature, coastal sea-level, and seafloor oxygenation [36]. Further, microfossils are used in modern, real-time, environmental monitoring because they respond quickly to environmental change [12]. Despite their increasing usefulness, training students in micropaleontology has declined.
+
+ Foraminifera and Ostracoda are two of the most commonly used microfossils in industrial, environmental, and scientific applications; they are also some of the larger microfossils, which allows students to view them with standard stereoscopes. Foraminifera are amoeboid protists with shells made of calcium carbonate or agglutinated sediment grains and are often abundant in marine environments [6]. Ostracoda are micro-crustaceans with a bivalved calcareous carapace that are found in all aquatic environments from freshwater lakes to the deep sea [6]. The morphology of species in both groups is closely related to the environments in which they live [22, 29, 43], and these two groups are often used in species-specific geochemical studies [25]; thus accurate identification is important for using this tool. The FossilSketch application focuses on these two groups of microfossils.
+
+ Accurate identification of species is the crucial first step in all applications of microfossils. Sketching is critical in understanding the morphological differences because it helps students internalize the characteristic features and better understand them by connecting their sketch to the specimen. Researchers find that sketching benefits learning in a wide range of disciplines, from human anatomy and biology to engineering, geography, and math [9, 20, 21, 39, 45]. However, one of the challenges in teaching micropaleontology is the amount of individual feedback students need on their sketches to ensure they are learning the correct features for identification.
+
+ ### 2.2 Related Works
+
+ #### 2.2.1 Geoscience Educational Tools
+
+ The geosciences have rapidly adopted online and remote-based educational tools over the last five years. The popularity of online learning platforms has led to the development of online resources, new pedagogical practices, and course curricula (e.g., [5, 10, 11]). Successful examples that integrate technology into geoscience classes include high resolution digital imaging for mapping and documenting geological outcrops, 3D virtual simulations, and digitization of fossil collections [8, 15, 27]. For laboratory-based courses, scholarship has primarily focused on accessibility for students with visual disabilities at the introductory level [13], whereas field-based course literature on accessibility has mostly focused on inclusive practices to better serve students with mobility disabilities [13].
+
+ Some successful software used in geoscience education includes the following. Researchers at Northwestern University and IBM pioneered the use of sketching software in the geosciences with the CogSketch application and a series of 26 introductory geoscience worksheets about key geoscience concepts [20]. CogSketch aids students in solving discipline-specific spatial problems while providing instructors with insights into student thinking and learning. Real-time feedback identifies erroneous sketch features and helps students reconsider and correct them. Milliken developed tutorials to study sandstone petrology at the University of Texas at Austin using a "virtual microscope" [33]. Students are able to practice identification of a wide array of sandstone components outside of the laboratory and independent of the instructor. They found student attainment of petrography skills improved with tutorial use.
+
+ As for micropaleontology, researchers note a lack of human experts and a decline in micropaleontology training [14, 26, 34]; however, most software development has been aimed at automated identification of microfossils. The earliest attempts lacked accuracy and were not fully automated [7, 46]. More recent approaches to automated micropaleontology identification software usually focus on machine learning and use 3D models for planktic and benthic foraminifera identification [14, 26, 34]. Their results indicate that current image classification techniques perform identifications comparably to human experts [34].
+
+ Several large microfossil databases have been built that include taxonomic hierarchy data, images, ecological characteristics and geographical distribution, as well as type species information (e.g., for Ostracoda: the Modern Podocopid Database [17] and the World Ostracoda Database (WoRMS) [1]; for Foraminifera: the World Foraminifera Database (WoRMS) [2] and the Foraminifera Gallery (Foraminifera.eu) [44]). However, these online resources are designed for advanced users and are difficult for entry-level specialists and students to use without instruction on microfossil morphology.
+
+ To summarize, there is clearly a need and growing interest in developing automated AI methods for microfossil identification due to the declining number of human experts. We believe that developing educational software on Foraminifera and Ostracoda would be a more efficient approach to solving the problem of the lack of human experts. Thus, designing novel, universally accessible, and academically rigorous educational tools is a highly relevant task for undergraduate geoscience education.
+
+ ![01963e5c-86b4-77e4-b670-55fba153041f_1_926_148_719_649_0.jpg](images/01963e5c-86b4-77e4-b670-55fba153041f_1_926_148_719_649_0.jpg)
+
+ Figure 2: After students log in, they are shown the landing page. Modules are divided into two columns, with required sections on the left and optional (extra credit) sections on the right.
+
+ #### 2.2.2 Digital Sketch Recognition in the Classroom
+
+ Sketching activities in the classroom have pedagogically been linked to enhanced student creativity and learning [35, 37, 41, 52, 53]. Studies have confirmed that information retention and learning outcomes are significantly improved when engaging in drawing and writing activities vs. using a keyboard as the primary input modality [35]. Sketch-based learning tools have been linked to a higher retention of information and improved skill compared against students who do not learn with sketch-based activities [23, 54].
+
+ Early gesture recognition systems developed by Rubine [47] have led to improved recognition systems, including template-matching algorithms from the "Dollar" family of recognizers [3, 4, 50, 51, 55] that produced lightweight recognition systems easily added to existing software. The "Dollar" recognizers perform classification tasks by using different methods of calculating distance from user-generated input compared against several samples of trained data. Despite these recognizers being used for classification rather than grading sketch accuracy, we use this work as a basis for our recognition system. Both feature-based classification techniques and template matching techniques were later expanded into more robust systems for scaffolded recognition via systems like PaleoSketch [42] and LADDER [24], the second of which is notable for its integration of domain-specific shapes to better describe relationships between sketch properties to assist in recognition. More recent works like nuSketch [19] and COGSketch [18] integrate sketch recognition algorithms into educational tools to assist with the learning experience to measurable success.
+
+ Mechanix [38, 49], Newton's Pen [32] and Newton's Pen II [31], and Physics Book [16] are systems specifically written to leverage the educational advantage of drawing and sketching in the core interactions of their tools. Indeed, these systems serve as the primary conceptual basis from which FossilSketch is designed. We aimed to adapt the educational techniques presented by these tools to the domain of micropaleontology in the classroom. This led to a variety of changes and design considerations in the teaching approach outlined in the next section.
+
+ ![01963e5c-86b4-77e4-b670-55fba153041f_2_148_149_1488_481_0.jpg](images/01963e5c-86b4-77e4-b670-55fba153041f_2_148_149_1488_481_0.jpg)
+
+ Figure 3: Overview of the activities in FossilSketch. a) is an example of an Ostracoda Orientation Game; b), d) and e) are examples of Foraminifera Matching Games; c) is a cropped screenshot of one of the modules from the FossilSketch landing page. Red arrows indicate which sub-figure belongs to which game; the arrows are not part of the FossilSketch interface and are for illustrative purposes only.
+
+ ## 3 INTERFACE DESIGN
+
+ ### 3.1 Design Considerations
+
+ FossilSketch is a web-based educational tool for teaching students techniques for identifying microfossils. Educational materials for FossilSketch were developed to supplement various geoscience courses in the College of Geosciences at [author institution redacted for review]. Traditionally, undergraduate students learn about micropaleontology through lectures, diagrams, specimens viewed through a stereoscope, and hand-sized models in upper-level courses for geology majors. To allow for comparison between traditional and FossilSketch-based classes, we developed analogous educational materials for both groups. FossilSketch educational materials include the following: 1) educational videos; 2) instructional mini-games; 3) practice exercises; and 4) assessments. All four types of activities consist of content specifically created for FossilSketch and tailored to support the educational exercises in traditional and FossilSketch-based courses.
+
+ Exercises were developed based on the course learning objectives, the microfossil collections available, and the expertise of [co-author names redacted for review]. The level of difficulty and number of activities varied depending on whether the course is lower or upper division and whether the course primarily serves geoscience or non-geoscience majors. The landing page for each course also varied depending on the teaching goals and the activities assigned to students.
+
+ In Fall 2021, FossilSketch was deployed in Geol 208 ("Life on a Dynamic Planet"), a lower division undergraduate course where most students are not geology majors. Students were given access to FossilSketch 5 days before the in-person laboratory session, during which one hour of laboratory time was devoted to FossilSketch activities. Students were required to complete activities for Foraminifera and could complete the Ostracoda activities, included in a separate column of modules, for extra credit.
+
+ ### 3.2 Interface Description
+
+ #### 3.2.1 Landing Page
+
+ The FossilSketch website initially prompts new and returning users to log in with their credentials. To ensure data integrity, new user registration is limited via a registration code assigned to each group of students who are part of the study, with each group being assigned a different code. Test accounts and external evaluators were assigned special login credentials and their activities were not recorded as part of the data collection.
+
+ After the participants log in, modules are listed in the order in which they are meant to be completed. Modules were added, modified, or removed depending on the class or activity in which FossilSketch was deployed. The landing page used in our current study is shown in Figure 2.
+
+ The self-contained nature of the exercises and the flexibility of the landing page interface offer the versatility of adding new exercises and rearranging the website experience depending on the course learning objectives.
+
+ #### 3.2.2 Educational Videos
+
+ Educational videos were created specifically for FossilSketch and were written to provide introductory information to help contextualize concepts covered in the rest of FossilSketch's activity types. When users click on these modules, an overlay with an embedded YouTube link is displayed. Students are free to change playback with the standard embedded YouTube video controls, and the overlay can be dismissed at any time by clicking outside of the video area. No progress data is recorded for this type of activity.
+
+ FossilSketch is intended to augment instructor lectures, meaning the videos are not intended to serve as a replacement for lecture material as is usually the case with typical instructional videos in an online learning interface. The FossilSketch system uses instructional videos to provide the information necessary for students to engage with the rest of the modules if they have not yet received instructor lectures, while at the same time emphasizing the concepts most directly relevant to the activities if they have attended in-depth lectures in the classroom.
+
+ #### 3.2.3 Instructional Mini-Games
+
+ FossilSketch integrates various kinds of interactive instructional tools. In order to improve student comprehension of microfossil identification, we broke identification tasks into small mini-game tasks. Students were able to repeat tasks for mastery. Each mini-game consists of one or more types of interactions intended to highlight the visual-morphology aspect of learning about microfossil identification.
+
+ Matching Games require the participants to match morphological features, such as outline shape for Ostracoda, or morphotype and type of chamber arrangement for Foraminifera. At the beginning of the game the students are presented with a reference image that lists each morphotype along with a sketched example, and students can return to this reference image when needed by clicking on the zoomed-out image in the bottom right corner of the screen. When the game starts, the screen displays a small number of draggable "discs" or rectangular "cards" with actual microfossil photomicrographs that the user can move into slots with sketched categories for each feature used in the game. At the moment, four different mini-games use this kind of interaction: Ostracoda lateral outline identification, and Foraminifera aperture, chamber arrangement, and morphotype identification.
+
+ ![01963e5c-86b4-77e4-b670-55fba153041f_3_173_168_1447_644_0.jpg](images/01963e5c-86b4-77e4-b670-55fba153041f_3_173_168_1447_644_0.jpg)
+
+ Figure 4: Menu of the morphotype ID exercises. Students pick from any of the unidentified morphotypes marked with a "?", and afterwards are shown their performance on a 3-star rating system.
+
+ All matching games include three rounds, with each round contributing to a final star score. The Foraminifera aperture and chamber arrangement mini-games randomly pull images of Foraminifera from the database for matching to the corresponding aperture and chamber arrangement types, with each round of the game having four cards to match. In the morphotype mini-game, the number of draggable items and slots increases from four in the first round to eight in the third round to increase difficulty. Students receive a star rating from zero to three based on how many rounds they answered correctly on the first attempt.
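+
+ A minimal sketch of this scoring rule (our names, not the FossilSketch source):
+
+ ```python
+ # One star per round solved correctly on the first attempt,
+ # over the three rounds of a mini-game.
+ def star_rating(first_try_correct):
+     """first_try_correct holds one boolean per round."""
+     return sum(first_try_correct)
+
+ assert star_rating([True, True, False]) == 2  # two of three rounds on first try
+ ```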
+
+ Orientation Games integrate a rotation interaction to help students gain an understanding of how to correctly orient the ostracod valve for identification. An ostracod valve has four sides: the dorsal, ventral, posterior, and anterior margins. This game starts with a general description of each of these margins to help students gain an intuition for how to identify each side of an ostracod. The user is tasked with rotating an ostracod into position with the dorsal side up and all of its sides correctly labeled. To simplify the interaction, students rotate in one direction, 90 degrees at a time, by clicking or tapping once on the ostracod displayed in the center of the screen. When the student believes that the ostracod is oriented correctly, they submit their answer by selecting the "Finished" button at the bottom center of the screen.
+
+ Like the matching games, orientation games are divided into three rounds. In this case, each round consists of one ostracod valve that needs to be rotated into the correct orientation. Answers are marked "correct" if they are rotated correctly the first time the "Finished" button is clicked. As in the matching games, students need to correct an incorrect answer to move on to the next round, but the answer will still be marked incorrect. Students are encouraged to use the knowledge gained by correcting their wrong answer to try the exercise again to receive full credit for their answers and receive a 3-star rating.
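+
+ The correctness check reduces to modular arithmetic over quarter turns; a small sketch (hypothetical names) is:
+
+ ```python
+ # Each click/tap rotates the valve 90 degrees in one direction; the answer is
+ # correct if the valve is dorsal-side-up when "Finished" is first clicked.
+ def correct_on_first_submit(clicks, initial_offset):
+     """initial_offset: starting rotation in 90-degree steps from dorsal-up."""
+     return (initial_offset + clicks) % 4 == 0
+
+ assert correct_on_first_submit(clicks=3, initial_offset=1)  # 1 + 3 turns -> upright
+ ```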
+
+ #### 3.2.4 Identification Exercises
+
+ In micropaleontology, microfossils are picked from sediment samples, and the obtained variety of different species represents an assemblage characteristic of the sample that may indicate the environmental setting or geologic age of the sample. A micropaleontologist would identify the species of microfossils in this assemblage based on their morphology, or their characteristic features. One of the goals of this interface is to demonstrate to students the various applications of microfossils in the geosciences. Primarily, FossilSketch offers a scaffolded learning experience to guide students through the steps needed to identify microfossils and their morphological characteristics.
+
+ For the undergraduate course Geol 208, students identified foraminiferal morphotypes and Ostracoda genera (as extra credit). Students are first presented with the menu depicted in Figure 4. Once a specimen is chosen, the Foraminifera morphotype identification steps, shown in Figure 5, are the following: 1) sketch the outline of the foraminifer image on the left; 2) sketch the outline of the foraminifer image in the center; 3) choose the overall shape of the organism from a menu; 4) choose the type of chamber arrangement from the menu; 5) find and click on the aperture location in the center image; 6) identify a morphotype based on the selected features. The Ostracoda genera identification exercise steps are shown in Figure 6 and include: 1) sketch the maximum length of the valve; 2) sketch the maximum height of the valve; 3) identify right vs. left valve; 4) sketch the outline of the ostracod valve; 5) choose the type of outline from the menu; 6) measure the approximate size of the valve and choose the size range from the menu; 7) choose the types of ornamentation; 8) identify an ostracod genus based on the selected features.
+
+ The types of interactions within each exercise are described below:
+
+ Sketching interactions (steps 1-2 for Foraminifera, and steps 1-2 and 4 for Ostracoda) help students retain and understand the various shapes and outlines they observe in different microfossils. It is the primary method of interaction after which the project is named. Sketching interactions integrate functionality from a library called paper.js to deliver flexible drawing interactions. Although the system is intended to be used with styli and touch to most naturally resemble a sketching activity, it is also possible to draw with a mouse or trackpad. Drawing interactions are usually integrated as the first steps of both kinds of identification exercises, as the overall shape of the sample is critical in identifying the microfossil.
+
+ ![01963e5c-86b4-77e4-b670-55fba153041f_4_156_168_1483_550_0.jpg](images/01963e5c-86b4-77e4-b670-55fba153041f_4_156_168_1483_550_0.jpg)
+
+ Figure 5: Step-by-step morphotype ID exercise. Starting at the top-left screen and ending at the bottom-right, it includes the following steps: 1) sketch the left view of the organism, 2) sketch the middle view, 3) pick the overall shape, 4) pick the chamber arrangement, 5) click on the area of the aperture location, and 6) draw your conclusion - identify the Foraminifera morphotype.
+
+ The FossilSketch system checks for correctness using a template-matching recognition heuristic. The template recognizer coded specifically for FossilSketch uses the Hausdorff-distance template matching technique as a baseline, implemented to act as a shape accuracy algorithm. We first resample both the template and the input sketch to a lower sampling rate with roughly equidistant points. The formula for calculating the interspace distance is:
+
+ $$
+ S = \frac{\sqrt{(x_m - x_n)^2 + (y_m - y_n)^2}}{c}, \qquad c = 256 \tag{1}
+ $$
+
+ where $c = 256$ is a constant empirically derived to adjust the spacing between the points for optimal calculation of the distance metric. With the distance calculated, the sketch is resampled using the technique outlined in Algorithm 1.
+
+ Algorithm 1 Resampling Technique
+
+ ---
+
+ Require: Point list path, distance $S$
+
+ Ensure: Re-sampled point list out
+
+ $D \leftarrow 0$
+
+ for $i$ in path do
+
+ BetweenDist $\leftarrow \sqrt{(x_{i+1} - x_{i})^{2} + (y_{i+1} - y_{i})^{2}}$
+
+ $D \leftarrow D +$ BetweenDist
+
+ if $D > S$ then
+
+ $D \leftarrow$ BetweenDist
+
+ out $\leftarrow$ new point $(x_{i}, y_{i})$
+
+ end if
+
+ end for
+
+ ---
+
+ This iterates through each point in the provided path and gradually adds the distance between the current point and the next until the predetermined distance $S$ is reached, which is where the point will be placed. The algorithm repeats this process for every point in the input path.
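+
+ A runnable sketch of this resampling step is below; the names are ours, not the FossilSketch source, and since Equation 1 does not pin down which points $m$ and $n$ are, we take $S$ as the stroke's total arc length divided by $c$ as one plausible reading.
+
+ ```python
+ # Resample a stroke (a list of (x, y) tuples) to roughly equidistant points
+ # spaced S apart, following Eq. 1 + Algorithm 1.
+ import math
+
+ def resample(path, c=256.0):
+     # Assumption: S = total arc length / c (Eq. 1 leaves m, n unspecified).
+     seg = [math.hypot(x2 - x1, y2 - y1)
+            for (x1, y1), (x2, y2) in zip(path, path[1:])]
+     s = sum(seg) / c
+     out, d = [], 0.0
+     for ((x1, y1), _), between in zip(zip(path, path[1:]), seg):
+         d += between
+         if d > s:          # place a point once the accumulated distance exceeds S
+             d = between    # reset the accumulator as in Algorithm 1
+             out.append((x1, y1))
+     return out
+ ```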
+
+ We then iterate through each point in the input sketch, compare it with the corresponding point in the template sketch, and calculate the Euclidean distance between the two. The total distance is calculated across all the compared points, and the cumulative sum is the overall "distance" between a template and the student input (see Figure 7). If the average deviation of the points is greater than the pixel width of the canvas divided by a constant, we determine that the input sketch is too different from the template sketch. This constant was empirically determined after internal testing to match the desired student experience; students are meant to provide a relatively accurate, but not perfect, recreation of the template. This algorithm is outlined in Algorithm 2.
+
+ Algorithm 2 Compare Sketches
+
+ ---
+
+ Require: Student Spath, template Tpath
+
+ Ensure: Boolean result (True if the sketch deviates too far from the template)
+
+ totalDeviation $\leftarrow 0$
+
+ for $i$ in Spath do
+
+ closestDist $\leftarrow$ INF
+
+ for $j$ in Tpath do
+
+ tempDist $\leftarrow$ distance between $Spath_{i}$ and $Tpath_{j}$
+
+ if tempDist < closestDist then
+
+ closestDist $\leftarrow$ tempDist
+
+ end if
+
+ end for
+
+ totalDeviation $\leftarrow$ totalDeviation $+$ closestDist
+
+ end for
+
+ avgDeviation $\leftarrow \frac{\text{totalDeviation}}{|\text{Spath}|}$
+
+ cwidth $\leftarrow$ pixel width of canvas
+
+ if avgDeviation $> \frac{\text{cwidth}}{70}$ then
+
+ result $\leftarrow$ True
+
+ else
+
+ result $\leftarrow$ False
+
+ end if
+
+ ---
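+
+ A runnable sketch of Algorithm 2 (our names, not the FossilSketch source):
+
+ ```python
+ # Average nearest-point deviation between the student's resampled stroke and
+ # the template, thresholded against the canvas width (canvas_width / 70).
+ import math
+
+ def too_different(spath, tpath, canvas_width):
+     """Return True when the student sketch deviates too far from the template."""
+     total_deviation = 0.0
+     for sx, sy in spath:
+         # Distance from this student point to its nearest template point.
+         total_deviation += min(math.hypot(sx - tx, sy - ty) for tx, ty in tpath)
+     avg_deviation = total_deviation / len(spath)
+     return avg_deviation > canvas_width / 70
+ ```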
+
+ The template sketches are provided by [co-author names redacted for review] and coded directly into each foraminifer or ostracod image. For every foraminifer in FossilSketch, the database contains template sketch data for the outline of its left view, its center view, and its largest chamber, as well as coordinates for the location of the opening, the aperture. The last item is used in the interaction labeled "Pointing interactions" in this section. For every ostracod in the database, there is template sketch data for the outline, maximum length, and maximum height.
+
+ ![01963e5c-86b4-77e4-b670-55fba153041f_5_262_166_1259_714_0.jpg](images/01963e5c-86b4-77e4-b670-55fba153041f_5_262_166_1259_714_0.jpg)
+
+ Figure 6: Step-by-step ostracod ID exercise. Starting at the top-left screen and ending at the bottom-right, it includes the following steps: 1) draw the max length of the ostracod, 2) draw the max height, 3) identify if it is a left or right valve, 4) sketch the outline of the ostracod, 5) choose the overall shape, 6) determine the length, 7) choose whether the valve has ornamentation and what the ornamentation features are, 8) draw your conclusion - identify the Ostracoda genus.
+
+ Identification interactions (steps 3-5 for Foraminifera, and steps 3, 5-6 for Ostracoda) are presented to students as a horizontal multiple-choice menu along the bottom of the screen, and the student is asked to identify one of several characteristic features of the microfossils. For instance, the student might be asked "what is the overall shape of the organism?" and the possible answers might be "vase-like", "convex", "low-conical", "spherical" and "arch" among others. With each option, a sample sketched outline of each shape is shown, but it is important to note these are sketched examples and not photorealistic depictions of the choices. The student is tasked with remembering the particular physical properties of each characteristic feature rather than simply matching the pictures with the closest choice. Of these options, one is the correct answer. In this part of the exercise, the student does not receive immediate feedback on the correctness of this particular question, since all of these answers are summarized for the student to use to identify the foraminifer's morphotype or ostracod's genus.
+
+ Pointing interactions (step 5 for Foraminifera) are simplified forms of "sketching interactions" that require students to click once in a general area of interest, and FossilSketch checks if the identified location is correct. Specifically, this interaction is used to identify the general location of the aperture of a given foraminifer. The student is asked to click once in the region where they believe the aperture is. Each foraminifer in the FossilSketch database contains data on a rectangular region that points to the general area of its aperture. When the student clicks "Submit" after identifying the aperture area, FossilSketch checks to see if the location of the click is within the provided rectangular region. If it is, it is marked as correct. The location of the aperture is only used for identifying a foraminifer's morphotype.
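+
+ The underlying check is a simple point-in-rectangle test; a minimal sketch (hypothetical names) is:
+
+ ```python
+ # The click is correct if it falls inside the aperture's stored rectangular region.
+ def click_in_region(click, region):
+     """click: (x, y); region: (left, top, width, height) from the database."""
+     x, y = click
+     left, top, width, height = region
+     return left <= x <= left + width and top <= y <= top + height
+
+ assert click_in_region((120, 80), (100, 60, 40, 40))
+ ```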
+
+ ![01963e5c-86b4-77e4-b670-55fba153041f_5_1023_1121_526_433_0.jpg](images/01963e5c-86b4-77e4-b670-55fba153041f_5_1023_1121_526_433_0.jpg)
+
+ Figure 7: To grade answers, FossilSketch resamples and overlays both the student input and the instructor-provided sketch, and a total distance metric is calculated by summing the Euclidean distance between sampled points.
+
+ The summary screen (step 6 for Foraminifera, and step 8 for Ostracoda) appears as the last step of each identification exercise, asking the student to draw on their observations and make the final selection of the foraminiferal morphotype or Ostracoda genus. Each foraminiferal morphotype or Ostracoda genus has a list of characteristic features, and based on the student's answers, each feature correctly marked during the identification steps receives a blue check-mark. Choices of foraminiferal morphotypes and Ostracoda genera are ranked by the highest number of properties matching the student's answers. If the student's answers are correct, the choice is easy since it has the most check-marks and is the first item listed. Additionally, a picture of each choice is included, letting students double-check whether their best-ranked choice is the most accurate. This system allows students to develop self-assessment skills to see if their choices match up with any given morphotype or genus. At any time students are able to revisit any of the previous steps, and this final choice is a good motivation to do so if they notice their prior choices did not yield a definitive conclusion. It also allows students to see different properties that might be common between some morphotypes or genera, but each foraminifer and ostracod has only one correct final answer.
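+
+ A minimal sketch of this ranking (hypothetical data layout and feature names, not the FossilSketch source):
+
+ ```python
+ # Candidates are ordered by how many of their characteristic features match
+ # the answers the student selected in the earlier identification steps.
+ def rank_candidates(candidates, answers):
+     """candidates: name -> set of features; answers: the student's feature set."""
+     return sorted(candidates,
+                   key=lambda name: len(candidates[name] & answers),
+                   reverse=True)
+
+ candidates = {"cylindrical-tapered": {"elongate", "uniserial", "round aperture"},
+               "spherical": {"globular", "single chamber"}}
+ print(rank_candidates(candidates, {"elongate", "uniserial"}))
+ # ['cylindrical-tapered', 'spherical']
+ ```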
+
+ #### 3.2.5 Assessment Exercise
+
+ Once the students gain mastery of microfossil identification through practicing the mini-games and identification exercises, they proceed to the final type of exercise, an assessment where they can apply their knowledge to reconstruct environments from an assemblage of different microfossils. In this exercise, the students view microfossil assemblages with approximately 20 foraminifer or ostracod individuals and identify the foraminiferal morphotypes or Ostracoda genera present. These assemblages imitate an actual microfossil "slide", as seen under a microscope, that contains an assemblage of Foraminifera or Ostracoda. Students are asked to identify how many of each foraminiferal morphotype or ostracod genus are present in the slide. Before students start working on the exercise, they can view a screen with a summary of the information on foraminiferal morphotypes or ostracod genera and how they can be used to interpret environmental properties, such as the oxygenation or salinity of the water. This exercise includes three rounds and a summary. The student needs to identify the different genera or morphotypes and select the number of each from the menu on the right side of the screen. It is intended that students will draw on their knowledge from the previous exercises to quickly identify the morphotypes or genera they see in these assemblages. For the ostracod assemblages, the menu to select from includes both genera that are and genera that are not present in the assemblage. For the foraminiferal morphotypes, the assemblage includes two morphotypes to select from and an "Other" category. To answer correctly, the student must provide a correct number for all categories in an assemblage, that is, the two morphotypes and "Other" for Foraminifera, or the genera for Ostracoda.
+
+ Both assemblage exercises conclude with a summary page where the student is asked to make an overall conclusion about the environment based on the assemblages. For instance, the Foraminifera morphotype assemblage exercise uses assemblages to determine bottom water oxygenation. It has been shown that in environments where cylindrical- and flat-tapered morphotypes are found in abundance, the environments usually have low oxygenation [30]. The students are asked to rank each assemblage by relative oxygenation level. They should be able to do so when they consider the relative abundance of cylindrical-tapered and flat-tapered morphotypes they found in each of the three assemblages. Similarly for Ostracoda genera, students count the number of individuals of each genus and determine the bottom water salinity indicated by each of the assemblages. These exercises assess the microfossil identification skills learned and honed across all exercises of the FossilSketch system, and show how microfossil research is applied.
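+
+ As a small illustration of the oxygenation ranking described above (our names and data layout, not the FossilSketch source):
+
+ ```python
+ # Rank assemblages from lowest to highest inferred bottom-water oxygenation:
+ # a higher share of cylindrical-/flat-tapered morphotypes suggests lower oxygen [30].
+ def rank_by_low_oxygen(assemblages):
+     """assemblages: name -> {morphotype: count}."""
+     def indicator_share(counts):
+         total = sum(counts.values())
+         return (counts.get("cylindrical-tapered", 0)
+                 + counts.get("flat-tapered", 0)) / total
+     return sorted(assemblages,
+                   key=lambda a: indicator_share(assemblages[a]), reverse=True)
+
+ assemblages = {"A": {"cylindrical-tapered": 12, "spherical": 8},
+                "B": {"flat-tapered": 2, "spherical": 18}}
+ print(rank_by_low_oxygen(assemblages))  # ['A', 'B']
+ ```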
+
+ ## 4 EVALUATION
+
+ FossilSketch was deployed as part of a laboratory exercise in a class titled "Life on a Dynamic Planet" in Fall 2021 at the investigator's university. [co-author name redacted for review] is the course's instructor for this class, and she introduced the students to the FossilSketch system. Students were instructed to watch the educational videos before coming to class. During the lab, they went through the Foraminifera mini-games, morphotype identification, and assessment exercise modules. Ostracoda modules were offered as extra credit.
+
+ ![01963e5c-86b4-77e4-b670-55fba153041f_6_1013_156_542_385_0.jpg](images/01963e5c-86b4-77e4-b670-55fba153041f_6_1013_156_542_385_0.jpg)
+
+ Figure 8: Distribution of student ages among those who consented to have their age information included in the study.
+
+ ### 4.1 Design Study
+
+ Over the course of two weeks, a total of 32 students were asked to complete their assignment. All students were instructed to use the FossilSketch system as part of their assignment, but consenting to provide us with data (surveys, focus group, and sketch data) was fully optional. A total of 22 students consented to provide us with data on their usage of FossilSketch for analysis.
+
+ #### 4.1.1 Study Population and Informed Consent
+
+ This study conformed to the university's Institutional Review Board protocol, IRB2019-1218M (expiration date 02/09/2023), ensuring the data is published only on users who gave us informed consent. Consent forms were distributed on paper during the introductory portion of the laboratory session. Of the 22 students who gave consent to have their demographic information published, 13 provided data on their race/ethnicity: 8 were White, 3 were Hispanic, 1 was Black, and 1 was Asian. Student ages ranged from 18 to 24, with the specific age distribution shown in Figure 8.
+
+ #### 4.1.2 Data Collection Protocol
+
+ The first module in FossilSketch has students complete a pre-study questionnaire that requests basic demographic information, information on prior experience with micropaleontology and the topics covered in the FossilSketch interface, interest and self-assessment in micropaleontology skills, and interest in future careers in micropaleontology. Similarly, the final module in FossilSketch is a post-study questionnaire that repeated the questions regarding self-assessment of skill and interest in future careers involving micropaleontology, and gathered feedback on the use of FossilSketch. Most of the questions used a five-point Likert scale, and students could elaborate on their feedback in free-response forms. At the conclusion of the study, students were asked for feedback on their experience with the FossilSketch UI as part of informal focus group interviews with a subset of participants who agreed to take part in them.
+
+ FossilSketch tracks student performance by recording a student's "star rating" for each submitted exercise in an off-site grade-book SQL database. As a reminder, the final score of each exercise in FossilSketch is a rating ranging from one to three stars, with one being the most error-prone performance and three being error-free. Students are encouraged to repeat exercises if they did not receive three stars, and the website records every completed attempt in the grade-book database. This information lets us gauge overall performance in student activity on a per-exercise basis, and combining these responses with the more qualitative responses from students during focus group interviews and post-study questionnaires lets us analyze student interest.
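+
+ The paper only specifies that attempts are stored in an off-site SQL grade-book; a hypothetical sketch of such a recording step (the schema and names are ours) might look like:
+
+ ```python
+ # Record every completed attempt with its star rating (illustrative schema only).
+ import sqlite3
+
+ conn = sqlite3.connect(":memory:")
+ conn.execute("""CREATE TABLE IF NOT EXISTS submissions (
+     student_id TEXT, exercise TEXT, stars INTEGER,
+     submitted_at TEXT DEFAULT CURRENT_TIMESTAMP)""")
+ conn.execute("INSERT INTO submissions (student_id, exercise, stars) VALUES (?, ?, ?)",
+              ("s01", "morphotype_id", 3))
+ conn.commit()
+ ```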
+
+ ![01963e5c-86b4-77e4-b670-55fba153041f_7_252_173_504_384_0.jpg](images/01963e5c-86b4-77e4-b670-55fba153041f_7_252_173_504_384_0.jpg)
+
+ Figure 9: Visualization of the star ratings of submissions across all students.
+
+ | Group | Activity | 1 star | 2 stars | 3 stars |
+ |---|---|---|---|---|
+ | Foraminifera | Identification | 15 | 25 | 109 |
+ | | Morpho Match | 0 | 6 | 5 |
+ | | Chamber Match | 0 | 7 | 8 |
+ | | Assemblage | 0 | 5 | 6 |
+ | Ostracoda | Identification | 0 | 4 | 59 |
+ | | Orientation | 0 | 0 | 6 |
+ | | Outline | 0 | 6 | 3 |
+ | | Assemblage | 0 | 13 | 0 |
+ | Totals | | 15 | 66 | 196 |
+
+ Table 1: Number of student submissions for each FossilSketch activity, by star rating.
+
+ ### 4.2 Results
+
+ Study data can be summarized as "Quantitative" and "Qualitative", with the former being the recorded performance metrics found in the grade-book SQL database and the latter summarizing student sentiment about the FossilSketch user experience.
+
+ #### 4.2.1 Quantitative
+
+ Modules that were tracked included all exercises and assessments, but activity on viewing videos was not tracked. However, the FossilSketch layout displays the video modules first, and instructors verbally encouraged students to complete the site's modules in order. The activities that were tracked in the grade-book SQL database are: the foraminifera chamber matching game, foraminiferal morphotype matching game, morphotype identification exercise, assessment (paleoreconstruction using morphotypes), ostracod orientation game, ostracod outline matching game, ostracod genera identification exercise, and the assessment (ostracod assemblage exercise). Details on the exercises can be found in Sections 3.2.3 and 3.2.4. As a reminder, in this activity students were only required to complete the foraminifera exercises, with the ostracod exercises existing as optional extra credit activities.
+
+ Table 1 summarizes the student submission data during our study. Figure 9 shows the star ratings of submissions for the Morphotype Identification exercise. As expected, ostracod exercises received fewer submissions due to the extra credit nature of the exercises. However, it should also be pointed out that both ID exercises received a much higher volume of submissions because the module required at least 3 submitted foraminiferal morphotypes (out of a possible 17) and 3 submitted ostracods (out of a possible 10), and if a student submitted one of the three but decided to retry for a better score, the retry was counted as another submission. Across the 32 students, this works out to an average of about 4.7 ID exercise submissions per student for foraminiferal morphotypes (149 submissions), and 7 submissions per student for ostracods among the students who chose to complete the extra credit (63 submissions across the 9 students who completed the extra credit modules).
+
+ #### 4.2.2 Qualitative
+
+ Surveys and lab assignment feedback. The following feedback was requested from students: 1. On a scale of 1 to 5, with 1 being completely disagree and 5 being completely agree, how would you respond to the statement "I enjoyed the micropaleontology activities in this class." Please provide at least one example to explain your answer.
+
+ The most common rating the students gave was 3 (n = 11). Most of the students pointed to some software bugs, which is likely why few people rated it 4 or 5. Students' open-ended comments indicated that they: "enjoyed the identification aspect of the activities that allowed me to investigate and figure out where a sample fossil was found." "it was very buggy and that made it frustrating but the overall system was a good way to learn."
+
+ 2. Did you work on the micropaleontology activities outside of class (other than class time)? If so, please explain what you did.
+
+ Approximately 50% of the students completed activities outside the class. Students' answers indicated that many of them used FossilSketch to finish the lab assignment at home: "I did not finish in class so I completed the assignment at home." "yes, I watched videos and checked my lab answers." "Yes, I just finished the lab on my own time."
+
+ 3. How did you feel, typically, while you were working on micropaleontology activities in this class?
+
+ Eleven students provided an answer; five indicated that the activities were enjoyable, and six felt they were confusing since they did not have prior knowledge.
+
+ 4. Do you think the micropaleontology activities in this class are and will be useful to you? How so?
+
+ More than half of the students who provided answers (n = 10) said that the micropaleontology activities in this class are useful for their future work and careers. The following quotes were associated with these answers: "yes, I am a geology major so I will likely use this later in school and in my career." "Yes, because I would like to go into paleontology as a career (although not micropaleontology), so it would be good to have prior knowledge in these areas."
+
+ 5. When did you feel uncertain or unsure about something while working on micropaleontology activities in this class? How did you deal with this uncertainty?
+
+ The most common answer (n = 7) was that the student went back to FossilSketch to look for answers.
+
+ 6. What was helpful in FossilSketch activities?
+
+ Students almost unanimously (n = 12) said that the videos and mini-games were very helpful. The following quotes were associated with this question: "The videos and games."; "yes, I watched videos and checked my lab answers." "It was difficult to remember everything, so I went back in the videos and games." "Practice with identification" "Videos helped a lot with the lab questions." "YT videos + mini games" "The games were quite difficult, I rewatched videos and replayed the games until I was confident." "The videos were the most informative" "The videos and minigames were very helpful in explaining the different morphotypes".
294
+
295
+ <table><tr><td>Resource Type</td><td>Count</td></tr><tr><td>Rewatched FossilSketch videos</td><td>16</td></tr><tr><td>Retried FossilSketch games</td><td>14</td></tr><tr><td>Retried Morphotype ID games</td><td>12</td></tr><tr><td>Collaborated with others</td><td>9</td></tr><tr><td>Used in-person handouts</td><td>5</td></tr><tr><td>Other</td><td>2</td></tr></table>
296
+
297
+ Table 2: A count of the different resources that students used to complete their lab assignment.
298
+
299
+ Additionally, when completing their lab assignment, students were asked what resources they used to answer questions about microfossils. Table 2 shows that students used FossilSketch activities to complete their lab assignment, with the videos, mini-games, and morphotype ID being the most common.
300
+
301
+ Focus group feedback. In the focus group discussions, students provided the following feedback:
302
+
303
+ 1. How was your experience using FossilSketch?
304
+
305
+ "Good. The website was easy to navigate. There were no crushes and bugs. Learning material was easy to access. I like how we could learn with the videos, but I do wish that videos also had slides to go back to individually rather watching the entire video."; "It was good, the videos were good, the games were cool."; "I liked the games and that we could re-try them until we've learned."
306
+
307
+ 2. Anything you disliked? "I wish we had feedback to know what we did wrong instead of just saying 'it's wrong'."; "Sometimes it was buggy, zooming in and out didn't work."
308
+
309
+ 3. If you were to add new features to FossilSketch, what would it be?
310
+
311
+ "The games need hints for correct answers."; "Review sheet for the videos would help."; "For the stars, add percent, or partial stars, like 3.5.”
312
+
313
+ 4. If you were to take another class would you want to use FossilSketch, or be in traditional class without software?
314
+
315
+ "Prefer to use software, creative applications make learning easier."; "FossilSketch could be supplementary to traditional classes. The best would be to combine."
316
+
317
+ 5. What was your favorite activity in FossilSketch?
318
+
319
+ "Morphotypes identification game."; "I like the extra credit (Ostracoda) activities, they were easier than the main ones."; "I liked the videos, they were the most informative."
320
+
321
+ 6. For sections with mini-games, morphotype identification and the paleoreconstruction assessment, the first time you worked with it, did you know what to do? Was it intuitive?
322
+
323
+ The majority of students reported that it was intuitive and that they did not have any problems navigating between the different steps of each section.
324
+
325
+ ### 4.3 Discussion
326
+
327
+ We observed a measurable amount of student interest across FossilSketch submissions overall via a combination of analyzing exercise submissions and qualitative results, although we also note there were varying degrees of interest across individual exercises and games. Morphotype and genus ID exercises for both types of microfossils comprised the highest number of submissions by a wide margin, with lower numbers of submissions in the template matching and environmental reconstruction games for the required portion of the lab assignment. There were a total of 15 submissions for the chamber matching game and 11 submissions for the environmental reconstruction, out of a total of 32 participants who used the system in the class (see Table 1). For morphotype ID, the 149 total submissions are partially explained by the requirement of completing 3 submissions as part of the lab exercise, but that alone does not account for all submissions, since students submitted an average of 4.66 times. One possible inference is that students felt encouraged to complete the ID exercises in particular because the design of these activities was more appealing, an observation we found important because these exercises are the most complex in FossilSketch. As section 3.4 specifies, ID exercises consist of several interactions, including sketching, pointing, and completing multiple-choice questions over 6-8 separate steps, which offer cumulative observations about the morphotype or genus in question. By contrast, the matching games consist of one main interaction and do not involve the student drawing a conclusion. We believe the engaging design and applied problem solving implemented in the ID exercises can account for the increase in the number of total submissions and average submissions per student, well above the required three per student.
328
+
329
+ Qualitative feedback was overall positive, with various students indicating an intuitive user experience. Some students specifically mentioned the identification exercises as the activity they most enjoyed. Students rated the videos, games, and ID exercises as very useful when completing the lab assignment. Some students mentioned they found the games initially difficult, and others considered the subject of micropaleontology to be difficult in general, but they were able to improve their understanding of the subject by referring to the informational materials in FossilSketch, rewatching videos, and repeating exercises in the system. Students were also able to complete the lab assignment remotely at home, which would not have been possible in a traditional lab environment without FossilSketch. Table 2 lists the student answers for resources used to complete the lab assignment, with 42 of 58 answers (72%) using either FossilSketch videos, games, or ID exercises for assistance in their assignment.
330
+
331
+ The primary difficulty in interactions was the lack of scaling in the FossilSketch interface, which caused certain low-resolution or zoomed-in displays to cut off UI elements, making it difficult to complete the exercises. Some students would change the zoom level of their screen, which would result in the "bugs" that some students mentioned in their qualitative feedback. Some students also expressed disinterest in the system, largely because micropaleontology is not relevant to their major.
332
+
333
+ Overall, we observed that the proposed system was successful in providing an engaging and informative learning tool that students were interested in using on their own to complete the class's laboratory assignment. Generally positive feedback from students and a large number of submissions for the identification exercises suggest a positive overall learning experience and indicate that FossilSketch succeeds in our goal of an intuitive educational tool that can be used in tandem with in-class learning.
334
+
335
+ ## 5 FUTURE WORK
336
+
337
+ The modular design of FossilSketch provides flexibility in creating course-specific landing pages, so we will continue to iterate on the existing exercises for additional polish and to fix the bugs reported in the study. Additionally, we intend to implement an instructor interface that provides instructors with a login displaying their students' submissions and performance. This interface will also give instructors a system to create their own landing pages from within the website, allowing them to alter the order of exercises and to add or remove them. We also intend for this interface to allow instructors to add more Foraminifera and Ostracoda morphotypes and genera for the identification exercises. We expect that these additions will allow the system to be deployed in various classrooms with a large number of instructors, without the need for web developers to implement changes for each instructor's needs.
338
+
339
+ ## REFERENCES
340
+
341
+ [1] World Foraminifera Database. http://www.marinespecies.org/foraminifera/. Accessed: 2021-08-06.
342
+
343
+ [2] World Ostracoda Database. http://www.marinespecies.org/ostracoda/. Accessed: 2021-12-02.
344
+
345
+ [3] L. Anthony and J. O. Wobbrock. A lightweight multistroke recognizer for user interface prototypes. In Proceedings of Graphics Interface 2010, pp. 245-252. ACM, 2010.
348
+
349
+ [4] L. Anthony and J. O. Wobbrock. $N-protractor: A fast and accurate multistroke recognizer. In Proceedings of Graphics Interface 2012, pp. 117-120. ACM, 2012.
350
+
351
+ [5] Arizona State University. Virtual field trips. https://vft.asu.edu/, 2020. Last accessed: 2021-12-02.
352
+
353
+ [6] H. A. Armstrong and M. D. Brasier. Foraminifera. Microfossils, Second Edition, pp. 142-187, 2005.
354
+
355
+ [7] J. Athersuch, F. Banner, A. Higgins, R. Howarth, and P. Swaby. The application of expert systems to the identification and use of microfos-sils in the petroleum industry. Mathematical geology, 26(4):483-489, 1994.
356
+
357
+ [8] C. Bentley. Digital samples for online labs and virtual field experience. In GSA annual meeting. Seattle, WA, October 2017. https://gsa.confex.com/gsa/2017AM/meetingapp.cgi/Paper/298724.
358
+
359
+ [9] A. Bhat, G. K. Kasiviswanathan, C. Mathew, S. Polsley, E. Prout, D. Goldberg, and T. Hammond. An intelligent sketching interface for education using geographic information systems. In T. Hammond, A. Adler, and M. Prasad, eds., Frontiers in Pen and Touch: Impact of Pen and Touch Technology on Education, Human-Computer Interaction Series, chap. 11, pp. 147-163. Springer, Switzerland, 2017. https://doi.org/10.1007/978-3-319-64239-0_11.
360
+
361
+ [10] T. Bralower. Adapting an online course for a large student cohort. In GSA annual meeting. Seattle, WA, October 2017. https://gsa.confex.com/gsa/2017AM/meetingapp.cgi/Paper/298421.
362
+
363
+ [11] T. Bravo. Developing an online seismology course for alaska. In GSA annual meeting. Seattle, WA, October 2017. https://gsa.confex.com/gsa/2017AM/meetingapp.cgi/Paper/308093.
364
+
365
+ [12] L. Capotondi, C. Bergami, G. Orsini, M. Ravaioli, P. Colantoni, and S. Galeotti. Benthic foraminifera for environmental monitoring: a case study in the central adriatic continental shelf. Environmental Science and Pollution Research, 22(8):6034-6049, Apr 2015. doi: 10.1007/s11356-014-3778-7
366
+
367
+ [13] I. G. Carabajal, A. M. Marshall, and C. L. Atchison. A synthesis of instructional strategies in geoscience education literature that address barriers to inclusion for students with disabilities. Journal of Geoscience Education, 65(4):531-541, 2017. https://doi.org/10.5408/16-211.1.
368
+
369
+ [14] L. Carvalho, G. Fauth, S. B. Fauth, G. Krahl, A. Moreira, C. Fernandes, and A. Von Wangenheim. Automated microfossil identification and segmentation using a deep learning approach. Marine Micropaleontology, 158:101890, 2020.
370
+
371
+ [15] A. J. Cawood and C. E. Bond. erock: An open-access repository of virtual outcrops for geoscience education. GSA Today, 2019.
372
+
373
+ [16] S. Cheema and J. LaViola. Physicsbook: a sketch-based interface for animating physics diagrams. In Proceedings of the 2012 ACM international conference on Intelligent User Interfaces, pp. 51-60. ACM, Lisbon, Portugal, 2012.
374
+
375
+ [17] T. Cronin, L. Gemery, E. Brouwers, W. Briggs Jr, A. Wood, A. Stepanova, E. Schornikov, J. Farmer, and K. Smith. Modern arctic ostracode database. IGBP PAGES/WDCA contribution series number: 2010-081. ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/cronin2010/cronin2010.txt, 2010. Accessed: 2021-11-05.
376
+
377
+ [18] K. Forbus, K. Lockwood, M. Klenk, E. Tomai, and J. Usher. Open-domain sketch understanding: The nusketch approach. In AAAI Fall Symposium on Making Pen-based Interaction Intelligent and Natural, pp. 58-63. AAAI Press, Arlington, VA, 2004.
378
+
379
+ [19] K. Forbus, J. Usher, A. Lovett, K. Lockwood, and J. Wetzel. Cogsketch: Sketch understanding for cognitive science research and for education. Topics in Cognitive Science, 3(4):648-666, 2011.
380
+
381
+ [20] K. D. Forbus, M. Chang, M. McLure, and M. Usher. The cognitive science of sketch worksheets. Topics in cognitive science, 9(4):921-942, 2017.
382
+
383
+ [21] J. French, M. A. Segado, and P. Z. Ai. Sketching graphs in a calculus mooc: Preliminary results. In T. Hammond, A. Adler, and M. Prasad, eds., Frontiers in Pen and Touch: Impact of Pen and Touch Technology on Education, pp. 93-102. Springer International Publishing, Cham, 2017. doi: 10.1007/978-3-319-64239-0_7
386
+
387
+ [22] P. Frenzel and I. Boomer. The use of ostracods from marginal marine, brackish waters as bioindicators of modern and quaternary environmental change. Palaeogeography, Palaeoclimatology, Palaeoecology, 225(1-4):68-92, 2005.
390
+
391
+ [23] T. Hammond. Dialectical creativity: Sketch-negate-create. In Studying Visual and Spatial Reasoning for Design Creativity, pp. 91-108. Springer, Dordrecht, England, 2015.
392
+
393
+ [24] T. Hammond and R. Davis. Ladder, a sketching language for user interface developers. In Computers & Graphics, vol. 29-4, pp. 518-532. Elsevier, Amsterdam, The Netherlands, 2005. doi: 10.1016/j.cag.2005.05.005
394
+
395
+ [25] A. Holbourn, W. Kuhnt, M. Lyle, L. Schneider, O. Romero, and N. Andersen. Middle miocene climate cooling linked to intensification of eastern equatorial pacific upwelling. Geology, 42(1):19-22, 2014.
396
+
397
+ [26] A. Y. Hsiang, A. Brombacher, M. C. Rillo, M. J. Mleneck-Vautravers, S. Conn, S. Lordsmith, A. Jentzen, M. J. Henehan, B. Metcalfe, I. S. Fenton, et al. Endless forams: >34,000 modern planktonic foraminiferal images for taxonomic training and automated species recognition using convolutional neural networks. Paleoceanography and Paleoclimatology, 34(7):1157-1177, 2019.
398
+
399
+ [27] Z. Hughes, K. Johnson, R. Belben, C. Hughes, and R. Twitchett. Using museum collections to study brachiopod size change across extinction boundaries - taking advantage of mass digitization. In GSA annual meeting. Seattle, WA, October 2017. https://gsa.confex.com/gsa/2017AM/meetingapp.cgi/Paper/308718.
400
+
401
+ [28] R. W. Jones. Foraminifera and their Applications. Cambridge University Press, 2013.
402
+
403
+ [29] F. J. Jorissen, C. Fontanier, and E. Thomas. Chapter seven paleoceano-graphical proxies based on deep-sea benthic foraminiferal assemblage characteristics. Developments in Marine Geology, 1:263-325, 2007.
404
+
405
+ [30] K. Kaiho. Benthic foraminiferal dissolved-oxygen index and dissolved-oxygen levels in the modern ocean. Geology, 22(8):719-722, 1994.
406
+
407
+ [31] C. Lee, J. Jordan, T. F. Stahovich, and J. Herold. Newton's pen II: an intelligent, sketch-based tutoring system and its sketch processing techniques. In Proceedings of the International Symposium on Sketch-Based Interfaces and Modeling, pp. 57-65. ACM, Annecy, France, 2012.
408
+
409
+ [32] W. Lee, R. de Silva, E. J. Peterson, R. C. Calfee, and T. F. Stahovich. Newton's pen: A pen-based tutoring system for statics. Computers & Graphics, 32(5):511-524, 2008.
410
+
411
+ [33] K. Milliken, J. Barufaldi, E. McBride, and S.-J. Choh. Design and assessment of an interactive digital tutorial for undergraduate-level sandstone petrology. Journal of Geoscience Education, 51(4):381-386, 2003.
412
+
413
+ [34] R. Mitra, T. Marchitto, Q. Ge, B. Zhong, B. Kanakiya, M. Cook, J. Fehrenbacher, J. Ortiz, A. Tripati, and E. Lobaton. Automated species-level identification of planktic foraminifera using convolutional neural networks, with comparison to human performance. Marine Micropaleontology, 147:16-24, 2019.
414
+
415
+ [35] P. A. Mueller and D. M. Oppenheimer. The pen is mightier than the keyboard: Advantages of longhand over laptop note taking. Psychological science, 25(6):1159-1168, 2014.
416
+
417
+ [36] J. W. Murray. Ecology and applications of benthic foraminifera. Cambridge University Press, 2006.
418
+
419
+ [37] K. Nakakoji, A. Tanaka, and D. Fallman. "sketching" nurturing creativity: commonalities in art, design, engineering and research. In CHI'06 extended abstracts on Human factors in computing systems, pp. 1715-1718. ACM, Montreal, Canada, 2006.
420
+
421
+ [38] T. Nelligan, S. Polsley, J. Ray, M. Helms, J. Linsey, and T. Hammond. Mechanix: a sketch-based educational interface. In Proceedings of the 20th International Conference on Intelligent User Interfaces Companion, pp. 53-56. ACM, Atlanta, Georgia, 2015.
422
+
423
+ [39] A. Noorafshan, L. Hoseini, M. Amini, M.-R. Dehghani, J. Kojuri, and L. Bazrafkan. Simultaneous anatomical sketching as learning by doing method of teaching human anatomy. Journal of education and health promotion, 3, 2014.
424
+
425
+ [40] M. A. O'Neill and M. Denos. Automating biostratigraphy in oil and gas exploration: Introducing geodaisy. Journal of Petroleum Science and Engineering, 149:851-859, 2017.
428
+
429
+ [41] M. Pache, A. Römer, U. Lindemann, and W. Hacker. Sketching behaviour and creativity in conceptual engineering design. In Proceedings of the International Conference on Engineering Design (ICED'01), pp. 243-252. Springer, Berlin, Germany, 2001.
430
+
431
+ [42] B. Paulson and T. Hammond. Paleosketch: accurate primitive sketch recognition and beautification. In Proceedings of the 13th International Conference on Intelligent User Interfaces, pp. 1-10. ACM, Gran Canaria, Spain, 2008.
432
+
433
+ [43] R. K. Poirier, T. M. Cronin, W. M. Briggs Jr, and R. Lockwood. Central arctic paleoceanography for the last 50 kyr based on ostracode faunal assemblages. Marine Micropaleontology, 88:65-76, 2012.
434
+
435
+ [44] Foraminifera.eu Project. Foraminifera Gallery. http://www.foraminifera.eu/. Accessed: 2021-12-02.
436
+
437
+ [45] K. Quillin and S. Thomas. Drawing-to-learn: a framework for using drawings to promote model-based reasoning in biology. CBE-Life Sciences Education, 14(1):es2, 2015.
438
+
439
+ [46] K. Ranaweera, A. P. Harrison, S. Bains, and D. Joseph. Feasibility of computer-aided identification of foraminiferal tests. Marine Micropaleontology, 72(1-2):66-75, 2009.
440
+
441
+ [47] D. Rubine. Specifying gestures by example. In Proceedings of the 18th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '91, pp. 329-337. ACM, New York, NY, USA, 1991.
442
+
443
+ [48] B. J. Tewksbury, C. A. Manduca, D. W. Mogk, R. H. Macdonald, and M. Bickford. Geoscience education for the anthropocene. Geological Society of America Special Papers, 501:189-201, 2013.
444
+
445
+ [49] S. Valentine, F. Vides, G. Lucchese, D. Turner, H.-H. A. Kim, W. Li, J. Linsey, and T. Hammond. Mechanix: A sketch-based tutoring system for statics courses. In Proceedings of the Twenty-Fourth Innovative Applications of Artificial Intelligence Conference (IAAI), pp. 2253-2260. AAAI, Toronto, Canada, July 22-26, 2012.
446
+
447
+ [50] R.-D. Vatavu, L. Anthony, and J. O. Wobbrock. Gestures as point clouds: a $P recognizer for user interface prototypes. In Proceedings of the 14th ACM international conference on Multimodal interaction, pp. 273-280, 2012.
448
+
449
+ [51] R.-D. Vatavu, L. Anthony, and J. O. Wobbrock. $Q: A super-quick, articulation-invariant stroke-gesture recognizer for low-resource devices. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services, pp. 1-12, 2018.
450
+
451
+ [52] I. M. Verstijnen, C. van Leeuwen, G. Goldschmidt, R. Hamel, and J. Hennessey. Sketching and creative discovery. Design studies, 19(4):519-546, 1998.
452
+
453
+ [53] C. Widjaja and S. S. Sumali. Short-term memory comparison of students of faculty of medicine pelita harapan university batch 2015 between the handwriting and typing method. Medicinus, 7(4):108-111, 2020.
454
+
455
+ [54] B. Williford. Sketchtivity: Improving creativity by learning sketching with an intelligent tutoring system. In Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition, pp. 477-483. ACM, Singapore, 2017.
456
+
457
+ [55] J. O. Wobbrock, A. D. Wilson, and Y. Li. Gestures without libraries, toolkits or training: a $1 recognizer for user interface prototypes. In Proceedings of the 20th annual ACM symposium on User interface software and technology, pp. 159-168. ACM, 2007.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/oyJfW3GmBGX/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,382 @@
1
+ § FOSSILSKETCH: A NOVEL INTERACTIVE WEB INTERFACE FOR TEACHING UNIVERSITY-LEVEL MICROPALEONTOLOGY
2
+
3
+ Category: Research
4
+
5
+ § ABSTRACT
6
+
7
+ Micropaleontology studies fossils that are very small and require the use of a microscope. Micropaleontologists use microfossils to analyze data critical for estimating future sea level rise, understanding the causes of past climate upheavals, and finding economically important resources like oil and gas. This subject is taught as part of some geology classes at the undergraduate and graduate university level, but training in this field is time-consuming and little classroom time is typically devoted to the topic. Although demand for geoscientists is projected to grow, fewer students are exposed to and trained in micropaleontology. The geosciences currently need micropaleontologists, as the population of experts is declining. While interactive math and engineering web interfaces have recently become more common, a similar system that provides students with a repository of knowledge and interactive exercises in the micropaleontology space was lacking. To address this problem of training students in micropaleontology, we developed FossilSketch: a web-based interactive learning tool that teaches, trains, and assesses students in the basics of micropaleontology. The interface we developed contains various interactions, and a new template-based system for checking the accuracy of drawn shapes helps students learn the characteristic features of microfossils. Our evaluation included deploying the system to 32 students in an undergraduate geology class at our university. The accompanying user study results indicate that FossilSketch is an engaging educational tool that can be deployed alongside the classroom for in-class and at-home learning. Student feedback, together with our recorded submission data for the various exercises, suggests that FossilSketch is an effective online learning tool that serves as a helpful reference for class activities, allows for remote learning, presents helpful and engaging interactive games, and encourages repeat submissions.
8
+
9
+ Index Terms: Human-centered computing-Interactive systems and tools; Information systems-Web applications; Applied computing-Interactive learning environments; Applied computing-Earth and atmospheric sciences
10
+
11
+ § 1 INTRODUCTION
12
+
13
+ The fossil remains of micro-organisms preserved in modern and ancient sediments play key roles in determining the ages of geologic records, reconstructing ancient environments, and monitoring modern ecosystem health. However, training undergraduates to identify these microfossils is time-intensive, and most students are not exposed to this tool in their courses. Core geoscience courses that reach all majors rarely include micropaleontology, the study of microfossils, because contact hours are not sufficient to train students at the necessary level of detail. Student training in micropaleontology has declined over the last several decades as the field of geology has broadened and micropaleontology has been replaced by other methods [6,48]. Thus, although the geosciences currently need micropaleontologists because the population of experts is aging, few students are trained to use this tool [40].
14
+
15
+ To enable and enhance training of undergraduates in the basics of micropaleontology in remote, hybrid, and in-class conditions, we developed FossilSketch, an interactive digital tool that introduces students to micropaleontology through educational videos, sketch-based exercises, and mini-games focused on microfossils and their applications in the geosciences. FossilSketch, depicted in Figure 1, makes use of a modified version of the existing Hausdorff template matching technique to support automated grading of activities involving sketching microfossil outlines. This lightweight recognition technique calculates the cumulative distance between resampled points from the input sketch and a single instructor-provided template. The recognition system effectively acts as a shape accuracy algorithm, returning the cumulative distance as an index of dissimilarity between a student-provided sketch and the instructor-provided template. This forms the basis of the recognition technique used in the identification exercises designed for two microfossil groups, Foraminifera and Ostracoda. The paper also outlines the other interactive games and assessments that underlie the FossilSketch system.
16
+
17
+ <graphics>
18
+
19
+ Figure 1: A participant using the FossilSketch educational web app.
20
+
21
+ § 2 BACKGROUND INFORMATION
22
+
23
+ § 2.1 MICROPALEONTOLOGY
24
+
25
+ Micropaleontology is a critical tool for determining the ages of sedimentary rocks for both industrial (e.g., oil exploration) and scientific applications [28]. Microfossil species are also sensitive to specific environmental parameters and are often used to reconstruct past changes in ocean temperature, coastal sea-level, and seafloor oxygenation [36]. Further, microfossils are used in modern, real-time, environmental monitoring because they respond quickly to environmental change [12]. Despite their increasing usefulness, training students in micropaleontology has declined.
26
+
27
+ Foraminifera and Ostracoda are two of the most commonly used microfossils in industrial, environmental, and scientific applications; they are also some of the larger microfossils, which allows students to view them with standard stereoscopes. Foraminifera are amoeboid protists with shells made of calcium carbonate or agglutinated sediment grains and are often abundant in marine environments [6]. Ostracoda are micro-crustaceans with a bivalved calcareous carapace that are found in all aquatic environments, from freshwater lakes to the deep sea [6]. The morphology of species in both groups is closely related to the environments in which they live [22,29,43], and these two groups are often used in species-specific geochemical studies [25]; thus, accurate identification is important for using this tool. The FossilSketch application focuses on these two groups of microfossils.
28
+
29
+ Accurate identification of species is the crucial first step in all applications of microfossils. Sketching is critical in understanding the morphological differences because it helps students internalize the characteristic features and better understand them by connecting their sketch to the specimen. Researchers find that sketching benefits learning in a wide range of disciplines, from human anatomy and biology to engineering, geography, and math [9,20,21,39,45]. However, one of the challenges in teaching micropaleontology is the amount of individual feedback students need on their sketches to ensure they are learning the correct features for identification.
30
+
31
+ § 2.2 RELATED WORKS
32
+
33
+ § 2.2.1 GEOSCIENCE EDUCATIONAL TOOLS
34
+
35
+ The geosciences have rapidly adopted online and remote-based educational tools over the last five years. The popularity of online learning platforms has led to the development of online resources, new pedagogical practices, and course curricula (e.g., [5,10,11]). Successful examples that integrate technology into geoscience classes include high-resolution digital imaging for mapping and documenting geological outcrops, 3D virtual simulations, and digitization of fossil collections [8,15,27]. For laboratory-based courses, scholarship has primarily focused on accessibility for students with visual disabilities at the introductory level [13], whereas field-based course literature on accessibility has mostly focused on inclusive practices to better serve students with mobility disabilities [13].
36
+
37
+ Some of the successful software used in geoscience education includes the following. Researchers at Northwestern University and IBM pioneered the use of sketching software in the geosciences with the CogSketch application and a series of 26 introductory geoscience worksheets about key geoscience concepts [20]. CogSketch aids students in solving discipline-specific spatial problems while providing instructors with insights into student thinking and learning. Real-time feedback identifies erroneous sketch features and helps students reconsider and correct them. Milliken et al. developed tutorials to study sandstone petrology at the University of Texas at Austin using a "virtual microscope" [33]. Students are able to practice identification of a wide array of sandstone components outside of the laboratory and independent of the instructor. They found student attainment of petrography skills improved with tutorial use.
38
+
39
+ As for micropaleontology, researchers note a lack of human experts and a decline in micropaleontology training [14,26,34]; however, most software development has been aimed at automated identification of microfossils. The earliest attempts lacked accuracy and were not fully automated [7,46]. More recent approaches to automated micropaleontology identification software usually focus on machine learning and use 3D models for planktic and benthic foraminifera identification [14,26,34]. Their results indicate that current image classification techniques perform identifications comparably to human experts [34].
40
+
41
+ Several large microfossil databases have been built that include taxonomic hierarchy data, images, ecological characteristics, geographical distribution, and type species information (e.g., for Ostracoda: the Modern Podocopid Database [17] and the World Ostracoda Database (WoRMS) [2]; for Foraminifera: the World Foraminifera Database (WoRMS) [1] and the Foraminifera Gallery (Foraminifera.eu) [44]). However, these online resources are designed for advanced users and are difficult to use for entry-level specialists and students without instruction on microfossil morphology.
42
+
43
+ To summarize, there is clearly a need and growing interest in developing automated AI methods for microfossil identification due to the decline in the number of human experts. We believe that developing educational software on Foraminifera and Ostracoda would be a more efficient approach to solving the problem of the lack of human experts. Thus, designing novel, universally accessible, and academically rigorous educational tools is a highly relevant task for undergraduate geoscience education.
44
+
45
+ <graphics>
46
+
47
+ Figure 2: After students log in, they are shown the landing page. Modules are divided into two columns, with required sections on the left and optional, extra-credit sections on the right.
48
+
49
+ § 2.2.2 DIGITAL SKETCH RECOGNITION IN THE CLASSROOM
50
+
51
+ Sketching activities in the classroom have pedagogically been linked to enhanced student creativity and learning [35,37,41,52,53]. Studies have confirmed that information retention and learning outcomes are significantly improved when engaging in drawing and writing activities vs. using a keyboard as the primary input modality [35]. Sketch-based learning tools have been linked to higher retention of information and improved skill compared against students who do not learn with sketch-based activities [23,54].
52
+
53
+ Early gesture recognition systems developed by Rubine [47] have led to improved recognition systems, including template-matching algorithms from the "Dollar" family of recognizers [3,4,50,51,55], which produced lightweight recognition systems easily added to existing software. The "Dollar" recognizers perform classification tasks by using different methods of calculating the distance of user-generated input from several samples of trained data. Although these recognizers are used for classification rather than grading sketch accuracy, we use this work as a basis for our recognition system. Both feature-based classification techniques and template matching techniques were later expanded into more robust systems for scaffolded recognition via systems like PaleoSketch [42] and LADDER [24], the second of which is notable for its integration of domain-specific shapes to better describe relationships between sketch properties to assist in recognition. More recent works like nuSketch [18] and CogSketch [19] integrate sketch recognition algorithms into educational tools to assist the learning experience with measurable success.
54
+
55
+ Mechanix [38,49], Newton's Pen [32] and Newton's Pen II [31], and PhysicsBook [16] are systems specifically written to leverage the educational advantage of drawing and sketching in the core interactions of their tools. Indeed, these systems serve as the primary conceptual basis from which FossilSketch is designed. We aimed to adapt the educational techniques presented by these tools to the domain of micropaleontology in the classroom. This led to a variety of changes and design considerations taken in the teaching approach outlined in the next section.
56
+
57
+ <graphics>
58
+
59
+ Figure 3: Overview of the activities in FossilSketch. a) is an example of an Ostracoda Orientation Game; b), d) and e) are examples of Foraminifera Matching Games; c) is a cropped screenshot of one of the modules from the FossilSketch landing page. Red arrows indicate which sub-figure belongs to which game; the arrows are not part of the FossilSketch interface and are for illustrative purposes only.
60
+
61
+ § 3 INTERFACE DESIGN
62
+
63
+ § 3.1 DESIGN CONSIDERATIONS
64
+
65
+ FossilSketch is a web-based educational tool for teaching students techniques for identifying microfossils. Educational materials for FossilSketch were developed to supplement various geoscience courses in the College of Geosciences at [author institution redacted for review]. Traditionally, undergraduate students learn about micropaleontology through lectures, diagrams, specimens viewed through a stereoscope, and hand-sized models in upper-level courses for geology majors. To allow for comparison between traditional and FossilSketch-based classes, we developed analogous educational materials for both groups. FossilSketch educational materials include the following: 1) educational videos; 2) instructional mini-games; 3) practice exercises; and 4) assessments. All four types of activities consist of content specifically created for FossilSketch and tailored to support the educational exercises in traditional and FossilSketch-based courses.
66
+
67
+ Exercises were developed based on the course learning objectives, the microfossil collections available, and the expertise of [co-author names redacted for review]. The level of difficulty and number of activities varied depending on whether the course is lower or upper division and whether the course primarily serves geoscience or non-geoscience majors. The landing page for each course also varied depending on the teaching goals and the activities assigned to students.
68
+
69
+ In Fall 2021, FossilSketch was deployed in Geol 208 ("Life on a Dynamic Planet"), a lower division undergraduate course where most students are not geology majors. Students were given access to FossilSketch 5 days before the in-person laboratory session during which one hour of laboratory time was devoted to FossilSketch activities. Students were required to complete activities for Foraminifera and could complete the Ostracoda activities included in a separate column of modules for extra credit.
70
+
71
+ § 3.2 INTERFACE DESCRIPTION
72
+
73
+ § 3.2.1 LANDING PAGE
74
+
75
+ The FossilSketch website initially prompts new and returning users to log in with their credentials. To ensure data integrity, new user registration is limited via a registration code assigned to each group of students who are part of the study, with each group being assigned a different code. Test accounts and external evaluators were assigned special login credentials and their activities were not recorded as part of the data collection.
76
+
77
+ After the participants log in, modules are listed in the order in which they are meant to be completed. Modules were added, modified, or removed depending on the class or activity in which FossilSketch was deployed. The landing page used in our current study is shown in Figure 2.
78
+
79
+ The self-contained nature of the exercises and the flexibility of the landing page interface offer the versatility of adding new exercises and rearranging the website experience depending on the course learning objectives.
80
+
81
+ § 3.2.2 EDUCATIONAL VIDEOS
82
+
83
+ Educational videos were created specifically for FossilSketch and were written to provide introductory information that helps contextualize concepts covered in the rest of FossilSketch's activity types. When users click on these modules, an overlay with an embedded YouTube link is displayed. Students are free to change playback with the standard embedded YouTube video controls, and the overlay can be dismissed at any time by clicking outside of the video area. No progress data is recorded for this type of activity.
84
+
85
+ FossilSketch is intended to augment instructor lectures, meaning the videos are not intended to serve as a replacement for lecture material, as is usually the case with typical instructional videos in an online learning interface. The FossilSketch system uses instructional videos to provide the information students need to engage with the rest of the modules if they have not yet received instructor lectures, while at the same time emphasizing the concepts most directly relevant to the activities if they have attended in-depth lectures in the classroom.
86
+
87
+ § 3.2.3 INSTRUCTIONAL MINI-GAMES
88
+
89
+ FossilSketch integrates various kinds of interactive instructional tools. In order to improve student comprehension of microfossil identification, we broke identification tasks into small minigame tasks. Students were able to repeat tasks for mastery. Each mini-game consists of one or more types of interactions intended to highlight the visual-morphology aspect of learning about microfossil identification.
90
+
91
+ Matching Games require the participants to match morphological features, such as the outline shape for Ostracoda, or the morphotype and type of chamber arrangement for Foraminifera. At the beginning of the game the students are presented with a reference image that lists each morphotype along with a sketched example, and students are able to return to this reference image when needed by clicking on the zoomed-out image in the bottom right corner of the screen. When the game starts, the screen displays a small number of draggable "discs" or rectangular "cards" with actual microfossil photomicrographs that the user can move into slots with sketched categories for each feature used in the game. At the moment, four different mini-games use this kind of interaction: Ostracoda lateral outline identification, and Foraminifera aperture, chamber arrangement, and morphotype identification.
92
+
93
+ <graphics>
94
+
95
+ Figure 4: Menu of the morphotype ID exercises. Students pick from any of the unidentified morphotypes marked with a "?", and afterwards are shown their performance on a 3-star rating system.
96
+
97
+ All matching games include three rounds, with each level contributing to a final star score. The Foraminifera aperture and chamber arrangement mini-games randomly pull images of Foraminifera from the database for matching to the corresponding aperture and chamber arrangement types, with each round of the game having four cards to match. In the morphotype mini-game, the number of draggable items and slots increases in later rounds, from 4 in the first round to 8 in the third round, to increase difficulty. Students receive a star rating from zero to three based on how many rounds they answered correctly on the first attempt, as sketched below.
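+ As a concrete illustration, here is a minimal TypeScript sketch of this round-based scoring; the type and function names are our own illustrative choices, not FossilSketch's actual code.
+
+ // One entry per round of a mini-game: true if the round was
+ // answered correctly on the student's first attempt.
+ type RoundResult = { correctOnFirstTry: boolean };
+
+ // The star rating is the number of rounds answered correctly on the
+ // first attempt, so a three-round game scores from zero to three stars.
+ function starRating(rounds: RoundResult[]): number {
+   return rounds.filter((r) => r.correctOnFirstTry).length;
+ }
+
+ // Example: correct, incorrect, correct across three rounds yields 2 stars.
+ console.log(starRating([
+   { correctOnFirstTry: true },
+   { correctOnFirstTry: false },
+   { correctOnFirstTry: true },
+ ]));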
98
+
99
+ Orientation Games integrate a rotation interaction to help students gain an understanding of how to correctly orient the ostracod valve for identification. An ostracod valve has four sides: the dorsal, ventral, posterior, and anterior margins. This game starts with a general description of each of these margins to help students gain an intuition for how to identify each side of an ostracod. The user is tasked with rotating an ostracod into the position with the dorsal side up and all of its sides correctly labeled. To simplify the interaction, students rotate in one direction, 90 degrees at a time, by clicking or tapping once on the ostracod displayed in the center of the screen. When the student believes that the ostracod is oriented correctly, they submit their answer by selecting the "Finished" button at the bottom center of the screen.
100
+
101
+ Like the matching games, orientation games are divided into three rounds. In this case, each round consists of one ostracod valve that needs to be rotated into the correct orientation. Answers are marked "correct" if they are rotated correctly the first time the "Finished" button is clicked. As in the matching games, students will need to correct their answer if it is incorrect to move on to the next round, but the answer will still be marked incorrect. Students are encouraged to use the knowledge gained by correcting their wrong answer to try the exercise again to receive full credit for their answers and a three-star rating.
102
+
103
+ § 3.2.4 IDENTIFICATION EXERCISES
104
+
105
+ In micropaleontology, microfossils are picked from sediment samples, and the variety of different species obtained represents an assemblage characteristic of the sample that may indicate its environmental setting or geologic age. A micropaleontologist would identify the species of microfossils in this assemblage based on their morphology, or their characteristic features. One of the goals of this interface is to demonstrate to students the various applications of microfossils in the geosciences. Primarily, FossilSketch offers a scaffolded learning experience to guide students through the steps needed to identify microfossils and their morphological characteristics.
106
+
107
+ For the undergraduate course Geol 208, students identified foraminiferal morphotypes and Ostracoda genera (as extra credit). Students are first presented with the menu depicted in Figure 4. Once chosen, the Foraminifera morphotype identification steps, shown in Figure 5, are the following: 1) sketch the outline of the foraminifer image on the left; 2) sketch the outline of the foraminifer image in the center; 3) choose the overall shape of the organism from a menu; 4) choose the type of chamber arrangement from the menu; 5) find and click on the aperture location in the center image; 6) identify a morphotype based on the selected features. The Ostracoda genus identification exercise steps are shown in Figure 6 and include: 1) sketch the maximum length of the valve; 2) sketch the maximum height of the valve; 3) identify right vs. left valve; 4) sketch the outline of the ostracod valve; 5) choose the type of outline from the menu; 6) measure the approximate size of the valve and choose the size range from the menu; 7) choose the types of ornamentation; 8) identify an ostracod genus based on the selected features.
108
+
109
+ Within each exercise the types of interactions are described below:
110
+
111
+ Sketching interactions (steps 1-2 for Foraminifera, and steps 1-2 and 4 for Ostracoda) help students retain and understand the various shapes and outlines they observe in different microfossils. It is the primary method of interaction after which the project is named. Sketching interactions integrate functionality from a library called paper.js to deliver flexible drawing interactions. Although the system is intended to be used with styli and touch to most naturally resemble a sketching activity, it is also possible to draw with a mouse or trackpad. Drawing interactions are usually integrated as the first steps of both kinds of identification exercises, as the overall shape of the sample is critical in identifying the microfossil.
112
+
113
+ <graphics>
114
+
115
+ Figure 5: Step-by-step morphotype ID exercise, starting at the top-left screen and ending at the bottom-right. It includes the following steps: 1) sketch the left view of the organism, 2) sketch the middle view, 3) pick the overall shape, 4) pick the chamber arrangement, 5) click on the area of the aperture location, and 6) draw your conclusion: identify the Foraminifera morphotype.
116
+
117
+ The FossilSketch system checks for correctness using a template-matching recognition heuristic. The template recognizer coded specifically for FossilSketch uses the Hausdorff-distance template matching technique as a baseline, implemented to act as a shape accuracy algorithm. We first resample both the template and the input sketch to a lower sampling rate with roughly equidistant points. The formula for calculating the interspace distance is:
118
+
119
+ $$
+ S = \frac{\sqrt{(x_m - x_n)^2 + (y_m - y_n)^2}}{c} \tag{1}
+ $$
122
+
123
+ where $c = 256$ is a constant empirically derived to adjust the distance between the points for optimal calculation of the distance metric. With the distance calculated, the sketch is resampled using the technique outlined in Algorithm 1.
124
+
125
+ Algorithm 1 Resampling Technique
+
+ Require: Point list $path$, distance $S$
+ Ensure: Re-sampled point list $out$
+   $D \leftarrow 0$
+   for $i$ in $path$ do
+     $BetweenDist \leftarrow \sqrt{(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2}$
+     $D \leftarrow D + BetweenDist$
+     if $D > S$ then
+       $D \leftarrow BetweenDist$
+       $out \leftarrow$ new point $(x_i, y_i)$
+     end if
+   end for
148
+
149
+ This iterates through each point in the provided path and gradually adds the distance between the current point and the next until the predetermined distance $S$ is reached, which is where the point will be placed. The algorithm repeats this process for every point in the input path.
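+ To make the resampling step concrete, the following TypeScript sketch transcribes Equation 1 and Algorithm 1. The Point type and function names are our own illustrative choices, and the exact choice of the two reference points m and n in Equation 1 is left to the caller, since the text leaves it implicit.
+
+ type Point = { x: number; y: number };
+
+ const dist = (a: Point, b: Point): number =>
+   Math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2);
+
+ // Equation 1: interspace distance S, the Euclidean distance between two
+ // reference points m and n divided by the empirical constant c = 256.
+ function interspaceDistance(m: Point, n: Point, c = 256): number {
+   return dist(m, n) / c;
+ }
+
+ // Algorithm 1: walk along the stroke, accumulating inter-point distance,
+ // and emit a point whenever the accumulated distance exceeds S. This
+ // yields a roughly equidistant resampling of the input sketch.
+ function resample(path: Point[], s: number): Point[] {
+   const out: Point[] = [];
+   let d = 0;
+   for (let i = 0; i < path.length - 1; i++) {
+     const betweenDist = dist(path[i], path[i + 1]);
+     d += betweenDist;
+     if (d > s) {
+       d = betweenDist;          // reset the accumulator, as in Algorithm 1
+       out.push({ ...path[i] }); // place a resampled point here
+     }
+   }
+   return out;
+ }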
150
+
151
+ We then iterate through each point in the input sketch, compare it with the corresponding point of the template sketch, and calculate the Euclidean distance between the two. The total distance is calculated across all the compared points, and the cumulative sum is the overall "distance" between a template and the student input (see Figure 7). If the average deviation of the points is greater than the pixel width of the canvas divided by a constant, we determine that the input sketch is too different from the template sketch. This constant was empirically determined after internal testing to match the desired student experience; students are meant to provide a relatively accurate, but not perfect, recreation of the template. This algorithm is outlined in Algorithm 2.
152
+
153
+ Algorithm 2 Compare Sketches
+
+ Require: Student $Spath$, template $Tpath$
+ Ensure: Boolean $result$
+   $totalDeviation \leftarrow 0$
+   for $i$ in $Spath$ do
+     $closestDist \leftarrow \infty$
+     $closestIndex \leftarrow 0$
+     for $j$ in $Tpath$ do
+       $tempDist \leftarrow$ distance between $Spath_i$ and $Tpath_j$
+       if $tempDist < closestDist$ then
+         $closestDist \leftarrow tempDist$
+         $closestIndex \leftarrow j$
+       end if
+     end for
+     $totalDeviation \leftarrow totalDeviation + closestDist$
+   end for
+   $avgDeviation \leftarrow totalDeviation / |Spath|$
+   $cwidth \leftarrow$ pixel width of canvas
+   if $avgDeviation > cwidth / 70$ then
+     $result \leftarrow$ True
+   else
+     $result \leftarrow$ False
+   end if
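+ A TypeScript transcription of Algorithm 2 follows, under the assumption, consistent with the surrounding text, that each student point contributes its nearest-template distance to the total deviation; the names are illustrative, not the FossilSketch source.
+
+ type Point = { x: number; y: number };
+
+ const dist = (a: Point, b: Point): number =>
+   Math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2);
+
+ // Returns true when the student sketch is "too different": the average
+ // nearest-point deviation from the template exceeds canvasWidth / 70.
+ function tooDifferent(spath: Point[], tpath: Point[], canvasWidth: number): boolean {
+   let totalDeviation = 0;
+   for (const sp of spath) {
+     let closestDist = Infinity;
+     for (const tp of tpath) {
+       const d = dist(sp, tp);
+       if (d < closestDist) closestDist = d;
+     }
+     totalDeviation += closestDist; // accumulate per-point nearest distances
+   }
+   const avgDeviation = totalDeviation / spath.length;
+   return avgDeviation > canvasWidth / 70;
+ }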
196
+
197
+ The template sketches are provided by [co-author names redacted for review] and coded directly into each foraminifer or ostracod image. For every foraminifer in FossilSketch, the database contains template sketch data for the outline of its left view, its center view, and its largest chamber, as well as coordinates for the location of the opening, the aperture. The last item is used in the interaction labeled "Pointing interactions" in this section. For every ostracod in the database, there is template sketch data for the outline, maximum length, and maximum height.
198
+
199
+ <graphics>
200
+
201
+ Figure 6: Step-by-step ostracod ID exercise, starting at the top-left screen and ending at the bottom-right. It includes the following steps: 1) draw the max length of the ostracod, 2) draw the max height, 3) identify if it is a left or right valve, 4) sketch the outline of the ostracod, 5) choose the overall shape, 6) determine the length, 7) choose whether the valve has ornamentation and which ornamentation features are present, 8) draw your conclusion: identify the Ostracoda genus.
202
+
203
+ Identification interactions (steps 3-5 for Foraminifera, and steps 3, 5-6 for Ostracoda) are presented to students as a horizontal multiple-choice menu along the bottom of the screen, and the student is asked to identify one of several characteristic features of the microfossils. For instance, the student might be asked "what is the overall shape of the organism?" and the possible answers might be "vase-like", "convex", "low-conical", "spherical" and "arch" among others. With each option, a sample sketched outline of each shape is shown, but it is important to note these are sketched examples and not photorealistic depictions of the choices. The student is tasked with remembering the particular physical properties of each characteristic feature rather than simply matching the pictures with the closest choice. Of these, one is the correct answer. In this part of the exercise, the student does not receive immediate feedback as to the correctness of this particular question, since all of these answers are summarized for the student to use to identify the foraminifer's morphotype or ostracod's genus.
204
+
205
+ Pointing interactions (step 5 for Foraminifera) are simplified forms of "sketching interactions" that require students to click once in a general area of interest, and FossilSketch checks if the identified location is correct. Specifically, this interaction is used to identify the general location of the aperture of a given foraminifer. The student is asked to click once in the region where they believe the aperture is. Each foraminifer in the FossilSketch database contains data on a rectangular region that points to the general area of its aperture. When the student clicks "Submit" after identifying the aperture area, FossilSketch checks to see if the location of the click is within the provided rectangular region. If it is, it is marked as correct. The location of the aperture is only used for identifying a foraminifer's morphotype.
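+ The hit test itself is a simple point-in-rectangle check. Here is a minimal TypeScript sketch, assuming the aperture region is stored as an axis-aligned rectangle; the field names are hypothetical.
+
+ type Region = { x: number; y: number; width: number; height: number };
+
+ // True if the student's click falls inside the stored aperture region.
+ function clickInRegion(clickX: number, clickY: number, r: Region): boolean {
+   return clickX >= r.x && clickX <= r.x + r.width &&
+          clickY >= r.y && clickY <= r.y + r.height;
+ }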
206
+
207
+ <graphics>
208
+
209
+ Figure 7: To grade answers, FossilSketch resamples and overlays both the student input and the instructor-provided sketch, and a total distance metric is calculated by summing the Euclidean distance between sampled points.
210
+
211
+ The summary screen (step 6 for Foraminifera, and step 8 for Ostracoda) appears as the last step of each identification exercise, asking the student to draw on their observations and make the final selection of the foraminiferal morphotype or Ostracoda genus. Each foraminiferal morphotype or Ostracoda genus has a list of characteristic features, and based on the student's answers, each feature correctly marked during the identification steps receives a blue check-mark. Choices of foraminiferal morphotypes and Ostracoda genera are ranked by the highest number of properties matching the student's answers. If the student's answers are correct, the choice is easy, since it has the most check-marks and is the first item listed. Additionally, a picture of each choice is included, letting students double-check whether their best-ranked choice is the most accurate. This system allows students to develop self-assessment skills to see if their choices match up with any given morphotype or genus. At any time students are able to revisit any of the previous steps, so this final choice is a good motivation to do so if they notice their prior choices did not yield a definitive conclusion. It also allows students to see different properties that might be common between some morphotypes or genera, but each foraminifer and ostracod has only one correct final answer.
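+ One way to realize this ranking, sketched in TypeScript with hypothetical names: count how many of each candidate's characteristic features appear among the student's answers, then sort the candidates by that count in descending order.
+
+ type Candidate = { name: string; features: string[] };
+
+ // Rank candidate morphotypes or genera by the number of characteristic
+ // features matched by the student's answers from the previous steps.
+ function rankCandidates(candidates: Candidate[], answers: string[]): Candidate[] {
+   const matches = (c: Candidate): number =>
+     c.features.filter((f) => answers.includes(f)).length;
+   return [...candidates].sort((a, b) => matches(b) - matches(a));
+ }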
212
+
213
+ § 3.2.5 ASSESSMENT EXERCISE
214
+
215
+ Once the students gain mastery of microfossil identification through practicing mini-games and microfossil identification, they proceed to the final type of exercise and assessment, where they can apply their knowledge to reconstruct environments from an assemblage of different microfossils. In this exercise, the students view microfossil assemblages with approximately 20 foraminifer or ostracod individuals and identify the foraminiferal morphotypes or Ostracoda genera present. These assemblages imitate an actual microfossil "slide" as seen under a microscope, containing an assemblage of Foraminifera or Ostracoda. Students are asked to identify how many of each foraminiferal morphotype or ostracod genus are present in the slide. Before students start working on the exercise, they can view a screen with a summary of the information on foraminiferal morphotypes or ostracod genera and how they can be used to interpret environmental properties, such as the oxygenation or salinity of the water. This exercise includes 3 rounds and a summary. The student then needs to identify the different genera or morphotypes and select from the menu on the right side of the screen the number of each morphotype. It is intended that students will draw on their knowledge from the previous exercises to quickly identify the morphotypes or genera they see in these assemblages. For the ostracod assemblages, the menu to select from includes both genera that are and genera that are not present in the assemblage. For the foraminiferal morphotypes, the assemblage includes two morphotypes to select from and an "Other" category. To answer correctly, the student must provide a correct number for all categories, that is, for the two morphotypes plus "Other" for Foraminifera, or for the genera for Ostracoda in an assemblage.
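+ The correctness rule for an assemblage round can be sketched in a few lines of TypeScript with hypothetical names: an answer counts as correct only if every category matches the answer key exactly.
+
+ // Keys are category names (each morphotype or genus, plus "Other");
+ // values are the counted number of individuals in the assemblage.
+ type Counts = Record<string, number>;
+
+ // Correct only when the count for every category in the key matches.
+ function assemblageCorrect(answer: Counts, key: Counts): boolean {
+   return Object.keys(key).every((cat) => answer[cat] === key[cat]);
+ }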
216
+
217
+ Both assemblage exercises conclude with a summary page where the student is asked to make an overall conclusion about the environment based on the assemblages. For instance, the Foraminifera morphotype assemblage exercise uses assemblages to determine bottom water oxygenation. It has been shown that environments where cylindrical- and flat-tapered morphotypes are found in abundance usually have low oxygenation [30]. The students are asked to rank each assemblage by relative oxygenation level. They should be able to do so when they consider the relative abundance of cylindrical-tapered and flat-tapered morphotypes they found in each of the three assemblages. Similarly, for Ostracoda genera, students count the number of individuals of each genus and determine the bottom water salinity indicated by each of the assemblages. These exercises assess the microfossil identification skills learned and honed across all exercises of the FossilSketch system, and show how microfossil research is applied.
218
+
219
+ § 4 EVALUATION
220
+
221
+ FossilSketch was deployed as part of a laboratory exercise in a class titled "Life on a Dynamic Planet" in Fall 2021 at the investigators' university. [co-author name redacted for review] is the instructor for this class, and she introduced the students to the FossilSketch system. Students were instructed to watch the educational videos before coming to class. During the lab, they went through the Foraminifera mini-games, morphotype identification, and assessment exercise modules. The Ostracoda modules were offered as extra credit.
222
+
223
224
+
225
+ Figure 8: Distribution of student ages among those who consented to have their age information included in the study.
226
+
227
+ § 4.1 DESIGN STUDY
228
+
229
+ Over the course of two weeks, a total of 32 students were asked to complete the assignment. All students were instructed to use the FossilSketch system as part of their assignment, but consenting to provide us data (surveys, focus groups, and sketch data) was fully optional. A total of 22 students consented to provide data on their usage of FossilSketch for analysis.
230
+
231
+ § 4.1.1 STUDY POPULATION AND INFORMED CONSENT
232
+
233
+ This study conformed to the university's Institutional Review Board protocol, IRB2019-1218M (expiration date 02/09/2023), ensuring that data is published only on users who gave informed consent. Consent forms were distributed on paper during the introductory portion of the laboratory session. Of the 22 students who gave consent to have their demographic information published, 13 provided data on their race/ethnicity: 8 were White, 3 were Hispanic, 1 was Black, and 1 was Asian. Student ages ranged from 18 to 24, with the age distribution shown in Figure 8.
234
+
235
+ § 4.1.2 DATA COLLECTION PROTOCOL
236
+
237
+ The first module in FossilSketch has students complete a pre-study questionnaire that requests basic demographic information; prior experience with micropaleontology and the topics covered in the FossilSketch interface; interest and self-assessment in micropaleontology skills; and interest in future careers in micropaleontology. Similarly, the final module is a post-study questionnaire that repeats the questions on self-assessment of skill and interest in future careers involving micropaleontology, and requests feedback on the use of FossilSketch. Most questions used a five-point Likert scale, and students could elaborate in free-response fields. At the conclusion of the study, students were asked for feedback on their experience with the FossilSketch UI in informal focus-group interviews with a subset of participants who agreed to take part.
238
+
239
+ FossilSketch tracks student performance by recording a student's "star rating" for each submitted exercise in an off-site grade-book SQL database. As a reminder, the final score of every exercise in FossilSketch is a rating from one to three stars, with one being the most error-prone performance and three being error-free. Students are encouraged to repeat exercises if they did not receive three stars, and the website records every completed attempt in the grade-book database. This information lets us gauge overall student performance on a per-exercise basis, and combining it with the more qualitative responses from focus-group interviews and post-study questionnaires lets us analyze student interest.
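The paper does not publish the grade-book schema; the following is a hypothetical sketch of what a per-attempt star-rating record could look like, using SQLite for illustration:

```python
import sqlite3

# Hypothetical grade-book layout for per-attempt star ratings; the table and
# column names are illustrative, not FossilSketch's actual database schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE submissions (
        student_id TEXT NOT NULL,
        exercise   TEXT NOT NULL,
        stars      INTEGER CHECK (stars BETWEEN 1 AND 3),
        submitted  TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute("INSERT INTO submissions (student_id, exercise, stars) VALUES (?, ?, ?)",
             ("s01", "foraminifera_id", 2))
# Every completed attempt is stored, so per-exercise performance can be
# aggregated later, e.g. best score per student and exercise:
rows = conn.execute("""
    SELECT student_id, exercise, MAX(stars) FROM submissions
    GROUP BY student_id, exercise
""").fetchall()
print(rows)
```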
240
+
241
242
+
243
+ Figure 9: Visualization of the star ratings of submissions across all students.
244
+
245
+ | | Activity | 1 star | 2 stars | 3 stars |
+ | --- | --- | --- | --- | --- |
+ | Foraminifera | Identification | 15 | 25 | 109 |
+ | | Morpho Match | 0 | 6 | 5 |
+ | | Chamber Match | 0 | 7 | 8 |
+ | | Assemblage | 0 | 5 | 6 |
+ | Ostracoda | Identification | 0 | 4 | 59 |
+ | | Orientation | 0 | 0 | 6 |
+ | | Outline | 0 | 6 | 3 |
+ | | Assemblage | 0 | 13 | 0 |
+ | **Totals** | | 15 | 66 | 196 |
+
+ Table 1: Number of student submissions for each FossilSketch activity, by star rating.
281
+
282
+ § 4.2 RESULTS
283
+
284
+ Study data can be summarized as "Quantitative" and "Qualitative", with the former being the recorded performance metrics found in the grade-book SQL database and the latter summarizing student sentiment about the FossilSketch user experience.
285
+
286
+ § 4.2.1 QUANTITATIVE
287
+
288
+ The tracked modules included all exercises and assessments; video-viewing activity was not tracked. However, the FossilSketch layout displays the video modules first, and instructors verbally encouraged students to complete the site's modules in order. The activities tracked in the grade-book SQL database are: the foraminifera chamber matching game, the foraminiferal morphotype matching game, the morphotype identification exercise, the paleoreconstruction-using-morphotypes assessment, the ostracod orientation game, the ostracod outline matching game, the ostracod genera identification exercise, and the ostracod assemblage assessment. Details on the exercises can be found in Sections 3.3 and 3.4. As a reminder, students were only required to complete the foraminifera exercises, with the ostracod exercises offered as optional extra credit.
289
+
290
+ Table 1 summarizes the student submission data from our study, and Figure 9 shows the star ratings of submissions for the Morphotype Identification exercise. As expected, the ostracod exercises received fewer submissions because they were extra credit. Both ID exercises, however, received a much higher volume of submissions because the module requires at least 3 submitted foraminiferal morphotypes (out of a possible 17) and 3 submitted ostracods (out of a possible 10); if a student submitted one of the three but retried for a better score, the retry counted as another submission. Across the 32 students, this works out to an average of about 4.7 ID-exercise submissions per student for foraminiferal morphotypes, and 7 submissions per student for ostracods among the 9 students who chose to complete the extra credit modules.
291
+
292
+ § 4.2.2 QUALITATIVE
293
+
294
+ Survey and lab assignment feedback. The following feedback was requested from students: 1. On a scale of 1 to 5, with 1 being "completely disagree" and 5 being "completely agree", how would you respond to the statement "I enjoyed the micropaleontology activities in this class."? Please provide at least one example to explain your answer.
295
+
296
+ The most common rating the students gave was 3 (n = 11). Most students pointed to software bugs, and this is likely why few rated it 4 or 5. Students' open-ended comments indicated that they "enjoyed the identification aspect of the activities that allowed me to investigate and figure out where a sample fossil was found." and that "it was very buggy and that made it frustrating but the overall system was a good way to learn."
297
+
298
+ 2. Did you work on the micropaleontology activities outside of class (other than class time)? If so, please explain what you did.
299
+
300
+ Approximately 50% of the students completed activities outside class. Students' answers indicated that many of them used FossilSketch to finish the lab assignment at home: "I did not finish in class so I completed the assignment at home." "yes, I watched videos and checked my lab answers." "Yes, I just finished the lab on my own time."
301
+
302
+ 3. How did you feel, typically, while you were working on micropaleontology activities in this class?
303
+
304
+ Eleven students answered: five indicated that the activities were enjoyable, and six felt it was confusing since they did not have prior knowledge.
305
+
306
+ 4. Do you think the micropaleontology activities in this class are and will be useful to you? How so?
307
+
308
+ More than half of the students who provided answers (n = 10) said that the micropaleontology activities in this class are and will be useful for future work and careers. The following quotes were associated with these answers: "yes, I am a geology major so I will likely use this later in school and in my career." "Yes, because I would like to go into paleontology as a career (although not micropaleontology), so it would be good to have prior knowledge in these areas."
309
+
310
+ 5. When did you feel uncertain or unsure about something while working on micropaleontology activities in this class? How did you deal with this uncertainty?
311
+
312
+ The most common answer (n = 7) was that students went back to FossilSketch to look for answers.
313
+
314
+ 6. What was helpful in FossilSketch activities?
315
+
316
+ Students almost unanimously (n = 12) said that the videos and mini-games were very helpful. The following quotes were associated with this question: "The videos and games."; "yes, I watched videos and checked my lab answers."; "It was difficult to remember everything, so I went back in the videos and games."; "Practice with identification"; "Videos helped a lot with the lab questions."; "YT videos + mini games"; "The games were quite difficult, I rewatched videos and replayed the games until I was confident."; "The videos were the most informative"; "The videos and minigames were very helpful in explaining the different morphotypes".
317
+
318
+ | Resource Type | Count |
+ | --- | --- |
+ | Rewatched FossilSketch videos | 16 |
+ | Retried FossilSketch games | 14 |
+ | Retried Morphotype ID games | 12 |
+ | Collaborated with others | 9 |
+ | Used in-person handouts | 5 |
+ | Other | 2 |
+
+ Table 2: A count of the different resources that students used to complete their lab assignment.
343
+
344
+ Additionally, when completing their lab assignment, students were asked what resources they used to answer questions about microfossils. Table 2 shows that students relied on FossilSketch activities to complete the assignment, with the videos, mini-games, and morphotype ID being the most common.
345
+
346
+ Focus group feedback. In the focus group discussions, students provided the following feedback:
347
+
348
+ 1. How was your experience using FossilSketch?
349
+
350
+ "Good. The website was easy to navigate. There were no crushes and bugs. Learning material was easy to access. I like how we could learn with the videos, but I do wish that videos also had slides to go back to individually rather watching the entire video."; "It was good, the videos were good, the games were cool."; "I liked the games and that we could re-try them until we've learned."
351
+
352
+ 2. Anything you disliked? "I wish we had feedback to know what we did wrong instead of just saying "it's wrong"."; "Sometimes it was buggy, zooming in and out didn't work."
353
+
354
+ 3. If you were to add new features to FossilSketch, what would it be?
355
+
356
+ "The games need hints for correct answers."; "Review sheet for the videos would help."; "For the stars, add percent, or partial stars, like 3.5.”
357
+
358
+ 4. If you were to take another class would you want to use FossilSketch, or be in traditional class without software?
359
+
360
+ "Prefer to use software, creative applications make learning easier."; "FossilSketch could be supplementary to traditional classes. The best would be to combine."
361
+
362
+ 5. What was your favorite activity in FossilSketch?
363
+
364
+ "Morphotypes identification game."; "I like the extra credit (Ostracoda) activities, they were easier than the main ones."; "I liked the videos, they were the most informative."
365
+
366
+ 6. For sections with mini-games, morphotype identification and the paleoreconstruction assessment, the first time you worked with it, did you know what to do? Was it intuitive?
367
+
368
+ The majority of students reported that it was intuitive and that they had no problems navigating between the different steps of each section.
369
+
370
+ § 4.3 DISCUSSION
371
+
372
+ We observed a measurable amount of student interest across FossilSketch submissions overall, via a combination of exercise-submission analysis and qualitative results, although the degree of interest varied across individual exercises and games. The morphotype and genus ID exercises for both types of microfossils comprised the highest number of submissions by a wide margin, with fewer submissions for the template-matching and environmental-reconstruction games in the required portion of the lab assignment: 15 submissions for the chamber matching game and 11 for the environmental reconstruction, out of 32 participants who used the system in the class (see Table 1). For morphotype ID, the 149 total submissions are partially explained by the requirement of completing 3 submissions as part of the lab exercise, but that alone does not account for all of them, since students submitted an average of 4.66 submissions. One possible inference is that students felt encouraged to complete the ID exercises in particular because the design of these activities was more appealing, an observation we found important because these exercises are the most complex in FossilSketch. As Section 3.4 specifies, the ID exercises consist of several interactions, including sketching, pointing, and answering multiple-choice questions over 6-8 separate steps, which build cumulative observations about the morphotype or genus in question. By contrast, the matching games consist of one main interaction and do not involve the student drawing a conclusion. We believe the engaging design and applied problem solving of the ID exercises can account for the increase in total submissions and in average submissions per student, well above the required three.
373
+
374
+ Qualitative feedback was overall positive, with several students describing the user experience as intuitive. Some students specifically mentioned the identification exercises as the activity they enjoyed most. Students rated the videos, games, and ID exercises as very useful for completing the lab assignment. Some students found the games initially difficult, and others considered the subject of micropaleontology difficult in general, but they were able to improve their understanding by referring to the informational materials in FossilSketch, rewatching videos, and repeating exercises in the system. Students were also able to complete the lab assignment remotely at home, which would not have been possible in a traditional lab environment without FossilSketch. Table 2 lists the resources students reported using to complete the lab assignment, with 42 of 58 answers (72%) naming FossilSketch videos, games, or ID exercises.
375
+
376
+ The primary difficulty in interactions was the lack of scaling in the FossilSketch interface, which caused certain low-resolution or zoomed-in displays to cut off UI elements, making it difficult to complete the exercises. Some students changed the zoom level of their screen, which produced the "bugs" mentioned in their qualitative feedback. Some students also expressed disinterest in the system, largely because micropaleontology is not relevant to their major.
377
+
378
+ Overall, we observed that the proposed system succeeded in providing an engaging and informative learning tool that students were interested in using on their own to complete the class's laboratory assignment. The generally positive feedback from students and the large number of submissions for the identification exercises suggest a positive overall learning experience, and indicate that we met our goal of an intuitive educational tool that can be used in tandem with in-class learning.
379
+
380
+ § 5 FUTURE WORK
381
+
382
+ The modular design of FossilSketch provides flexibility in creating course-specific landing pages, so we will continue to iterate on the existing exercises for additional polish and to fix the bugs reported in the study. Additionally, we intend to implement an instructor interface that provides instructors with a login displaying their students' submissions and performance. This interface will also let instructors create their own landing pages from within the website, allowing them to reorder, add, or remove exercises, and to add more Foraminifera and Ostracoda morphotypes and genera to the identification exercises. We expect these additions will allow the system to be deployed in many classrooms by a large number of instructors, without web developers having to implement changes for each instructor's needs.
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/r3G_ReFNpM9/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,419 @@
1
+ # Active Learning Neural C-space Signed Distance Fields for Reduced Deformable Self-Collision
2
+
3
+ Anonymous Review
4
+
5
+ ![01963e5e-48c7-7648-a188-966126714fad_0_230_395_1336_209_0.jpg](images/01963e5e-48c7-7648-a188-966126714fad_0_230_395_1336_209_0.jpg)
6
+
7
+ Figure 1: Examples from our supplementary video, showing self-collision for the bracelet, spring, bunny, and snake models. Self-collision is identified using a learned neural network SDF, and collision response uses the SDF gradient computed via back-propagation within a constraint solver. Neural SDFs work well for low-dimensional reduced spaces (e.g., the bracelet and spring with dimension 3), while models that need more dimensions to provide good reduced deformation (e.g., the bunny with dimension 10, and the snake with dimension 7) have much less accurate learned collision manifolds.
8
+
9
+ ## Abstract
10
+
11
+ We present a novel method to preprocess a reduced model, training a neural network to approximate the reduced model's signed distance field using an active learning technique. The trained neural network is used to evaluate the self-collision state, as well as to handle self-collisions, during real-time simulation. Our offline approach consists of two learning passes. The first pass generates positive and negative point clouds, which are used in the second pass to learn the signed distance field of the reduced subspace. Unlike common fully supervised learning approaches, we make use of a semi-supervised active learning technique to generate more informative training samples, improving convergence speed. We also propose methods to use the learned SDF function for real-time self-collision detection and to assemble it into the constraint Jacobian matrix to resolve self-collisions.
12
+
13
+ Index Terms: reduced model; self-collision; configuration space; signed distance field; active learning; contact constraint
14
+
15
+ ## 1 INTRODUCTION
16
+
17
+ In computer animation, simulating the physics of models usually requires solving large linear systems whose size matches the dimension of the model's generalized coordinates, and this can be costly if the model consists of a huge number of vertices. Model reduction is a technique that approximates the simulation of the full dynamic system with a simplified one by projecting the high-dimensional system onto a low-dimensional subspace. With far fewer variables in the reduced system, the equations can be solved much more quickly while maintaining high fidelity to the original system. Reduced model deformation applies model reduction in computer animation to improve efficiency, and is very relevant to applications such as games and training simulations, where real-time computation is required.
18
+
19
+ Reduced model deformation simplifies solving the dynamic system, but the complexity of self-collision detection for the reduced model is still tied to the complexity of the model mesh, since in the worst case every pair of triangles in the mesh must be tested. Although a number of algorithms and data structures have been proposed to speed up self-collision detection by culling unnecessary tests, such as the BD-Tree and various culling strategies, some triangle-triangle intersection tests remain inevitable.
20
+
21
+ In this paper, we build our work on reduced model deformation, evaluating the model's self-collision state with respect to the reduced coordinates in the configuration space (C-space). We learn a function to approximate the C-space signed distance function of a deformable model. The idea is inspired by the ellipsoid bound used by Barbič and James [5] to conservatively rule out self-collision, but extends that implicit function to a more complex one that represents the actual collision boundary. We show that a single, inexpensive function can replace the collision hierarchy, while also providing the gradients needed to compute a collision response.
22
+
23
+ However, using traditional supervised learning methods in this case poses two challenges. First, the actual C-space self-collision boundary is unknown. Given a random deformation configuration, the only information we can get is the sign (i.e., whether the model is in self-collision or not), so there are no ground truth signed distance values to be used in training. Second, as more modes are used to deform a model, the number of dimensions of the C-space increases, and the number of uniform samples needed to learn the C-space boundary increases exponentially. In order to overcome these difficulties, we use approximated signed distances and eikonal loss terms to help the neural network function learn the C-space signed distance field. We also use active learning as our learning strategy for efficient sampling.
24
+
25
+ Active learning is a kind of semi-supervised learning where the learner automatically chooses the most informative data to label for training, which can improve the convergence of training. With active learning, the picked training data tends to distribute around the ground truth self-collision boundary, so we harvest the point cloud based on this observation and use that to approximate the signed distance value of a given configuration.
26
+
27
+ The contribution of this paper is to explore a new way to preprocess reduced deformable models, using active learning to learn the self-collision signed distance field (SDF) in C-space. We also show how to use the learned SDF function in real-time self-collision detection and self-intersection handling during physics simulations.
28
+
29
+ ## 2 RELATED WORK
30
+
31
+ Our work is based on reduced deformable models. We learn the reduced C-space SDF of reduced models, and use the trained neural network in self-collision detection and self-collision handling. The initial model reduction applications [9, 12, 18] in computer animation are based on linear systems. Since the linear elastic internal forces are computed using the rest-shape stiffness matrix, the deformation produces noticeable artifacts when the model undergoes large deformation. To relieve the distortion produced by large deformation, Barbič and James [3, 4] investigate St. Venant-Kirchhoff deformable models, whose elastic forces are cubic polynomials in the reduced coordinates, and provide methods to evaluate the elastic forces in real time. In addition to solid deformable models, model reduction is also used in acoustic simulations [6, 11] and fluid simulations [20, 22].
32
+
33
+ Self-collision detection (SCD) has been widely studied in computer animation. Bounding volume hierarchies (BVHs) are the most commonly used data structure in both inter-object collision detection and SCD [19]. For cloth surfaces, Volino and Thalmann [21] use an improved hierarchical representation, taking advantage of geometrical regularity to skip SCD between large surface regions that are close, yet impossible to bring into contact. Approaches for improving the speed of SCD have mainly focused on two techniques: improving BVH updates, and culling unnecessary BV node tests. For improving BVH updates, Larsson and Akenine-Möller [15] propose a hybrid update method combining an incremental bottom-up update with a selective top-down update. They later blend associated sets of reference bounding volumes to enable lazy BVH updates [16]. James and Pai [13] propose the bounded deformation tree (BD-Tree), which makes use of information about the deformation modes and updates bounding spheres conservatively. For culling unnecessary tests, subspace self-collision culling [5] precomputes a conservative certificate in C-space that is used to rule out some tests. Energy-based self-collision culling certificates [24] have also been proposed, exploiting the idea that a mesh cannot self-collide unless it deforms enough.
34
+
35
+ Machine learning (ML) methods build models from sample data, and a trained model can serve as a fast approximation of the studied problem. Machine learning has been used abundantly in robotics, geometry processing, and computer animation, and works well as a black-box algorithm. Jiang and Liu [14] use a fully connected neural network to fit human motion with limits such as self-contact, and they use network gradients to define constraint directions, which inspired our self-collision response. Neural networks are also used in geometry reconstruction. Atzmon and Lipman [1] propose a sign agnostic learning (SAL) method, in which an unsigned loss function is used to learn the signed distance field defined by the geometry. SAL was later improved into SALD [2], where derivatives are incorporated into the regression loss, which inspired the eikonal term in our loss function. Similar to our work on self-collision, Zhesch et al. [23] also propose neural collision detection for reduced deformable models, with a focus on collision between objects.
36
+
37
+ Machine learning has also been used to learn the physics of animation. Fulton et al. [8] use autoencoder neural networks to produce reduced model dynamics. Holden et al. [10] propose a data-driven reduced-model physics simulation method that includes collision response and satisfies the memory and performance constraints imposed by modern interactive applications. One machine learning technique of particular interest to us is active learning, which automatically chooses samples to label and can thus improve the convergence rate compared with regular supervised learning. Pan et al. [17] propose an active learning approach to learn the C-space boundary between rigid bodies and use the boundary to approximate global penetration depth. In our work, labeling a sample requires performing self-collision detection on the model, which can be time-consuming depending on the model's complexity, so active learning is a key part of our work since it greatly reduces the number of samples that require labeling.
38
+
39
+ ## 3 Reduced Model C-space Signed Distance
40
+
41
+ The deformation of a reduced model is represented by a reduced coordinate (or deformation configuration) $\mathbf{q} \in {\mathbb{R}}^{r}$, where $r$ is the number of deformation modes. The full-coordinate vertex displacement is then reconstructed by $\Delta \mathbf{x} = \mathbf{U}\mathbf{q}$, where each column of the matrix $\mathbf{U} \in {\mathbb{R}}^{{3n} \times r}$ is a deformation mode. The $r$-D space in which the configuration $\mathbf{q}$ lives is the configuration space (C-space).
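A minimal numerical sketch of this reconstruction; the basis $\mathbf{U}$ and configuration $\mathbf{q}$ below are made up for illustration:

```python
import numpy as np

# Minimal sketch of reduced kinematics: n = 4 vertices (12 DOF) and r = 2 modes.
n, r = 4, 2
rng = np.random.default_rng(0)
U = rng.standard_normal((3 * n, r))   # each column is one deformation mode
q = np.array([0.3, -0.1])             # a point in the 2-D C-space

dx = U @ q                            # full-coordinate displacement, a 3n-vector
print(dx.reshape(n, 3))               # per-vertex (x, y, z) displacements
```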
42
+
43
+ ![01963e5e-48c7-7648-a188-966126714fad_1_933_156_709_435_0.jpg](images/01963e5e-48c7-7648-a188-966126714fad_1_933_156_709_435_0.jpg)
44
+
45
+ Figure 2: Bracelet model C-space with 2 modes, showing a configuration in the free space (left), a contact configuration on the boundary (middle), and a configuration involving interpenetration (right).
46
+
47
+ The signed distances in C-space are determined by a self-collision boundary ${T}_{\text{bound }}$, the set of configurations at which the reduced model deforms to just make self-contact. The boundary divides the C-space into the collision space ${T}_{\text{collision }}$, where configurations produce self-intersections, and the free space ${T}_{\text{free }}$, where the model is free of self-collision.
48
+
49
+ Sign: The sign indicates whether the model is self-collision free. We use $t\left( \mathbf{q}\right) \in \{ 1, - 1\}$ to denote the target sign of the given configuration $\mathbf{q}$. If $\mathbf{q}$ puts the model in self-intersection or makes it just touch itself, then $t\left( \mathbf{q}\right) = - 1$; otherwise $t\left( \mathbf{q}\right) = 1$. All the positive signs form the free space ${T}_{\text{free }} = \{ \mathbf{q} \mid t\left( \mathbf{q}\right) = 1\}$ and all the negative signs form the collision space ${T}_{\text{collision }} = \{ \mathbf{q} \mid t\left( \mathbf{q}\right) = - 1\}$.
50
+
51
+ Distance: The distance is defined as the Euclidean distance from the configuration to the closest point in ${T}_{\text{bound }}$ , i.e., $d\left( \mathbf{q}\right) =$ $\mathop{\min }\limits_{{{\mathbf{q}}^{ * } \in {T}_{\text{bound }}}}{\begin{Vmatrix}\mathbf{q} - {\mathbf{q}}^{ * }\end{Vmatrix}}_{2}.$
52
+
53
+ Figure 2 shows example configurations in the 2D C-space of a bracelet model, as well as the geometry under each deformation. When $\mathbf{q}$ causes the model to just touch itself, $\mathbf{q}$ lies on the self-collision boundary (Figure 2, middle), which is drawn as a red line. The colored signed distance field (SDF) shows the closest Euclidean distance to the boundary. The self-collision boundary and the SDF are what we want to learn with neural networks. Note that a reduced model may have more than two modes; in that case the target collision boundary and SDF live in an $r$-D C-space, where $r$ is the number of modes.
54
+
55
+ The dashed line in Figure 2 shows an equal-energy level set, on which the configurations produce the same elastic energy. Intuitively, the model does not deform enough to produce self-contact unless it reaches a certain amount of elastic energy, so it would be reasonable to sample within an equal-energy bound during training. The equal-energy level set here is a sphere because the deformation modes are obtained from linear modal analysis (LMA), but it can become irregular if the deformation modes are obtained from modal derivatives or manual selection. For generality, we set the training and sampling domain to the hypercube $\left\lbrack {-1,1}\right\rbrack^{r}$. To make sure the configurations encountered during simulation are safely contained in the sampling domain, we simulate the model to collect the maximum absolute value of each configuration entry and scale the deformation bases before the learning process.
56
+
57
+ ![01963e5e-48c7-7648-a188-966126714fad_2_163_154_696_286_0.jpg](images/01963e5e-48c7-7648-a188-966126714fad_2_163_154_696_286_0.jpg)
58
+
59
+ Figure 3: Exploitation samples near the boundary help improve local accuracy, while exploration samples help identify missing parts of the boundary.
60
+
61
+ ## 4 ACTIVE LEARNING C-SPACE SDF
62
+
63
+ We use a two-pass active learning algorithm to train a neural network to represent a C-space SDF with the sample labels only consisting of signs. In the first pass, we use active learning to learn the collision boundary, but our main goal is to cache the growing training set as a point cloud. In the second pass, we train the neural network to learn the SDF using the cached point cloud in the first pass.
64
+
65
+ ### 4.1 Two-Pass Active Learning Overview
66
+
67
+ Active learning (AL) is a semi-supervised machine learning approach. It has been used by Pan et al. [17] to learn the inter-object rigid-body C-space boundary, with great success. During the training process, an active learner continuously chooses samples from an unlabeled data pool, and the selected data are labeled to train the machine learning model.
68
+
69
+ In active learning, exploitation and exploration strategies are used to choose samples. Figure 3 shows an example of exploitation samples and exploration samples. Exploitation is good at selecting data close to the current decision boundary and helps refine that boundary efficiently, but used alone it can cause serious sample bias and consequently poor performance. Exploration is good at shaping the overall structure of the decision boundary and at selecting samples in undetected regions, but used alone it refines the boundary only slowly.
70
+
71
+ For learning the C-space SDF of a reduced model, we perform active learning twice, each time with a different purpose. The first pass generates a point cloud in which all the samples are divided according to their signs. In the second pass, this point cloud is used to compute approximated signed distances by looking for the closest points to the samples. In the following discussions, we use subscript $i$ to denote the learning iteration, superscript $(k)$ to denote a sample index inside a batch, and $f$ to represent the neural network function.
72
+
73
+ Both passes go through a fixed number of iterations to train a neural network to fit the collision boundary. For the first pass, the loss function to optimize consists of a sign loss term ${L}_{\text{sign }}$ and an eikonal loss term ${L}_{\text{eik }}$. At each iteration of the first pass, we first generate adaptive training samples ${\mathbf{Q}}_{i}$ using both the exploration and exploitation strategies, and query for their signs ${\mathbf{T}}_{i}$ by performing SCD. Then we add the generated samples $\left( {{\mathbf{Q}}_{i},{\mathbf{T}}_{i}}\right)$ to the adaptive training batch $\left( {{\mathbf{Q}}_{\text{adapt }},{\mathbf{T}}_{\text{adapt }}}\right)$, which is maintained and grows at each iteration. The adaptive training batch corresponds to the sign loss term ${L}_{\text{sign }}$, which measures the sign prediction error. In addition, eikonal samples ${\mathbf{Q}}_{{\text{eik }}_{i}}$ are randomly generated at each iteration; these correspond to the eikonal loss term ${L}_{\text{eik }}$, which constrains the gradient magnitude. The neural network $f$ is then trained for a user-defined number of epochs $n$. Note that we use incremental training, which means the neural network begins step $i$ with the trained network from step $i - 1$. After the first pass training is finished, the adaptive training batch is divided according to the signs of the samples to form the cached point cloud.
74
+
75
+ In addition to ${L}_{\text{sign }}$ and ${L}_{\text{eik }}$, the loss function for the second pass has a signed distance loss ${L}_{\mathrm{sd}}$ that trains the neural network to learn the signed distances. The adaptive training batch and the eikonal samples are generated in the same way as in the first pass. Additionally, for the second pass we uniformly generate samples ${\mathbf{Q}}_{{\mathrm{{sd}}}_{i}}$ and query the input point cloud for their approximated signed distances ${\mathbf{D}}_{i}$. The neural network is then trained with the adaptive training batch $\left( {{\mathbf{Q}}_{\text{adapt }},{\mathbf{T}}_{\text{adapt }}}\right)$, the eikonal samples ${\mathbf{Q}}_{{\text{eik }}_{i}}$, and the signed distance batch $\left( {{\mathbf{Q}}_{{\mathrm{{sd}}}_{i}},{\mathbf{D}}_{i}}\right)$.
76
+
77
+ ### 4.2 Exploration Samples
78
+
79
+ Exploration samples serve to detect regions, bubbles, and sharp features that the current network fails to recognize. In our approach, we uniformly generate ${N}_{\text{explore }}$ random configurations ${\mathbf{Q}}_{{\text{rand }}_{i}}$ and use the current network to predict their signs. If a predicted sign is wrong, we add the sample to the training batch. So at each step, the exploration sample batch is
80
+
81
+ $$
+ {\mathbf{Q}}_{{\text{exploration}}_{i}} = \left\{ {\mathbf{Q}}_{{\text{rand}}_{i}}^{\left( k\right)} \;\middle|\; f\left( {\mathbf{Q}}_{{\text{rand}}_{i}}^{\left( k\right)}\right) \, t\left( {\mathbf{Q}}_{{\text{rand}}_{i}}^{\left( k\right)}\right) < 0,\; k \in \left\lbrack {1,{N}_{\text{explore}}}\right\rbrack \right\}. \tag{1}
+ $$
88
+
89
+ Batch size ${N}_{\text{explore }}$ can be relatively small (we choose ${N}_{\text{explore }} = 500$ in our tests), since the sign query can be expensive depending on the complexity of the model; in practice, the accumulation of exploration samples does help in detecting bubbles and sharp features.
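A PyTorch-style sketch of this exploration step (Eq. 1); `f` stands in for the current network and `true_sign` for the expensive self-collision test, both assumed to be provided:

```python
import torch

# Sketch of exploration sampling (Eq. 1): uniform random configurations whose
# sign the current network f predicts incorrectly; true_sign wraps the SCD test.
def explore(f, true_sign, r, n_explore=500):
    q_rand = torch.rand(n_explore, r) * 2.0 - 1.0       # uniform in [-1, 1]^r
    labels = torch.tensor([true_sign(q) for q in q_rand], dtype=torch.float32)
    with torch.no_grad():
        pred = f(q_rand).squeeze(-1)
    wrong = pred * labels < 0                           # f(q) t(q) < 0 as in Eq. 1
    return q_rand[wrong], labels[wrong]
```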
90
+
91
+ ### 4.3 Exploitation Samples
92
+
93
+ Exploitation samples help refine the network's sign decision boundary and push the prediction boundary $\{ \mathbf{q} \mid f\left( \mathbf{q}\right) = 0\}$ closer to the ground truth ${T}_{\text{bound }}$. As learning progresses, the exploitation samples tend to concentrate around ${T}_{\text{bound }}$. In our approach, we first uniformly generate a random configuration pool ${\mathbf{Q}}_{{\text{pool }}_{i}}$ with ${N}_{\text{pool }}$ samples. Then we find candidate samples ${\mathbf{Q}}_{{\text{cand }}_{i}}$ that are closest to the current prediction boundary: the top ${N}_{\text{cand }}$ samples with the highest scores, computed by
94
+
95
+ $$
96
+ \text{ score } = \frac{1}{1 + \left| {f\left( {\mathbf{Q}}_{{\text{pool }}_{i}}^{\left( k\right) }\right) }\right| }, \tag{2}
97
+ $$
98
+
99
+ are picked as candidates ${\mathbf{Q}}_{{\text{cand }}_{i}}$ . From ${\mathbf{Q}}_{{\text{cand }}_{i}}$ , we pick the samples with wrong sign predictions and additionally ${N}_{\text{extra }}$ samples with the highest scores to form exploitation samples ${\mathbf{Q}}_{\text{exploit }}$ .
100
+
101
+ In practice, ${N}_{\text{pool }}$ can be large (we choose ${N}_{\text{pool }} = {50000}$ ) since evaluating the network output is cheap. Batch sizes ${N}_{\text{cand }}$ and ${N}_{\text{extra }}$ are relatively small, and we set ${N}_{\text{cand }} = {500}$ and ${N}_{\text{extra }} = {80}$ in our tests.
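A sketch of the exploitation step under the same assumptions (`f` and `true_sign` are stand-ins, as before):

```python
import torch

# Sketch of exploitation sampling (Eq. 2); f and true_sign are stand-ins.
def exploit(f, true_sign, r, n_pool=50000, n_cand=500, n_extra=80):
    q_pool = torch.rand(n_pool, r) * 2.0 - 1.0             # uniform pool in [-1, 1]^r
    with torch.no_grad():
        score = 1.0 / (1.0 + f(q_pool).squeeze(-1).abs())  # Eq. 2: high near f = 0
    q_cand = q_pool[torch.topk(score, n_cand).indices]     # sorted by descending score
    labels = torch.tensor([true_sign(q) for q in q_cand], dtype=torch.float32)
    with torch.no_grad():
        pred = f(q_cand).squeeze(-1)
    keep = pred * labels < 0                               # wrong sign predictions...
    keep[:n_extra] = True                                  # ...plus the top-scored extras
    return q_cand[keep], labels[keep]
```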
102
+
103
+ ### 4.4 Eikonal Samples
104
+
105
+ The eikonal samples correspond to an eikonal loss term ${L}_{\text{eik }}$ in the loss function that imposes constraints on the gradient magnitude of the neural network function. The eikonal samples are uniformly drawn in the domain at each iteration to push the learning towards a function with unit gradient, so that the neural network learns not only the self-collision boundary, but also the Euclidean distance to the boundary.
106
+
107
+ Since the neural network aims to learn the SDF over the whole C-space, the gradient magnitude constraint should be applied uniformly everywhere within the sample domain. However, as the number of modes increases, the number of uniform eikonal samples needed to cover the sample domain grows exponentially, and the cost of computing the eikonal loss grows with it. We therefore borrow the idea of stochastic gradient descent and randomly draw ${N}_{\text{eik }}$ eikonal samples at each step (we set ${N}_{\text{eik }} = 5000$ in our tests). At each step, the eikonal loss is computed from the new eikonal samples, so that the gradient magnitude stochastically converges to the desired range.
108
+
109
+ The eikonal loss uses the magnitude of the network gradient with respect to the input, and applies a penalty when the gradient magnitude is not 1 or not within a user-set range. The eikonal term is computed by
110
+
111
+ $$
112
+ {L}_{\text{eik }} = \frac{1}{{N}_{\text{eik }}}\mathop{\sum }\limits_{{k = 1}}^{{N}_{\text{eik }}}h\left( \left| {\nabla f\left( \mathbf{Q}_{{\text{eik }}_{i}}^{\left( k\right)}\right) }\right| \right) , \tag{3}
113
+ $$
114
+
115
+ $$
116
+ \text{where}h\left( x\right) = \left\{ \begin{array}{ll} x + \frac{1}{x} & 0 < x < 1 \\ 2 & 1 \leq x \leq 1 + \xi . \\ x - \xi + \frac{1}{x - \xi } & x > 1 + \xi \end{array}\right. \tag{4}
117
+ $$
118
+
119
+ In the eikonal loss, we use a piecewise loss function $h\left( x\right)$ that imposes an infinitely large penalty as the gradient magnitude approaches 0 or $+ \infty$, and a smaller penalty as the gradient magnitude approaches a biased region near 1. In practice, we set $\xi = {0.2}$. The biased region is set slightly larger than 1 because we want the trained neural network to be more decisive around the decision boundary, meaning the trained SDF should have a slightly larger gradient magnitude there rather than a smaller one. This matters because when the trained neural network is used for self-collision response, the queried configuration is always somewhat off the boundary due to time discretization, and we want the gradient to still point (at least generally) towards the closest point on the boundary. If the trained SDF has a small gradient magnitude near the boundary, the gradient direction slightly off the boundary is likely to be unreliable and may point in an arbitrary direction.
120
+
121
+ One of the challenges in implementing the eikonal loss is that second-order derivatives of the network function are needed to optimize it. By the chain rule, the gradient used to update the weights of the neural network includes two parts: first, the derivative of the eikonal loss function itself; second, the gradient of the eikonal loss argument $\nabla f\left( {\mathbf{Q}}_{{\text{eik }}_{i}}\right)$ with respect to the network weights, which is a second-order derivative. Some neural network tools do not support back-propagation through second-order derivatives. In our implementation, the network gradient $\nabla f\left( {\mathbf{Q}}_{{\text{eik }}_{i}}\right)$ is computed using finite differences, so that the second-order derivatives can be treated like first-order derivatives and computed by back-propagation.
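A sketch of the eikonal term (Eqs. 3-4) with the network gradient estimated by central finite differences, so only first-order back-propagation is needed; the small clamps below are a numerical-safety detail added here, not part of the paper:

```python
import torch

# Sketch of the eikonal loss (Eqs. 3-4); f is assumed to map (N, r) -> (N, 1).
def h(x, xi=0.2):
    lo = x + 1.0 / x.clamp(min=1e-6)          # branch for 0 < x < 1
    base = (x - xi).clamp(min=1e-6)
    hi = base + 1.0 / base                    # branch for x > 1 + xi
    mid = torch.full_like(x, 2.0)             # flat minimum on [1, 1 + xi]
    return torch.where(x < 1.0, lo, torch.where(x <= 1.0 + xi, mid, hi))

def eikonal_loss(f, q_eik, eps=1e-3, xi=0.2):
    grads = []
    for d in range(q_eik.shape[1]):           # one central difference per dimension
        step = torch.zeros_like(q_eik)
        step[:, d] = eps
        grads.append((f(q_eik + step) - f(q_eik - step)) / (2.0 * eps))
    grad_mag = torch.cat(grads, dim=1).norm(dim=1)   # |∇f| for each sample
    return h(grad_mag, xi).mean()
```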
122
+
123
+ ### 4.5 First Pass Learning (Point-cloud Generation Pass)
124
+
125
+ The first pass treats the input training data as a binary classification problem, but the main purpose is to collect the point cloud, which is used to generate approximated signed distance data in the second pass.
126
+
127
+ Point Cloud Collection: The point cloud is meant to be used to generate approximated signed distances, so its samples need to spread densely around the whole self-collision boundary, without missing any bubbles or sharp features. The adaptive training samples picked by exploitation and exploration naturally meet this demand, so we use the adaptive training batch of the first pass as the point cloud. We divide the adaptive training batch $\left( {{\mathbf{Q}}_{\text{adapt }},{\mathbf{T}}_{\text{adapt }}}\right)$ into a positive cloud and a negative cloud according to the signs of the samples; then ${\mathbf{Q}}_{\text{pos }}$ and ${\mathbf{Q}}_{\text{neg }}$ are
128
+
129
+ $$
130
+ {\mathbf{Q}}_{\text{pos }} = \left\{ {{\mathbf{Q}}_{\text{adapt }}^{\left( k\right) } \mid {\mathbf{T}}_{\text{adapt }}^{\left( k\right) } = 1, k \in \left\lbrack {1,{N}_{\text{adapt }}}\right\rbrack }\right\} , \tag{5}
131
+ $$
132
+
133
+ $$
134
+ {\mathbf{Q}}_{\text{neg }} = \left\{ {{\mathbf{Q}}_{\text{adapt }}^{\left( k\right) } \mid {\mathbf{T}}_{\text{adapt }}^{\left( k\right) } = - 1, k \in \left\lbrack {1,{N}_{\text{adapt }}}\right\rbrack }\right\} , \tag{6}
135
+ $$
136
+
137
+ where ${N}_{\text{adapt }}$ is the number of samples in the adaptive training batch.
138
+
139
+ First Pass Loss Function: The loss $L$ of the first pass consists of the sign loss and the eikonal loss, and is computed by
140
+
141
+ $$
142
+ L = {L}_{\text{sign }} + \lambda {L}_{\text{eik }}. \tag{7}
143
+ $$
144
+
145
+ The eikonal loss term is evaluated according to the gradient magnitude of the network, which is discussed in Section 4.4.
146
+
147
+ The sign loss penalizes mismatches between predicted and target signs: samples contribute to this loss term only when the two signs disagree. The sign loss is computed by
148
+
149
+ $$
150
+ {L}_{\text{sign }} = \mathop{\sum }\limits_{{k = 1}}^{{N}_{\text{adapt }}}W\left( {\mathbf{Q}}_{\text{adapt }}^{\left( k\right) }\right) \max \left\{ {-{T}_{\text{adapt }}^{\left( k\right) }\left( {{2\sigma }\left( {f\left( {\mathbf{Q}}_{\text{adapt }}^{\left( k\right) }\right) }\right) - 1}\right) ,0}\right\} ,
151
+ $$
152
+
153
+ (8)
154
+
155
+ $$
156
+ \text{where}W\left( x\right) = \frac{{e}^{-\left| {f\left( x\right) }\right| }}{\mathop{\sum }\limits_{{j = 1}}^{{N}_{\text{adapt }}}{e}^{-\left| {f\left( {\mathbf{Q}}_{\text{adapt }}^{\left( j\right) }\right) }\right| }}\text{.} \tag{9}
157
+ $$
158
+
159
+ Here we apply the weight function $W\left( x\right)$ to give more importance to the samples closer to the decision boundary $f = 0$. The number of samples in the adaptive training set is ${N}_{\text{adapt }}$, and $\sigma \left( x\right)$ is the sigmoid function.
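A sketch of this weighted sign loss (Eqs. 8-9), assuming `q_adapt` holds the adaptive samples and `t_adapt` their ±1 labels:

```python
import torch

# Sketch of the weighted sign loss (Eqs. 8-9): softmax(-|f|) reproduces the
# boundary-biased weights W, and only mismatched signs contribute to the sum.
def sign_loss(f, q_adapt, t_adapt):
    pred = f(q_adapt).squeeze(-1)
    w = torch.softmax(-pred.abs(), dim=0)                # Eq. 9
    margin = -t_adapt * (2.0 * torch.sigmoid(pred) - 1.0)
    return (w * torch.clamp(margin, min=0.0)).sum()      # Eq. 8
```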
160
+
161
+ ### 4.6 Second Pass Learning
162
+
163
+ The second pass aims to learn the SDF to the C-space boundary. Additional to the first pass, we maintain a signed distance training batch $\left( {{\mathbf{Q}}_{\text{sd }},\mathbf{D}}\right)$ , which keeps growing as the training moves on. At each iteration, we uniformly generate configurations ${\mathbf{Q}}_{{\mathbf{{sd}}}_{i}}$ for signed distance samples, and query the model for signs and the input point cloud for their signed distances ${\mathbf{D}}_{i}$ . Then we use the accumulated signed distance batch to help guide the neural network to learn the SDF to the boundary.
164
+
165
+ Signed Distance Query: For each configuration ${\mathbf{Q}}_{{\mathrm{{sd}}}_{i}}^{\left( k\right) }$ from the signed distance samples, we approximate the closest distances ${\mathbf{D}}_{i}^{\left( k\right) }$ by finding the closest samples in the point cloud of the opposite sign,
166
+
167
+ $$
168
+ {\mathbf{D}}_{i}^{\left( k\right) } = \left\{ {\begin{matrix} \mathop{\min }\limits_{{\mathbf{q} \in {\mathbf{Q}}_{\text{neg }}}}{\begin{Vmatrix}\mathbf{q} - {\mathbf{Q}}_{{\mathrm{{sd}}}_{i}}^{\left( k\right) }\end{Vmatrix}}_{2}, & \text{ if }\;T\left( {\mathbf{Q}}_{{\mathrm{{sd}}}_{i}}^{\left( k\right) }\right) = 1 \\ - \mathop{\min }\limits_{{\mathbf{q} \in {\mathbf{Q}}_{\text{pos }}}}{\begin{Vmatrix}\mathbf{q} - {\mathbf{Q}}_{{\mathrm{{sd}}}_{i}}^{\left( k\right) }\end{Vmatrix}}_{2}, & \text{ if }\;T\left( {\mathbf{Q}}_{{\mathrm{{sd}}}_{i}}^{\left( k\right) }\right) = - 1 \end{matrix}.}\right. \tag{10}
169
+ $$
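A sketch of this query (Eq. 10), with `q_pos`/`q_neg` as the cached clouds and `sign` the label from self-collision detection:

```python
import torch

# Sketch of the signed-distance query (Eq. 10) against the cached point clouds.
def query_signed_distance(q, sign, q_pos, q_neg):
    # q: (r,) configuration; sign: +1/-1 from SCD; q_pos, q_neg: (N, r) clouds
    if sign > 0:
        return torch.cdist(q[None], q_neg).min().item()   # closest negative sample
    return -torch.cdist(q[None], q_pos).min().item()      # negative inside collision
```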
170
+
171
+ Second Pass Loss Function: The loss $L$ of the second pass is composed of three terms: sign loss, eikonal loss and signed distance loss, i.e.,
172
+
173
+ $$
174
+ L = {L}_{\text{sign }} + {\lambda }_{1}{L}_{\text{eik }} + {\lambda }_{2}{L}_{\text{sd }}. \tag{11}
175
+ $$
176
+
177
+ The sign loss and eikonal loss are the same as in the first pass. The signed distance loss ${L}_{\mathrm{{sd}}}$ penalizes the difference between the predicted distances and the reference distances, and takes the accumulated signed distance batch $\left( {{\mathbf{Q}}_{\mathrm{{sd}}},\mathbf{D}}\right)$ as input. Since the signed distances obtained from the point cloud are only approximations, we apply to each signed distance sample a weight that measures its confidence. Supposing there are ${N}_{\mathrm{{sd}}}$ samples in the signed distance batch, the weighted signed distance loss becomes
178
+
179
+ $$
180
+ {L}_{\mathrm{{sd}}} = \mathop{\sum }\limits_{{k = 1}}^{{N}_{\mathrm{{sd}}}}W\left( {w\left( {\mathbf{D}}^{\left( k\right) }\right) }\right) {\left( f\left( {\mathbf{Q}}_{\mathrm{{sd}}}^{\left( k\right) }\right) - {\mathbf{D}}^{\left( k\right) }\right) }^{2}, \tag{12}
181
+ $$
182
+
183
+ $$
184
+ \text{where}W\left( x\right) = \frac{{e}^{-x}}{\mathop{\sum }\limits_{{j = 1}}^{{N}_{\mathrm{{sd}}}}{e}^{-w\left( {\mathbf{D}}^{\left( j\right) }\right) }}\text{.} \tag{13}
185
+ $$
186
+
187
+ The function $w\left( x\right)$ gives a trusting weight that measures the confidence according to the input signed distance.
188
+
189
+ Trusting Weight: Since the point cloud is an approximate, discretized representation of ${T}_{\text{bound }}$, the distance computed from the point cloud is an approximation of the ground truth signed distance. Thus, we assign a trusting weight $w\left( x\right)$ to each signed distance sample. The trusting weight reflects the intuition that when the queried configuration is far from the collision boundary, so that the approximated distance is large compared to the granularity of the point cloud representation, the error introduced by the distance approximation can be ignored. In this sense, we map the signed distance to a piecewise weight function
190
+
191
+ $$
192
+ w\left( x\right) = \left\{ \begin{array}{ll} 1 & x \leq - {\eta }_{2} \\ \frac{1}{2}\left( {\cos \frac{x + {\eta }_{2}}{{\eta }_{2} - {\eta }_{1}}\pi + 1}\right) & - {\eta }_{2} < x \leq - {\eta }_{1} \\ 0 & - {\eta }_{1} < x < {\eta }_{1} \\ \frac{1}{2}\left( {\cos \frac{x - {\eta }_{2}}{{\eta }_{2} - {\eta }_{1}}\pi + 1}\right) & {\eta }_{1} \leq x < {\eta }_{2} \\ 1 & x \geq {\eta }_{2} \end{array}\right. \tag{14}
193
+ $$
194
+
195
+ Note that ${\eta }_{1}$ serves as the distance threshold where the learner starts to trust, and ${\eta }_{2}$ is the threshold beyond which the distance is fully trusted. Given that our sampling domain is the hypercube $\left\lbrack {-1,1}\right\rbrack^{r}$, ${\eta }_{1}$ and ${\eta }_{2}$ are set as
196
+
197
+ $$
198
+ {\eta }_{1} = \frac{2}{{\left( {N}_{\text{pos }} + {N}_{\text{neg }}\right) }^{\frac{1}{r}}}, \tag{15}
199
+ $$
200
+
201
+ $$
202
+ {\eta }_{2} = \alpha {\eta }_{1}, \tag{16}
203
+ $$
204
+
205
+ where ${N}_{\text{pos }}$ and ${N}_{\text{neg }}$ are the numbers of samples in the positive and negative point clouds, and $\alpha$ is a user-defined hyperparameter (we set $\alpha = {10}$ in our tests). The weight function is symmetric with respect to $x = 0$, since it depends only on the unsigned distance. It produces weight 0 when the unsigned distance is smaller than ${\eta }_{1}$, and weight 1 when the unsigned distance is larger than ${\eta }_{2}$, indicating that we fully trust the provided distance; a cosine function interpolates the weights in between.
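A sketch of the trusting weight (Eqs. 14-16); `n_pos`, `n_neg`, `r`, and `alpha` follow the definitions above:

```python
import math

# Sketch of the trusting weight (Eqs. 14-16); symmetric in the signed distance d.
def trusting_weight(d, n_pos, n_neg, r, alpha=10.0):
    eta1 = 2.0 / (n_pos + n_neg) ** (1.0 / r)    # Eq. 15: point-cloud granularity
    eta2 = alpha * eta1                          # Eq. 16
    x = abs(d)
    if x <= eta1:
        return 0.0                               # within granularity: distrust
    if x >= eta2:
        return 1.0                               # far from boundary: fully trust
    return 0.5 * (math.cos((x - eta2) / (eta2 - eta1) * math.pi) + 1.0)

print(trusting_weight(0.5, n_pos=4000, n_neg=4000, r=3))
```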
206
+
207
+ ## 5 REAL-TIME SIMULATION
208
+
209
+ Our contribution in real-time simulation consists of real-time SCD and collision response. For self-collision detection, the trained neural network $f\left( \mathbf{q}\right)$ is used to replace the algorithmic methods, and we evaluate the SDF function instead of observing the geometry of the model. The collision response includes collision handling between pairs of reduced models and the self-collision response of each model where the network gradient $\nabla f\left( \mathbf{q}\right)$ is used. The collision response forces are generated by first forming constraint Jacobian matrices that define the contact constraints and then solving for the Lagrange multipliers that represent the response forces.
210
+
211
+ ### 5.1 Real-time Self-collision Detection
212
+
213
+ During the real-time simulation, we need to detect whether the model is in self-collision at each time step. Instead of resorting to traditional geometrical intersection tests, we evaluate the learned SDF function $f\left( \mathbf{q}\right)$ . Although the prediction boundary $f\left( \mathbf{q}\right) = 0$ does not completely align with the ground truth, it can still work well for self-collision detection because the slight misalignment of the boundary is not easily visible in the form of geometrical self-intersection.
214
+
215
+ At each time step, we plug the current deformation configuration $\mathbf{q}$ into the evaluation function. If $f\left( \mathbf{q}\right) > 0$, the model is considered self-collision free, regardless of the actual shape of the model. If $f\left( \mathbf{q}\right) < 0$, the model is considered in self-collision, and we need to compute the configuration velocity update induced by the self-contact constraint, discussed in the next section.
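A sketch of this per-step test; `f` is the trained network, `q` the current configuration, and the returned gradient row feeds the constraint solver of Section 5.2:

```python
import torch

# Sketch of the per-step self-collision test and constraint-row extraction.
def self_collision_step(f, q):
    with torch.no_grad():
        if f(q[None]).item() > 0.0:
            return None                   # collision free: no constraint this step
    q_in = q.clone().requires_grad_(True)
    f(q_in[None]).sum().backward()        # ∇f(q) via back-propagation
    return q_in.grad                      # gradient row for the contact constraint
```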
216
+
217
+ ### 5.2 Real-time Collision Response
218
+
219
+ Our real-time collision response is based on the contact constraint used by Erleben [7] for solving rigid body contacts. Since we simulate reduced models in a way that mixes rigid body motion and elastic deformation, we can easily extend the rigid body contact to reduced elastic body contact by adding extra entries to the contact constraint matrix, thereby incorporating deformation.
220
+
221
+ ![01963e5e-48c7-7648-a188-966126714fad_4_943_174_674_232_0.jpg](images/01963e5e-48c7-7648-a188-966126714fad_4_943_174_674_232_0.jpg)
222
+
223
+ Figure 4: Diagram showing contact between objects $i$ and $j$, with contact point positions shown at left and velocities at the contact point shown at right.
224
+
225
+ We include the gradient of the learned function in the constraint matrix to form self-collision contact constraints. We can thus solve for a configuration velocity update such that, in the next time step, the configuration velocity does not take the configuration deeper into the collision space.
226
+
227
+ #### 5.2.1 Mix of Rigid and Elastic Motion
228
+
229
+ Our simulation of a reduced model consists of the rigid motion (translation and rotation) of the center of mass (COM) and the reduced elastic deformation of the model. Initially the origin of the COM frame is set to be the center of mass of the model at the rest shape. For each vertex, we compute the deformed position in the COM frame and then transform it into the world frame to get the world position
230
+
231
+ $$
232
+ {\mathbf{x}}_{\mathrm{w}} = \mathbf{R}\left( {{\mathbf{x}}_{0} + \mathbf{{Uq}}}\right) + \mathbf{p}, \tag{17}
233
+ $$
234
+
235
+ where $\mathbf{R}$ and $\mathbf{p}$ are the rotation matrix and world position of the center of mass, and ${\mathbf{x}}_{0}$ is the initial position of the vertex in the COM frame. The rotation matrix $\mathbf{R}$ is the matrix form of the axis-angle rotation representation $\mathbf{\theta } \in {\mathbb{R}}^{3}$, obtained by Rodrigues' formula.
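A sketch of Eq. 17, building $\mathbf{R}$ from the axis-angle vector via Rodrigues' formula:

```python
import numpy as np

# Sketch of Eq. 17: world position of one vertex from rigid COM motion plus
# reduced deformation, with R built from axis-angle theta by Rodrigues' formula.
def rodrigues(theta):
    angle = np.linalg.norm(theta)
    if angle < 1e-12:
        return np.eye(3)
    k = theta / angle
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])    # cross-product matrix of the axis
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def world_position(x0, U_v, q, theta, p):
    # x0: rest position in the COM frame; U_v: (3, r) rows of U for this vertex
    return rodrigues(theta) @ (x0 + U_v @ q) + p
```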
236
+
237
+ In the following discussion, we use a generalized coordinate $\widetilde{\mathbf{x}}$ to represent the rigid motion and reduced deformation of the model,
238
+
239
+ $$
240
+ \widetilde{\mathbf{x}} = \left\lbrack \begin{matrix} \mathbf{p} \\ \mathbf{\theta } \\ \mathbf{q} \end{matrix}\right\rbrack \in {\mathbb{R}}^{6 + r}. \tag{18}
241
+ $$
242
+
243
+ The approximation made here is that we disregard the rotational inertia change due to deformations. Since our focus is on collision detection and response, we make the approximation to enable a simple extension from rigid body contact constraint to reduced model contact constraint. In practice, the reduced model still behaves naturally after applying the approximation in our simulation.
244
+
245
+ #### 5.2.2 Reduced Model Contact Constraints
246
+
247
+ In order to solve the contact between two objects, the relative velocity at the contact point must be zero or separating in the normal direction, and this inequality forms one row of the contact constraint matrix. Figure 4 shows an example contact, where $\mathbf{n}$ is the normal of the tangent plane pointing from $i$ to $j$, and ${\mathbf{r}}_{i},{\mathbf{r}}_{j}$ are the positions of the contact point relative to the COMs of the two objects. We can write the velocity-level constraint as
+
+ $$
+ \mathbf{n}^{T}\left( \overrightarrow{{\mathbf{v}}_{j}} + \overrightarrow{{\mathbf{\omega }}_{j}} \times {\mathbf{r}}_{j} + {\mathbf{U}}_{j}\overrightarrow{{\mathbf{u}}_{j}} - \overrightarrow{{\mathbf{v}}_{i}} - \overrightarrow{{\mathbf{\omega }}_{i}} \times {\mathbf{r}}_{i} - {\mathbf{U}}_{i}\overrightarrow{{\mathbf{u}}_{i}} \right) \geq 0, \tag{19}
+ $$
+
+ where $\overrightarrow{{\mathbf{v}}_{i}}$ and $\overrightarrow{{\mathbf{v}}_{j}}$ are the translational velocities of the centers of mass, and $\overrightarrow{{\mathbf{\omega }}_{i}}$ and ${\overrightarrow{\mathbf{\omega }}}_{j}$ are the angular velocities. Additionally, ${\mathbf{U}}_{i}$ and ${\mathbf{U}}_{j}$ are the interpolated deformation bases at the contact point of each model, and $\overrightarrow{{\mathbf{u}}_{i}}$ and $\overrightarrow{{\mathbf{u}}_{j}}$ are the velocities of the deformation configurations, which represent how fast each model is deforming.
248
+
249
+ ![01963e5e-48c7-7648-a188-966126714fad_4_1126_1889_314_252_0.jpg](images/01963e5e-48c7-7648-a188-966126714fad_4_1126_1889_314_252_0.jpg)
250
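+
+ One row of this constraint matrix could be assembled as follows; this NumPy sketch is our illustration (assuming, consistent with Equation (17), that the world-frame deformation velocity at a contact point is $\mathbf{R}\mathbf{U}\overrightarrow{\mathbf{u}}$ ), not the authors' code:
+
+ ```python
+ import numpy as np
+
+ def contact_jacobian_row(n, r_i, r_j, R_i, R_j, U_i, U_j):
+     """One row of the contact constraint matrix for bodies i and j.
+     The entries multiply the stacked velocities [v_i, w_i, u_i, v_j, w_j, u_j].
+     n: (3,) contact normal from i to j; r_i, r_j: (3,) contact offsets from COMs;
+     R_i, R_j: (3, 3) rotations; U_i, U_j: (3, r) bases interpolated at the contact."""
+     block_i = np.concatenate([-n, -np.cross(r_i, n), -(n @ R_i @ U_i)])
+     block_j = np.concatenate([n, np.cross(r_j, n), n @ R_j @ U_j])
+     # row @ velocities >= 0 enforces separation along the contact normal
+     return np.concatenate([block_i, block_j])
+ ```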
+
251
+ #### 5.2.3 Reduced Self-collision Constraints
252
+
253
+ When the configuration of a reduced model has a negative signed distance, the model is determined to be in self-collision. This can be considered as a violation of the self-collision constraint $f\left( \mathbf{q}\right) \geq 0$ .
254
+
255
+ In order to move the configuration to a contact-free area in C-space, we can make use of the SDF gradient $\nabla f\left( \mathbf{q}\right)$ , which generally provides the direction to the closest point on the self-collision boundary. The goal of self-collision handling when $f\left( \mathbf{q}\right) < 0$ is to eventually take the deformation configuration into a collision-free area in C-space, so the signed distance evaluated at the next time step should be no smaller than the current one
256
+
257
+ $$
258
+ f\left( {\mathbf{q} + \mathbf{u}{\Delta t}}\right) \geq f\left( \mathbf{q}\right) , \tag{20}
259
+ $$
260
+
261
+ where $\mathbf{u}$ denotes the configuration velocity and ${\Delta t}$ is the time step size in the simulation. Expanding the left-hand side using a first order Taylor series gives us
262
+
263
+ $$
264
+ f\left( \mathbf{q}\right) + \nabla f{\left( \mathbf{q}\right) }^{T}\mathbf{u}{\Delta t} \geq f\left( \mathbf{q}\right) , \tag{21}
265
+ $$
266
+
267
+ $$
268
+ \nabla f{\left( \mathbf{q}\right) }^{T}\mathbf{u} \geq 0, \tag{22}
269
+ $$
270
+
271
+ which defines the self-collision constraint at the velocity level. We can then add an additional row to the constraint Jacobian matrix, and place the SDF gradient in the block corresponding to the model within the whole system:
272
+
273
+ $$
274
+ \underset{{\mathbf{J}}_{\mathbf{k}}}{\underbrace{\left\lbrack \begin{array}{lll} {\mathbf{0}}^{T} & {\mathbf{0}}^{T} & \nabla f{\left( \mathbf{q}\right) }^{T} \end{array}\right\rbrack }}\underset{\dot{\widetilde{\mathbf{x}}}}{\underbrace{\left\lbrack \begin{array}{l} \overrightarrow{{\mathbf{v}}_{\mathbf{i}}} \\ \overrightarrow{{\mathbf{\omega }}_{\mathbf{i}}} \\ \overrightarrow{{\mathbf{u}}_{\mathbf{i}}} \end{array}\right\rbrack }} \geq 0. \tag{23}
275
+ $$
276
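+
+ In isolation, this row acts like a velocity filter. The following sketch (our illustration; a full solver would handle this row together with all other contact constraints) projects a violating configuration velocity back onto the feasible half-space:
+
+ ```python
+ import numpy as np
+
+ def filter_configuration_velocity(grad_f, u):
+     """Enforce grad_f^T u >= 0 (Equation 22) for a single self-collision row.
+     grad_f: (r,) SDF gradient at the current configuration; u: (r,) velocity."""
+     g_norm_sq = float(grad_f @ grad_f)
+     violation = float(grad_f @ u)
+     if violation < 0.0 and g_norm_sq > 0.0:
+         u = u - (violation / g_norm_sq) * grad_f  # remove the inward component
+     return u
+ ```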
+
277
+ The self-collision response generated by this constraint row using the SDF gradient takes the configuration toward the self-collision boundary along the approximate shortest path in C-space. This may not be the fastest way to bring the model out of self-intersection, because Euclidean distance in C-space does not correspond to the distance between the extremal points of a self-intersection. However, the self-collision response produced by this method is plausible during simulation.
278
+
279
+ ## 6 RESULTS
280
+
281
+ We perform multiple tests on different models with our two-pass active learning algorithm to show the performance of the learned SDF neural network. First, we run the first pass of the algorithm on neural networks of different sizes to gain a general sense of the C-space complexity of different models. Then, we test the performance of our trained SDF, reporting quantified performance scores and visualizing some of the trained SDFs against their ground truth. Finally, we discuss animation results when applying the SDF to real-time self-collision detection and handling.
282
+
283
+ ### 6.1 Network and Boundary Complexity
284
+
285
+ We perform grid tests on the expressiveness of different neural network sizes, which reveals the complexity of each model's collision boundary and lets us choose the network sizes appropriately. In this set of tests, we only perform sign accuracy tests on networks trained by the first pass learner, because these tests do not require the signed distance or its gradient; all we need is the sign accuracy of the trained network to see how well it fits the collision boundary. The first pass of learning is enough to fit the neural network to the collision boundary and assess its expressiveness.
286
+
287
+ The experiments are conducted on each model we plan to learn, and the tests span mode counts from 3 to 7. The network structures consist of 1 to 3 hidden fully-connected layers, and each hidden layer has the same size, which spans from 10 to 100. The input layer has the same size as the model's number of deformation modes and takes the configuration $\mathbf{q}$ as input. The activation function for all hidden layers is ReLU, since ReLU provides fast learning by reducing the likelihood of vanishing gradients and is commonly used in deep learning.
288
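+
+ A minimal PyTorch sketch of the architectures swept in this test (our helper, assuming a scalar signed-distance output):
+
+ ```python
+ import torch.nn as nn
+
+ def make_sdf_network(num_modes, hidden_size, num_hidden):
+     """Fully connected SDF network: a configuration q of dimension num_modes
+     goes in, a scalar signed distance comes out; ReLU hidden activations."""
+     layers, in_size = [], num_modes
+     for _ in range(num_hidden):
+         layers += [nn.Linear(in_size, hidden_size), nn.ReLU()]
+         in_size = hidden_size
+     layers.append(nn.Linear(in_size, 1))  # raw signed-distance output
+     return nn.Sequential(*layers)
+
+ net = make_sdf_network(num_modes=7, hidden_size=70, num_hidden=2)
+ ```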
+
289
+ We spend 500 iterations uniformly generating ${\mathbf{Q}}_{{\text{pool }}_{i}}$ for exploitation. In some cases where the network training gets stuck in a local optimum and gives extremely low sign prediction accuracy, we run multiple tests and report the best test accuracy. The sign prediction accuracy is plotted in Figure 5. We observe that for the exact same network architecture, the test sign accuracy decreases as the model gains more modes, indicating that the network's expressiveness becomes less capable of representing the collision boundary. This suggests that the C-space boundary becomes more complex with more modes, and in turn requires a more complicated neural network to represent it. However, increasing the number of modes degrades accuracy more for the snake than for the bunny. This is probably because new modes for the snake model enable new collisions between geometry parts that could not reach contact under the old deformation basis. In contrast, the new modes of the bunny model are just wiggles of the geometry, which do not complicate the self-collision boundary much beyond adding a new dimension.
290
+
291
+ Another observation is that the C-space boundary can be represented by a simple neural network. For both models, increasing the number of hidden layers from 2 to 3 while keeping the layer size fixed only slightly improves the sign accuracy, which leads us to believe that 2 hidden layers are sufficient for the bunny, the snake, and other similar models. For a simpler model like the bunny, we can get at least 95% sign prediction accuracy in approximating its 7D collision boundary using a simple neural network with 2 hidden layers of 50 or more nodes each. For the snake model with 7 degrees of freedom, the sign accuracy reaches around 93% with a simple network with 2 hidden layers of 70 or more nodes each. To select the best layer size when changes in layer size do not significantly affect accuracy, we tend to pick the point at the knee of the graph. For learning the models with 7 modes, the architecture picked for the bunny and the snake turns out to be the same: a fully connected network with 2 hidden layers of 70 nodes each.
292
+
293
+ ### 6.2 SDF Quality Measurement
294
+
295
+ We also show the performance of the neural networks trained by our two-pass algorithm. This includes two tests: visualized learning results of 2D SDFs, and quantified performance scores of the learned C-space SDFs of reduced models.
296
+
297
+ We test our approach on multiple models, training our neural network to learn the target SDF. For both passes of each model, we spend 500 iterations on training. The network used in both passes has 2 hidden layers of size 70 with ReLU activations. The same network architecture is used throughout this test so that we can compare results between different models, or the same model with different numbers of modes. During each iteration, we train for 10 epochs with the learning rate set to 0.001.
298
+
299
+ ![01963e5e-48c7-7648-a188-966126714fad_6_151_143_1499_815_0.jpg](images/01963e5e-48c7-7648-a188-966126714fad_6_151_143_1499_815_0.jpg)
300
+
301
+ Figure 5: Evaluations of necessary network complexity for reduced deformation dimension varying between 3 and 7. The bunny model (top) is generally easier to learn than the snake model (bottom), and accuracy for high dimensional reduced spaces can be higher with additional hidden layers (second and third column).
302
+
303
+ ![01963e5e-48c7-7648-a188-966126714fad_6_159_1114_707_608_0.jpg](images/01963e5e-48c7-7648-a188-966126714fad_6_159_1114_707_608_0.jpg)
304
+
305
+ Figure 6: Visualization of two dimensional SDF learning tests show excellent accuracy at the boundary, and less accurate distances in the interior.
306
+
307
+ #### 6.2.1 2D SDF
308
+
309
+ We first present the test results of learning 2D SDFs. In this test, the self-collision boundary is defined by binary images. We visualize the trained SDF and compare it with the ground truth SDF that we compute exactly by finding the closest point on the boundary. Note that in training we still only use the sign labels of samples, and the ground truth distance is only used in visualization.
310
+
311
+ We train and visualize the SDFs defined by an Apple logo, where the decision boundary is generally smooth, and a Twitter logo, where many sharp features exist and the decision boundary is generally harder to learn. The test results are visualized in Figure 6. Qualitative comparison between the trained SDFs and the target SDFs shows that the two-pass algorithm works very well for 2D examples. The trained neural network not only provides a very good approximation of the boundary, capturing the sharp features, but also has a smooth SDF gradient over the domain despite some differences from the target SDF. The success of these examples suggests that our learning approach can detect and learn sharp features in the C-space boundary, and that it is feasible to directly learn the SDF instead of just a boundary representation. This success in 2D also encourages us to extend our tests to higher dimensions.
312
+
313
+ #### 6.2.2 Reduced Model C-space
314
+
315
+ In order to get target signed distances for quality measurement of the predicted signed distance, we spend 5000 iterations on the first pass to generate a denser point cloud. This dense point cloud is then used to compute the target signed distance values. Note that this can be accurate when the model has a small number of deformation bases, but becomes less accurate as the number of deformation degrees of freedom increases.
316
+
317
+ In testing the trained neural network, we uniformly generate ${N}_{\text{test }} = {50000}$ samples, and compute the error ${e}_{\text{sd }}$ between the predicted signed distance and the signed distance queried from the dense point cloud. We also compute the sign prediction accuracy $\eta$ , the gradient magnitude error ${e}_{\text{grad }}$ and the gradient magnitude standard deviation ${\sigma }_{\text{grad }}$ ,
318
+
319
+ $$
320
+ {e}_{\text{grad }} = \frac{1}{{N}_{\text{test }}}\mathop{\sum }\limits_{{k = 1}}^{{N}_{\text{test }}}\left| {\parallel \nabla f\left( {Q}_{\text{test }}^{\left( k\right) }\right) {\parallel }_{2} - 1}\right| , \tag{24}
321
+ $$
322
+
323
+ $$
324
+ {\sigma }_{\text{grad }} = \sqrt{\frac{1}{{N}_{\text{test }}}\mathop{\sum }\limits_{{k = 1}}^{{N}_{\text{test }}}{\left( {\begin{Vmatrix}\nabla f\left( {Q}_{\text{test }}^{\left( k\right) }\right) \end{Vmatrix}}_{2} - {\mu }_{\text{grad }}\right) }^{2}}, \tag{25}
325
+ $$
326
+
327
+ $$
328
+ \text{where}{\mu }_{\text{grad }} = \frac{1}{{N}_{\text{test }}}\mathop{\sum }\limits_{{k = 1}}^{{N}_{\text{test }}}\parallel \nabla f\left( {Q}_{\text{test }}^{\left( k\right) }\right) {\parallel }_{2}\text{.} \tag{26}
329
+ $$
330
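+
+ Given the per-sample gradient magnitudes, these statistics are straightforward to compute; a NumPy sketch (our illustration):
+
+ ```python
+ import numpy as np
+
+ def gradient_metrics(grad_norms):
+     """Equations (24)-(26). grad_norms: (N_test,) array holding
+     ||grad f(Q_test^(k))||_2 for each test sample."""
+     e_grad = np.mean(np.abs(grad_norms - 1.0))                  # Eq. (24)
+     mu_grad = np.mean(grad_norms)                               # Eq. (26)
+     sigma_grad = np.sqrt(np.mean((grad_norms - mu_grad) ** 2))  # Eq. (25)
+     return e_grad, sigma_grad
+ ```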
+
331
+ Table 1: Learned C-space SDF quality measurements, where $\eta$ measures the sign prediction accuracy, ${e}_{sd}$ measures the signed distance error, and ${e}_{\text{grad }}$ and ${\sigma }_{\text{grad }}$ measure the gradient magnitude error and standard deviation, computed from 50000 samples.
332
+
333
+ | model name | #modes | $\eta \left( \% \right)$ | ${e}_{sd}$ | ${e}_{grad}$ | ${\sigma }_{grad}$ |
+ | --- | --- | --- | --- | --- | --- |
+ | snake | 3 | 99.77 | 0.0139 | 0.0866 | 0.1192 |
+ | snake | 4 | 99.62 | 0.0206 | 0.1039 | 0.1343 |
+ | snake | 5 | 98.51 | 0.0439 | 0.1995 | 0.2594 |
+ | snake | 6 | 96.11 | 0.0834 | 0.2140 | 0.2728 |
+ | snake | 7 | 95.20 | 0.1151 | 0.2542 | 0.3226 |
+ | bunny | 3 | 99.93 | 0.0122 | 0.0840 | 0.1170 |
+ | bunny | 4 | 99.50 | 0.0312 | 0.1911 | 0.2505 |
+ | bunny | 5 | 98.69 | 0.0436 | 0.2305 | 0.2949 |
+ | bunny | 6 | 96.95 | 0.0746 | 0.2583 | 0.3249 |
+ | bunny | 7 | 94.87 | 0.1036 | 0.3205 | 0.3960 |
+ | bracelet | 3 | 99.77 | 0.0189 | 0.1191 | 0.1595 |
+ | bracelet | 4 | 99.15 | 0.0344 | 0.1754 | 0.2263 |
+ | bracelet | 5 | 98.12 | 0.0507 | 0.1886 | 0.2403 |
+ | bracelet | 6 | 96.39 | 0.0738 | 0.2025 | 0.2538 |
+ | bracelet | 7 | 94.68 | 0.1099 | 0.2263 | 0.2805 |
334
+
335
+ The sign prediction accuracy $\eta$ measures the ability of the trained neural network to detect self-collision of the reduced model. The gradient magnitude error ${e}_{grad}$ and standard deviation ${\sigma }_{grad}$ denote how well the learned SDF represents Euclidean distance in the C-space. They are included to rule out cases where the decision boundary fits well but the gradient varies dramatically across the C-space. Smaller values of ${e}_{\text{grad }}$ and ${\sigma }_{\text{grad }}$ mean that the gradient of the learned SDF is smooth and thus can provide good directions when queried to resolve self-collision.
336
+
337
+ We test our two-pass learning algorithm on the bunny, snake, and bracelet models. The bracelet and bunny have relatively simple C-space boundaries: the bracelet can only generate collisions between the two ends of its crack, and the bunny only collides between the two ears and the back. However, the snake model has a more complex boundary, and a more complex SDF as well, because it has many adjacent coils that can collide.
338
+
339
+ The test results are shown in Table 1. The test sign accuracy for models with 3 or 4 modes can reach more than 99%. The sign accuracy drops as we increase the number of modes, reaching around 95% at 7 modes. This is not ideal considering the test samples are uniformly sampled in C-space, so many of them are far from the ground truth boundary, where it is easy to correctly predict the sign. Figure 7 visualizes 2D slices of the decision boundary of trained networks for the bracelet model, providing some intuition for the sign accuracy in Table 1. For a bracelet with 3 deformation modes, the 99.77% sign accuracy indicates that the predicted collision boundary aligns very well with the ground truth. When the number of modes increases to 7, the 94.68% accuracy boundary is less ideal, and becomes a coarse approximation of the ground truth.
340
+
341
+ The signed distance error for models with 3 or 4 modes is around 0.01, which is good considering we are testing in the range of a $\left\lbrack {-1,1}\right\rbrack$ hyper-cube. It grows to approximately 0.1 when the number of modes reaches 7.
342
+
343
+ ![01963e5e-48c7-7648-a188-966126714fad_7_929_147_713_239_0.jpg](images/01963e5e-48c7-7648-a188-966126714fad_7_929_147_713_239_0.jpg)
344
+
345
+ Figure 7: Visualization of a 2D slice (first two modes) of higher dimensional C-space boundaries for the bracelet model, with all other coordinates set to zero. The NN prediction struggles to fit the boundary in higher dimensional spaces.
346
+
347
+ ### 6.3 Summary
348
+
349
+ In our experiments, we perform sign accuracy tests on neural networks with different architectures and sizes. Through these tests, we gain intuition about the self-collision boundary complexity of different models, and we can reasonably choose the number and size of the hidden layers. We also measure the quality of the SDF trained with the two-pass learning method, and apply the learned SDF in real-time reduced model simulation for self-collision detection and response.
350
+
351
+ Our method works very well for learning the SDF in low dimensional configuration spaces. The 2D examples show that the two-pass learning algorithm not only successfully learns a representation of the boundary, but also provides a smooth SDF within the 2D configuration space. The simulation examples of the spring and bracelet with 3 modes also show that the trained neural networks provide good self-collision approximation and generate reasonable self-collision responses. However, when the target SDF has more dimensions, our learning method has a difficult time learning a good approximation of the boundary as well as the SDF. This can be seen in the supplementary video for the 7 mode snake and the 10 mode bunny.
352
+
353
+ ## 7 CONCLUSION
354
+
355
+ In this work, we propose the concepts of the self-collision boundary and the C-space SDF. We also propose and implement a two-pass active learning algorithm that approximates the C-space SDF with a neural network trained on samples with only sign labels. The main idea is to use exploration and exploitation criteria to pick the most informative samples so that convergence is improved. We also use an eikonal loss term and approximated signed distances to ensure that the neural network is not only skilled at determining the boundary, but also at representing the distance to the boundary. Moreover, we propose a method that uses the trained SDF in self-collision detection and sets up a self-contact constraint matrix with its gradient.
356
+
357
+ ### 7.1 Advantages and Limitations
358
+
359
+ Our learning approach uses active learning to select samples for training, which improves sample efficiency and reduces the number of SCD queries. We have shown that our method does a great job of learning the collision boundary of models with a small number of modes, and can reconstruct the signed distance field very well in two-dimensional spaces. The learning is based purely on samples with only sign labels, which lets us bypass the dilemma of having neither ground truth distances nor a stable way to obtain approximated signed distances. Furthermore, the cost of evaluating the learned SDF to detect self-collision is constant, and making use of the gradient for self-contact handling is compatible with standard constraint solving methods.
360
+
361
+ Our work also has important limitations. One limitation is that our learning method currently only works well when the model has a small number of modes. Although we reduce the adverse impact of increasing dimension by selecting informative samples, the curse of dimensionality still persists and makes it hard to learn the SDF in high dimensional spaces. For simple models that require few modes to deform, our algorithm works nicely and produces great results, but the learner still struggles to learn the collision boundary in high dimensional spaces, which is the case when a model needs a large number of modes to produce plausible deformation. Another limitation is that our way of moving out of self-contact along the SDF gradient cannot take frictional contact into account. Although this method generates plausible self-collision solutions, the relative velocity between intersecting parts is not necessarily along the normal direction of the contact plane. By only evaluating signed distances in the configuration space, our method cannot provide information about contact normals, and thus we cannot set up constraints for friction caused by self-collision.
362
+
363
+ ### 7.2 Future Work
364
+
365
+ There are many possible ways to overcome the existing limitations. One possible improvement is a new sampling strategy to improve reliability and accuracy when learning high dimensional subspaces. Currently the exploitation samples are selected within a data pool ${\mathbf{Q}}_{{\text{pool }}_{i}}$ that is generated at each iteration. The picked exploitation samples are close to the boundary $f\left( \mathbf{q}\right) = 0$ only if the data pool has samples close to the boundary, which cannot be easily achieved when the C-space dimension is high. Instead, we can look for a method that generates ${\mathbf{Q}}_{{\text{pool }}_{i}}$ whose samples are mostly close to the boundary. One idea is to make use of the network gradient and use Newton's method to find roots of $f\left( \mathbf{q}\right) = 0$ , as sketched below.
366
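+
+ One way the suggested Newton iteration could look (our sketch; f and grad_f stand for the trained network and its input gradient):
+
+ ```python
+ import numpy as np
+
+ def project_to_boundary(f, grad_f, q, iters=10, tol=1e-4):
+     """Drive f(q) toward 0 so that pool samples concentrate near the boundary.
+     Each step is the first-order (Newton) update along the network gradient."""
+     for _ in range(iters):
+         fq = f(q)
+         if abs(fq) < tol:
+             break
+         g = grad_f(q)
+         q = q - fq * g / (g @ g + 1e-12)  # q <- q - f(q) grad f / ||grad f||^2
+     return q
+ ```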
+
367
+ We can also extend our work to other applications. We can treat inter-object collision detection as a self-collision problem by taking all the objects together as a whole and learning that C-space. Given that our learning method is good at reconstructing 3D space, we may also be able to apply it to mesh reconstruction.
368
+
369
+ ## REFERENCES
370
+
371
+ [1] M. Atzmon and Y. Lipman. SAL: Sign Agnostic Learning of Shapes from Raw Data. CoRR, abs/1911.10414, 2019.
372
+
373
+ [2] M. Atzmon and Y. Lipman. SAL++: Sign Agnostic Learning with Derivatives. CoRR, abs/2006.05400, 2020.
374
+
375
+ [3] J. Barbič. Real-Time Reduced Large-Deformation Models and Distributed Contact for Computer Graphics and Haptics. PhD thesis, USA, 2007. AAI3279452.
376
+
377
+ [4] J. Barbič and D. L. James. Real-Time Subspace Integration for St. Venant-Kirchhoff Deformable Models. ACM Trans. Graph., 24(3):982-990, July 2005. doi: 10.1145/1073204.1073300
378
+
379
+ [5] J. Barbič and D. L. James. Subspace Self-Collision Culling. ACM Trans. Graph., 29(4), July 2010. doi: 10.1145/1778765.1778818
380
+
381
+ [6] J. N. Chadwick, S. S. An, and D. L. James. Harmonic Shells: A Practical Nonlinear Sound Model for near-Rigid Thin Shells. ACM Trans. Graph., 28(5):1-10, Dec. 2009. doi: 10.1145/1618452.1618465
382
+
383
+ [7] K. Erleben. Velocity-Based Shock Propagation for Multibody Dynamics Animation. ACM Trans. Graph., 26(2):12, June 2007. doi: 10.1145/1243980.1243986
384
+
385
+ [8] L. Fulton, V. Modi, D. Duvenaud, D. I. W. Levin, and A. Jacobson. Latent-space Dynamics for Reduced Deformable Simulation. Computer Graphics Forum, 38, 2019.
386
+
387
+ [9] K. K. Hauser, C. Shen, and J. F. O'Brien. Interactive Deformation Using Modal Analysis with Constraints. In Graphics Interface, pp. 247-256. CIPS, Canadian Human-Computer Communication Society, June 2003.
388
+
389
+ [10] D. Holden, B. C. Duong, S. Datta, and D. Nowrouzezahrai. Subspace Neural Physics: Fast Data-Driven Interactive Simulation. In Proceedings of the 18th Annual ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '19. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3309486.3340245
390
+
391
+ [11] D. L. James, J. Barbič, and D. K. Pai. Precomputed Acoustic Transfer: Output-Sensitive, Accurate Sound Generation for Geometrically Complex Vibration Sources. ACM Trans. Graph., 25(3):987-995, July 2006. doi: 10.1145/1141911.1141983
394
+
395
+ [12] D. L. James and D. K. Pai. DyRT: Dynamic Response Textures for Real Time Deformation Simulation with Graphics Hardware. ACM Trans. Graph., 21(3):582-585, July 2002. doi: 10.1145/566654.566621
396
+
397
+ [13] D. L. James and D. K. Pai. BD-Tree: Output-Sensitive Collision Detection for Reduced Deformable Models. ACM Trans. Graph., 23(3):393-398, Aug. 2004. doi: 10.1145/1015706.1015735
398
+
399
+ [14] Y. Jiang and K. Liu. Data-Driven Approach to Simulating Realistic Human Joint Constraints. In IEEE International Conference on Robotics and Automation (ICRA), pp. 1098-1103, May 2018. doi: 10.1109/ICRA.2018.8461010
400
+
401
+ [15] T. Larsson and T. Akenine-Möller. Collision Detection for Continuously Deforming Bodies. In Eurographics, 2001.
402
+
403
+ [16] T. Larsson and T. Akenine-Möller. Efficient Collision Detection for Models Deformed by Morphing. The Visual Computer, 19:164-174, May 2003. doi: 10.1007/s00371-002-0190-y
404
+
405
+ [17] J. Pan, X. Zhang, and D. Manocha. Efficient Penetration Depth Approximation Using Active Learning. ACM Trans. Graph., 32(6), Nov. 2013. doi: 10.1145/2508363.2508385
406
+
407
+ [18] A. Pentland and J. Williams. Good Vibrations: Modal Dynamics for Graphics and Animation. SIGGRAPH Comput. Graph., 23(3):207-214, July 1989. doi: 10.1145/74334.74355
408
+
409
+ [19] M. Teschner, S. Kimmerle, B. Heidelberger, G. Zachmann, L. Raghupathi, A. Fuhrmann, M.-P. Cani, F. Faure, N. Magnenat-Thalmann, W. Strasser, and P. Volino. Collision Detection for Deformable Objects. In Eurographics 2004 - STARs. Eurographics Association, 2004. doi: 10.2312/egst.20041028
410
+
411
+ [20] A. Treuille, A. Lewis, and Z. Popović. Model Reduction for Real-Time Fluids. ACM Trans. Graph., 25(3):826-834, July 2006. doi: 10.1145/1141911.1141962
412
+
413
+ [21] P. Volino and N. Thalmann. Efficient self-collision detection on smoothly discretized surface animations using geometrical shape regularity. Computer Graphics Forum, 13, 1994.
414
+
415
+ [22] M. Wicke, M. Stanton, and A. Treuille. Modular Bases for Fluid Dynamics. ACM Trans. Graph., 28(3), July 2009. doi: 10.1145/1531326.1531345
416
+
417
+ [23] R. S. Zesch, B. R. Witemeyer, Z. Xiong, D. I. W. Levin, and S. Sueda. Neural Collision Detection for Deformable Objects. CoRR, abs/2202.02309, 2022.
418
+
419
+ [24] C. Zheng and D. L. James. Energy-Based Self-Collision Culling for Arbitrary Mesh Deformations. ACM Trans. Graph., 31(4), July 2012. doi: 10.1145/2185520.2185594
papers/Graphics_Interface/Graphics_Interface 2022/Graphics_Interface 2022 Conference/r3G_ReFNpM9/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,416 @@
1
+ § ACTIVE LEARNING NEURAL C-SPACE SIGNED DISTANCE FIELDS FOR REDUCED DEFORMABLE SELF-COLLISION
2
+
3
+ Anonymous Review
4
+
5
6
+
7
+ Figure 1: Examples from our supplementary video, showing self-collision for the bracelet, spring, bunny, and snake models. Self-collision is identified using a learned neural network SDF, and collision response uses the SDF gradient computed via backpropagation within a constraint solver. Neural SDFs work well for low-dimensional reduced spaces (e.g., the bracelet and spring with dimension 3), while models that need more dimensions to provide good reduced deformation models (e.g., the bunny with dimension 10, and the snake with dimension 7) have much less accurate learned collision manifolds.
8
+
9
+ § ABSTRACT
10
+
11
+ We present a novel method to preprocess a reduced model, training a neural network to approximate the reduced model's signed distance field using an active learning technique. The trained neural network is used to evaluate the self-collision state as well as to handle self-collisions during real time simulation. Our offline learning approach consists of two passes. The first pass generates positive and negative point clouds, which are used in the second pass to learn the signed distance field of the reduced subspace. Unlike common fully supervised learning approaches, we make use of a semi-supervised active learning technique to generate more informative samples for training, improving the convergence speed. We also propose methods to use the learned SDF in real time self-collision detection and to assemble its gradient into the constraint Jacobian matrix to resolve self-collisions.
12
+
13
+ Index Terms: reduced model; self-collision; configuration space; signed distance field; active learning; contact constraint
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ In computer animation, simulating the physics of models usually requires solving large linear systems whose size matches the dimension of the model's generalized coordinates, and this can be costly if the model consists of a huge number of vertices. Model reduction is a technique that approximates the simulation of the full dynamic system with a simplified one by projecting the high-dimensional system onto a low-dimensional subspace. With far fewer variables in the reduced system, the equations can be solved much faster while maintaining high fidelity to the original system. Reduced model deformation is an application of model reduction in computer animation to improve efficiency, and is very relevant to applications such as games and training simulations, where real-time computation is required.
18
+
19
+ Reduced model deformation can simplify solving the dynamic system, but the complexity of self-collision detection for the reduced model is still tied to the complexity of the model mesh, since we must test each pair of triangles in the mesh. Although a number of algorithms and data structures have been proposed to speed up self-collision detection by culling unnecessary tests, like the BD-Tree and various culling strategies, some triangle-triangle intersection tests are still inevitable.
20
+
21
+ In this paper, we build on reduced model deformation, evaluating the model's self-collision state with respect to reduced coordinates in the configuration space (C-space). We learn a function to approximate the C-space signed distance function of a deformable model. The idea is inspired by the ellipsoid bound used by Barbič and James [5] to conservatively rule out self-collision, but extends that implicit function to a more complex function that represents the actual collision boundary. We show that a single, inexpensive function can replace the collision hierarchy, while also providing the gradients necessary to compute a collision response.
22
+
23
+ However, using traditional supervised learning methods in this case poses two challenges. First, the actual C-space self-collision boundary is unknown. Given a random deformation configuration, the only information we can get is the sign (i.e., whether the model is in self-collision or not), so there are no ground truth signed distance values to be used in training. Second, as more modes are used to deform a model, the number of dimensions of the C-space increases, and the number of uniform samples needed to learn the C-space boundary increases exponentially. In order to overcome these difficulties, we use approximated signed distances and eikonal loss terms to help the neural network function learn the C-space signed distance field. We also use active learning as our learning strategy for efficient sampling.
24
+
25
+ Active learning is a kind of semi-supervised learning where the learner automatically chooses the most informative data to label for training, which can improve the convergence of training. With active learning, the picked training data tends to distribute around the ground truth self-collision boundary, so we harvest the point cloud based on this observation and use that to approximate the signed distance value of a given configuration.
26
+
27
+ The contribution of this paper is to explore a new way to preprocess reduced deformable models, using active learning to learn the self-collision signed distance field (SDF) in C-space. We also show how to use the learned SDF function in real-time self-collision detection and self-intersection handling during physics simulations.
28
+
29
+ § 2 RELATED WORK
30
+
31
+ Our work is based on reduced deformable models. We learn the reduced C-space SDF of reduced models, and use the trained neural network for self-collision detection and self-collision handling. The initial model reduction applications $\left\lbrack {9,{12},{18}}\right\rbrack$ in computer animation are based on linear systems. Since the linear elastic internal forces are computed using the rest shape stiffness matrix, the deformation produces noticeable artifacts when the model undergoes large deformation. In order to relieve the distortion produced by large deformation, Barbič and James [3,4] investigate St. Venant-Kirchhoff deformable models, whose elastic forces are cubic polynomials in the reduced coordinates, and provide methods to evaluate the elastic forces in real time. In addition to solid deformable models, model reduction is also used in acoustic simulations $\left\lbrack {6,{11}}\right\rbrack$ and fluid simulations $\left\lbrack {{20},{22}}\right\rbrack$ .
32
+
33
+ Self-collision detection (SCD) has been widely studied in computer animation. Bounding volume hierarchies (BVHs) are the most commonly used data structure both in inter-object collision detection and SCD [19]. For cloth surfaces, Volino and Thalmann [21] use an improved hierarchical representation, taking advantage of geometrical regularity to skip SCD between large surface regions that are close, yet impossible to bring into contact. Approaches for improving the speed of SCD have mainly focused on two techniques: improving BVH updates, and culling unnecessary BV node tests. For improving BVH updates, Larsson and Akenine-Möller [15] propose a hybrid update method using a combination of an incremental bottom-up update and a selective top-down update. They later blend associated sets of reference bounding volumes to enable lazy BVH updates [16]. James and Pai [13] propose the bounded deformation tree (BD-Tree), which makes use of information about the deformation modes and updates bounding spheres conservatively. For culling unnecessary tests, subspace self-collision culling [5] precomputes a conservative certificate in C-space that is used to rule out some tests. Energy based self-collision culling certificates [24] have also been proposed, exploiting the idea that a mesh cannot self-collide unless it deforms enough.
34
+
35
+ Machine learning (ML) methods build models based on sample data. A trained model can serve as a fast approximation of the studied problem. Practically, machine learning has been used abundantly in the fields of robotics, geometry processing, and computer animation, and works well as a black box algorithm. Jiang and Liu [14] use a fully connected neural network to fit human motion with limits such as self-contact, and they use network gradients to define constraint directions, which is an inspiration for our self-collision response. Neural networks are also used in geometry reconstruction. Atzmon and Lipman [1] propose a sign agnostic learning (SAL) method, in which an unsigned loss function is used to learn the signed distance field defined by the geometry. SAL is later improved into SALD [2], where derivatives of the loss term are incorporated into the regression loss, which is the inspiration for the eikonal term in our loss function. Similar to our work on self-collision, Zesch et al. [23] also propose neural collision detection for reduced deformable models, with a focus on collision between objects.
36
+
37
+ Machine learning has also been used for learning the physics of animation. Fulton et al. [8] use autoencoder neural networks to produce reduced model dynamics. Holden et al. [10] propose a data-driven reduced model physics simulation method, which does include collision response, and satisfies the memory and performance constraints imposed by modern interactive applications. One machine learning technique that interests us is active learning. Active learning automatically chooses samples to label and thus can improve the convergence rate compared with regular supervised learning. Pan et al. [17] propose an active learning approach to learn the C-space boundary between rigid bodies and use the boundary to approximate global penetration depth. In our work, labeling a sample requires performing self-collision detection on the model, which can be expensive depending on the complexity of the model, so active learning is a key part of our work since it greatly reduces the number of samples that require labeling.
38
+
39
+ § 3 REDUCED MODEL C-SPACE SIGNED DISTANCE
40
+
41
+ The deformation of a reduced model is represented by a reduced coordinate (or deformation configuration) $\mathbf{q} \in {\mathbb{R}}^{r}$ , where $r$ is the number of deformation modes. The full-coordinate vertex displacement is then reconstructed by $\Delta \mathbf{x} = \mathbf{{Uq}}$ , where each column of the matrix $\mathbf{U} \in {\mathbb{R}}^{{3n} \times r}$ is a deformation mode. The $r$ -D space where the configuration $\mathbf{q}$ lives is the configuration space (C-space).
42
+
43
44
+
45
+ Figure 2: Bracelet model C-space with 2 modes, showing a configuration in the free space (left), a contact configuration on the boundary (middle), and a configuration involving interpenetration (right).
46
+
47
+ The signed distances in C-space are determined by a self-collision boundary ${T}_{\text{ bound }}$ , which is the set of configurations at which the reduced model deforms to just having self-contact. The boundary divides the C-space into the collision space ${T}_{\text{ collision }}$ , where configurations generate self-intersections, and the free space ${T}_{\text{ free }}$ , where the model is free from self-collision.
48
+
49
+ Sign: The sign represents whether the model is self-collision free. We use $t\left( \mathbf{q}\right) \in \{ 1, - 1\}$ to denote the target sign of a given configuration $\mathbf{q}$ . If $\mathbf{q}$ puts the model in self-intersection or just touching itself, then $t\left( \mathbf{q}\right) = - 1$ ; otherwise $t\left( \mathbf{q}\right) = 1$ . All the positive signs form the free space ${T}_{\text{ free }} = \{ \mathbf{q} \mid t\left( \mathbf{q}\right) = 1\}$ and all the negative signs form the collision space ${T}_{\text{ collision }} = \{ \mathbf{q} \mid t\left( \mathbf{q}\right) = - 1\}$ .
50
+
51
+ Distance: The distance is defined as the Euclidean distance from the configuration to the closest point in ${T}_{\text{ bound }}$ , i.e., $d\left( \mathbf{q}\right) =$ $\mathop{\min }\limits_{{{\mathbf{q}}^{ * } \in {T}_{\text{ bound }}}}{\begin{Vmatrix}\mathbf{q} - {\mathbf{q}}^{ * }\end{Vmatrix}}_{2}.$
52
+
53
+ Figure 2 shows example configurations in the 2D C-space of a bracelet model, as well as the geometry under each deformation. When $\mathbf{q}$ causes the model to just touch itself, $\mathbf{q}$ is on the self-collision boundary (Figure 2, middle), which is highlighted as a red line. The colored signed distance field (SDF) shows the closest Euclidean distance to the boundary. The self-collision boundary and the SDF are what we want to learn with neural networks. Note that a reduced model may have more than two modes, in which case the target collision boundary and SDF live in an $r$ -D C-space, where $r$ is the number of modes.
54
+
55
+ The dashed line in Figure 2 shows an equal-energy level set, on which the configurations produce the same elastic energy. Intuitively, the model does not deform enough to produce self-contact unless it reaches a certain amount of elastic energy, so it is reasonable to sample within an equal-energy bound during training. The equal-energy level set here is a sphere since the deformation modes are simply obtained from linear modal analysis (LMA), but it can become irregular if the deformation modes are obtained from modal derivatives or manual selection. For generality, we set the training and sampling domain to be an $r$ -dimensional hypercube with each dimension limited to $\left\lbrack {-1,1}\right\rbrack$ . In order to make sure the configurations during simulation are safely contained in the sampling domain, we simulate the model to collect the maximum absolute value of each configuration entry and scale the deformation bases before the learning process.
56
+
57
58
+
59
+ Figure 3: Exploitation samples near the boundary help improve local accuracy, while exploration samples help identify missing parts of the boundary.
60
+
61
+ § 4 ACTIVE LEARNING C-SPACE SDF
62
+
63
+ We use a two-pass active learning algorithm to train a neural network to represent a C-space SDF with the sample labels only consisting of signs. In the first pass, we use active learning to learn the collision boundary, but our main goal is to cache the growing training set as a point cloud. In the second pass, we train the neural network to learn the SDF using the cached point cloud in the first pass.
64
+
65
+ § 4.1 TWO-PASS ACTIVE LEARNING OVERVIEW
66
+
67
+ Active learning (AL) is a semi-supervised machine learning approach. It has been used by Pan et al. [17] to learn the inter-object rigid body C-space with great success. During the training process, an active learner continuously chooses samples from an unlabeled data pool, and the selected data are labeled to train the machine learning model.
68
+
69
+ In active learning, exploitation and exploration strategies are used to choose samples. Figure 3 shows an example of exploitation samples and exploration samples. Exploitation is good at selecting data that are close to the current decision boundary and helps efficiently refine that boundary, but it can also cause serious sample bias and consequently poor performance. Exploration is good at shaping the overall structure of the decision boundary and at selecting samples in undetected regions, but on its own it is sample-inefficient, since uniform samples rarely land near the boundary.
70
+
71
+ For learning the C-space SDF of a reduced model, we perform active learning twice, each pass with a different purpose. The first pass generates a point cloud in which all samples are divided according to their signs. In the second pass, this point cloud is used to compute approximated signed distances by looking for the closest points to the samples. In the following discussion, we use subscript $i$ to denote the learning iteration, superscript $\left( k\right)$ to denote a sample index inside a batch, and $f$ to represent the neural network function.
72
+
73
+ Both passes go through a fixed number of iterations to train a neural network to fit the collision boundary. For the first pass, the loss function to optimize consists of a sign loss term ${L}_{\text{ sign }}$ and an eikonal loss term ${L}_{\text{ eik }}$ . At each iteration of the first pass, we first generate adaptive training samples ${\mathbf{Q}}_{i}$ using both the exploration and exploitation strategies, and query for their signs ${\mathbf{T}}_{i}$ by performing SCD. Then we add the generated samples $\left( {{\mathbf{Q}}_{i},{\mathbf{T}}_{i}}\right)$ to the adaptive training batch $\left( {{\mathbf{Q}}_{\text{ adapt }},{\mathbf{T}}_{\text{ adapt }}}\right)$ , which is maintained and grows at each iteration. The adaptive training batch corresponds to the sign loss term ${L}_{\text{ sign }}$ that measures the sign prediction error. In addition, eikonal samples ${\mathbf{Q}}_{{\text{ eik }}_{i}}$ are randomly generated at each iteration, corresponding to the eikonal loss term ${L}_{\text{ eik }}$ that constrains the gradient magnitude. The neural network $f$ is then trained for a user-defined number of epochs $n$ . Note that we use incremental training, which means the neural network begins step $i$ with the trained network from step $i - 1$ . After the first pass of training is finished, the adaptive training batch is divided according to the signs of the samples to form the cached point cloud. A structural sketch of this loop follows.
74
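+
+ (Our paraphrase of the loop, with every problem-specific piece injected as a callable; not released code.)
+
+ ```python
+ def first_pass(net, scd_query, sample_exploration, sample_exploitation,
+                sample_uniform_eikonal, train_step, num_iters):
+     """scd_query labels configurations with signs in {+1, -1} via SCD;
+     train_step runs n epochs on the given batches, reusing net's weights."""
+     Q_adapt, T_adapt = [], []
+     for i in range(num_iters):
+         Q_i = sample_exploration(net) + sample_exploitation(net)  # Secs. 4.2-4.3
+         T_i = [scd_query(q) for q in Q_i]         # expensive sign labels
+         Q_adapt += Q_i
+         T_adapt += T_i                            # adaptive batch keeps growing
+         Q_eik = sample_uniform_eikonal()          # fresh eikonal samples (Sec. 4.4)
+         train_step(net, Q_adapt, T_adapt, Q_eik)  # incremental training
+     Q_pos = [q for q, t in zip(Q_adapt, T_adapt) if t == 1]
+     Q_neg = [q for q, t in zip(Q_adapt, T_adapt) if t == -1]
+     return Q_pos, Q_neg                           # cached point cloud for pass two
+ ```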
+
75
+ In addition to ${L}_{\text{ sign }}$ and ${L}_{\text{ eik }}$ , the loss function for the second pass has a signed distance loss ${L}_{\text{ sd }}$ that trains the neural network to learn the signed distances. The adaptive training batch and the eikonal samples are generated in the same way as in the first pass. Additionally, for the second pass, we uniformly generate samples ${\mathbf{Q}}_{{\mathrm{{sd}}}_{i}}$ and query the input point cloud for their approximated signed distances ${\mathbf{D}}_{i}$ . Then the neural network is trained with the adaptive training batch $\left( {{\mathbf{Q}}_{\text{ adapt }},{\mathbf{T}}_{\text{ adapt }}}\right)$ , the eikonal samples ${\mathbf{Q}}_{{\text{ eik }}_{i}}$ , and the signed distance batch $\left( {{\mathbf{Q}}_{{\mathrm{{sd}}}_{i}},{\mathbf{D}}_{i}}\right)$ .
76
+
77
+ § 4.2 EXPLORATION SAMPLES
78
+
79
+ Exploration samples serve to detect regions, bubbles, and sharp features that are unrecognized by the current network. In our approach, we uniformly generate ${N}_{\text{ explore }}$ random configurations ${\mathbf{Q}}_{{\text{ rand }}_{i}}$ , and use the current network to predict their signs. If a predicted sign is wrong, we add the sample to the training batch. So at each step, the exploration sample batch is
80
+
81
+ $$
82
+ {\mathbf{Q}}_{{\text{ exploration }}_{i}} = \left\{ {{\mathbf{Q}}_{{\text{ rand }}_{i}}^{\left( k\right) } \mid f\left( {\mathbf{Q}}_{{\text{ rand }}_{i}}^{\left( k\right) }\right) t\left( {\mathbf{Q}}_{{\text{ rand }}_{i}}^{\left( k\right) }\right) < 0}\right\} , \tag{1}
83
+ $$
84
+
85
+ $$
86
+ \text{ where }k \in \left\lbrack {1,{N}_{\text{ explore }}}\right\rbrack \text{ . }
87
+ $$
88
+
89
+ Batch size ${N}_{\text{ explore }}$ can be relatively small (we choose ${N}_{\text{ explore }} = {500}$ in our tests) since the sign query can be expensive depending on the complexity of the model, and in practice the accumulation of exploration samples does help in detecting bubbles and sharp features. A minimal sketch of this step follows.
90
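+
+ (Our illustration; f and t are assumed to be vectorized callables mapping an (N, dim) batch to (N,).)
+
+ ```python
+ import numpy as np
+
+ def pick_exploration(f, t, n_explore=500, dim=7):
+     """Equation (1): keep only the uniform samples whose sign the
+     current network predicts incorrectly."""
+     Q_rand = np.random.uniform(-1.0, 1.0, size=(n_explore, dim))
+     wrong = f(Q_rand) * t(Q_rand) < 0
+     return Q_rand[wrong]
+ ```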
+
91
+ § 4.3 EXPLOITATION SAMPLES
92
+
93
+ Exploitation samples help refine the network's sign decision boundary and push the prediction boundary $\{ \mathbf{q} \mid f\left( \mathbf{q}\right) = 0\}$ closer to the ground truth ${T}_{\text{ bound }}$ . As the learning progresses, the exploitation samples tend to concentrate around ${T}_{\text{ bound }}$ . In our approach, we first uniformly generate a random configuration pool ${\mathbf{Q}}_{{\text{ pool }}_{i}}$ with ${N}_{\text{ pool }}$ samples. Then we find candidate samples ${\mathbf{Q}}_{{\text{ cand }}_{i}}$ that are closest to the current prediction boundary. The top ${N}_{\text{ cand }}$ samples with the highest scores, computed by
94
+
95
+ $$
96
+ \text{ score } = \frac{1}{1 + \left| {f\left( {\mathbf{Q}}_{{\text{ pool }}_{i}}^{\left( k\right) }\right) }\right| }, \tag{2}
97
+ $$
98
+
99
+ are picked as candidates ${\mathbf{Q}}_{{\text{ cand }}_{i}}$ . From ${\mathbf{Q}}_{{\text{ cand }}_{i}}$ , we pick the samples with wrong sign predictions and additionally ${N}_{\text{ extra }}$ samples with the highest scores to form exploitation samples ${\mathbf{Q}}_{\text{ exploit }}$ .
100
+
101
+ In practice, ${N}_{\text{ pool }}$ can be large (we choose ${N}_{\text{ pool }} = {50000}$ ) since evaluating the network output is cheap. Batch sizes ${N}_{\text{ cand }}$ and ${N}_{\text{ extra }}$ are relatively small; we set ${N}_{\text{ cand }} = {500}$ and ${N}_{\text{ extra }} = {80}$ in our tests. The selection step is sketched below.
102
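+
+ (Our illustration of the selection; f and t are assumed vectorized callables, and dim is the number of modes.)
+
+ ```python
+ import numpy as np
+
+ def pick_exploitation(f, t, dim, n_pool=50000, n_cand=500, n_extra=80):
+     """Score the pool by Equation (2), keep the top candidates, then select
+     the misclassified ones plus the highest-scoring extras."""
+     Q_pool = np.random.uniform(-1.0, 1.0, size=(n_pool, dim))
+     scores = 1.0 / (1.0 + np.abs(f(Q_pool)))     # high near the boundary f = 0
+     cand = Q_pool[np.argsort(-scores)[:n_cand]]  # sorted by descending score
+     wrong = np.sign(f(cand)) * t(cand) < 0       # wrong sign predictions
+     extra = cand[~wrong][:n_extra]               # highest-score correct ones
+     return np.concatenate([cand[wrong], extra])
+ ```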
+
103
+ § 4.4 EIKONAL SAMPLES
104
+
105
+ The eikonal samples correspond to an eikonal loss term ${L}_{\text{ eik }}$ in the loss function that imposes constraints to the gradient magnitude of the neural network function. The eikonal samples are uniformly drawn in the domain, and are drawn at each iteration to assist the learning in producing a function with unit gradient, so that the neural network not only learns the self-collision boundary, but also the Euclidean distance to the boundary.
106
+
107
+ Since the neural network aims at learning the SDF in the whole C-space, the gradient magnitude constraint should be uniformly applied everywhere within the sample domain. However as the number of modes increases, the number of uniform eikonal samples needed to abundantly spread across the sample domain increases exponentially, and the time cost of computing the eikonal loss increases as well. Therefore, we borrow the idea from stochastic gradient descent, and randomly draw ${N}_{\text{ eik }}$ eikonal samples at each step (we set ${N}_{\text{ eik }} = {5000}$ in our tests). At each step, the eikonal loss is computed from the new eikonal samples, such that the gradient magnitude stochastically converges to the desired range.
108
+
109
+ The eikonal loss uses the magnitude of the network gradient with respect to the input, and applies a penalty when the gradient magnitude falls outside a user-set range around 1. The eikonal term is computed by
110
+
111
+ $$
112
+ {L}_{\text{ eik }} = \frac{1}{{N}_{\text{ eik }}}\mathop{\sum }\limits_{{k = 1}}^{{N}_{\text{ eik }}}h\left( \left| {\nabla f\left( {\mathbf{Q}}_{{\text{ eik }}_{i}}^{\left( k\right) }\right) }\right| \right) , \tag{3}
113
+ $$
114
+
115
+ $$
116
+ \text{ where }h\left( x\right) = \left\{ \begin{array}{ll} x + \frac{1}{x} & 0 < x < 1 \\ 2 & 1 \leq x \leq 1 + \xi . \\ x - \xi + \frac{1}{x - \xi } & x > 1 + \xi \end{array}\right. \tag{4}
117
+ $$
118
+
119
+ In the eikonal loss, we use a piecewise loss function $h\left( x\right)$ that causes an infinitely large penalty when the gradient magnitude approaches 0 or $+ \infty$ , and less penalty when the gradient magnitude approaches a biased region near 1. In practice, we set $\xi = {0.2}$ . The biased region is set slightly above 1 because we want the trained neural network to be more decisive around the decision boundary, which means the trained SDF around the decision boundary should have a larger gradient magnitude rather than a smaller one. This matters because when the trained neural network is used in self-collision response, the queried configuration is always off the boundary due to time discretization, and we want to make sure the gradient is always (at least generally) pointing towards the closest point on the boundary. If the trained SDF generally has a small gradient magnitude, the gradient direction slightly off the boundary is likely unreliable and may point in a random direction.
120
+
121
+ One of the challenges in implementing the eikonal loss is that second order derivatives of the network function are needed to optimize it. According to the chain rule, the gradient used to update the weights of the neural network includes two parts. First we need to compute the derivative of the eikonal loss function. Then we need to compute the gradient of the eikonal loss argument $\nabla f\left( {\mathbf{Q}}_{{\text{ eik }}_{i}}\right)$ with respect to the network weights, which is a second order derivative. Some neural network tools do not support back propagation for computing second order derivatives. In our implementation, the network gradient $\nabla f\left( {\mathbf{Q}}_{{\text{ eik }}_{i}}\right)$ is computed using finite differences, so that the second order derivatives can be treated like first order derivatives and computed by back propagation.
122
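+
+ A PyTorch sketch of this finite-difference construction (our illustration, assuming net maps an (N, r) batch to (N, 1); the clamps guard the unselected torch.where branches):
+
+ ```python
+ import torch
+
+ def eikonal_loss(net, Q_eik, xi=0.2, eps=1e-3):
+     """Equations (3)-(4) with the input gradient approximated by central
+     finite differences, so only first order backpropagation is needed."""
+     cols = []
+     for d in range(Q_eik.shape[1]):
+         offset = torch.zeros_like(Q_eik)
+         offset[:, d] = eps
+         cols.append((net(Q_eik + offset) - net(Q_eik - offset)) / (2.0 * eps))
+     g = torch.linalg.norm(torch.cat(cols, dim=1), dim=1)  # |grad f| per sample
+     g1 = torch.clamp(g, min=1e-6)       # safe x for the x < 1 branch
+     g2 = torch.clamp(g - xi, min=1e-6)  # safe x - xi for the x > 1 + xi branch
+     h = torch.where(g < 1.0, g1 + 1.0 / g1,
+         torch.where(g <= 1.0 + xi, torch.full_like(g, 2.0), g2 + 1.0 / g2))
+     return h.mean()
+ ```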
+
123
+ § 4.5 FIRST PASS LEARNING (POINT-CLOUD GENERATION PASS)
124
+
125
+ The first pass treats the input training data as a binary classification problem, but the main purpose is to collect the point cloud, which is used to generate approximated signed distance data in the second pass.
126
+
127
+ Point Cloud Collection: The point cloud is meant to be used to generate approximated signed distances, so its samples need to spread densely around the whole self-collision boundary, not missing any bubbles or sharp features. The adaptive training samples picked by exploitation and exploration naturally meet this demand, so we use the adaptive training batch from the first pass as the point cloud. We divide the adaptive training batch $\left( {{\mathbf{Q}}_{\text{ adapt }},{\mathbf{T}}_{\text{ adapt }}}\right)$ into a positive cloud and a negative cloud according to the signs of the samples; then ${\mathbf{Q}}_{\text{ pos }}$ and ${\mathbf{Q}}_{\text{ neg }}$ are
128
+
129
+ $$
130
+ {\mathbf{Q}}_{\text{ pos }} = \left\{ {{\mathbf{Q}}_{\text{ adapt }}^{\left( k\right) } \mid {\mathbf{T}}_{\text{ adapt }}^{\left( k\right) } = 1,k \in \left\lbrack {1,{N}_{\text{ adapt }}}\right\rbrack }\right\} , \tag{5}
131
+ $$
132
+
133
+ $$
134
+ {\mathbf{Q}}_{\text{ neg }} = \left\{ {{\mathbf{Q}}_{\text{ adapt }}^{\left( k\right) } \mid {\mathbf{T}}_{\text{ adapt }}^{\left( k\right) } = - 1,k \in \left\lbrack {1,{N}_{\text{ adapt }}}\right\rbrack }\right\} , \tag{6}
135
+ $$
136
+
137
+ where ${N}_{\text{ adapt }}$ is the number of samples in the adaptive training batch.
138
+
139
+ First Pass Loss Function: The loss $L$ of the first pass consists of sign loss and eikonal loss, which is computed by
140
+
141
+ $$
142
+ L = {L}_{\text{ sign }} + \lambda {L}_{\text{ eik }}. \tag{7}
143
+ $$
144
+
145
+ The eikonal loss term is evaluated according to the gradient magnitude of the network, which is discussed in Section 4.4.
146
+
147
+ The sign loss penalizes mismatches between the predicted signs and the target ones; a sample contributes to this loss term only when its signs do not match. The sign loss is computed by
148
+
149
+ $$
150
+ {L}_{\text{ sign }} = \mathop{\sum }\limits_{{k = 1}}^{{N}_{\text{ adapt }}}W\left( {\mathbf{Q}}_{\text{ adapt }}^{\left( k\right) }\right) \max \left\{ {-{T}_{\text{ adapt }}^{\left( k\right) }\left( {{2\sigma }\left( {f\left( {\mathbf{Q}}_{\text{ adapt }}^{\left( k\right) }\right) }\right) - 1}\right) ,0}\right\} , \tag{8}
+ $$
+
154
+
155
+ $$
156
+ \text{ where }W\left( x\right) = \frac{{e}^{-\left| {f\left( x\right) }\right| }}{\mathop{\sum }\limits_{{j = 1}}^{{N}_{\text{ adapt }}}{e}^{-\left| {f\left( {\mathbf{Q}}_{\text{ adapt }}^{\left( j\right) }\right) }\right| }}\text{ . } \tag{9}
157
+ $$
158
+
159
+ Here we apply the weight function $W\left( x\right)$ to place more importance on samples closer to the decision boundary $f = 0$ . The number of samples in the adaptive training set is ${N}_{\text{ adapt }}$ , and $\sigma \left( x\right)$ is the sigmoid function. A sketch of this loss follows.
160
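+
+ (Our PyTorch illustration; T_adapt holds signs in {+1, -1}, and treating the boundary-focused weights of Equation (9) as constants is our assumption.)
+
+ ```python
+ import torch
+
+ def sign_loss(net, Q_adapt, T_adapt):
+     """Equations (8)-(9): weighted penalty on mismatched sign predictions."""
+     f = net(Q_adapt).squeeze(1)
+     W = torch.softmax(-f.abs().detach(), dim=0)  # Eq. (9): e^{-|f|}, normalized
+     mismatch = torch.relu(-T_adapt * (2.0 * torch.sigmoid(f) - 1.0))  # Eq. (8)
+     return (W * mismatch).sum()
+ ```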
+
161
+ § 4.6 SECOND PASS LEARNING
162
+
163
+ The second pass aims to learn the SDF to the C-space boundary. In addition to the first pass machinery, we maintain a signed distance training batch $\left( {{\mathbf{Q}}_{\text{ sd }},\mathbf{D}}\right)$ , which keeps growing as training proceeds. At each iteration, we uniformly generate configurations ${\mathbf{Q}}_{{\mathbf{{sd}}}_{i}}$ for signed distance samples, query the model for their signs, and query the input point cloud for their signed distances ${\mathbf{D}}_{i}$ . Then we use the accumulated signed distance batch to help guide the neural network in learning the SDF to the boundary.
164
+
165
+ Signed Distance Query: For each configuration ${\mathbf{Q}}_{{\mathrm{{sd}}}_{i}}^{\left( k\right) }$ from the signed distance samples, we approximate the closest distances ${\mathbf{D}}_{i}^{\left( k\right) }$ by finding the closest samples in the point cloud of the opposite sign,
166
+
167
+ $$
168
+ {\mathbf{D}}_{i}^{\left( k\right) } = \left\{ {\begin{matrix} \mathop{\min }\limits_{{\mathbf{q} \in {\mathbf{Q}}_{\text{ neg }}}}{\begin{Vmatrix}\mathbf{q} - {\mathbf{Q}}_{{\mathrm{{sd}}}_{i}}^{\left( k\right) }\end{Vmatrix}}_{2}, & \text{ if }\;T\left( {\mathbf{Q}}_{{\mathrm{{sd}}}_{i}}^{\left( k\right) }\right) = 1 \\ - \mathop{\min }\limits_{{\mathbf{q} \in {\mathbf{Q}}_{\text{ pos }}}}{\begin{Vmatrix}\mathbf{q} - {\mathbf{Q}}_{{\mathrm{{sd}}}_{i}}^{\left( k\right) }\end{Vmatrix}}_{2}, & \text{ if }\;T\left( {\mathbf{Q}}_{{\mathrm{{sd}}}_{i}}^{\left( k\right) }\right) = - 1 \end{matrix}.}\right. \tag{10}
169
+ $$
170
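+
+ In practice this nearest-neighbor query can be served by a spatial index over the cached point cloud; a SciPy sketch (our illustration):
+
+ ```python
+ import numpy as np
+ from scipy.spatial import cKDTree
+
+ def make_signed_distance_query(Q_pos, Q_neg):
+     """Equation (10): free configurations measure distance to the nearest
+     colliding sample; colliding ones get minus the distance to the nearest
+     free sample."""
+     pos_tree = cKDTree(np.asarray(Q_pos))
+     neg_tree = cKDTree(np.asarray(Q_neg))
+     def query(q, sign):
+         if sign == 1:
+             return neg_tree.query(q)[0]
+         return -pos_tree.query(q)[0]
+     return query
+ ```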
+
171
+ Second Pass Loss Function: The loss $L$ of the second pass is composed of three terms: sign loss, eikonal loss and signed distance loss, i.e.,
172
+
173
+ $$
174
+ L = {L}_{\text{ sign }} + {\lambda }_{1}{L}_{\text{ eik }} + {\lambda }_{2}{L}_{\text{ sd }}. \tag{11}
175
+ $$
176
+
177
+ The sign loss and eikonal loss are the same as in the first pass. The signed distance loss ${L}_{\mathrm{{sd}}}$ penalizes the difference between the predicted distances and the reference distances, and it takes the accumulated signed distance batch $\left( {{\mathbf{Q}}_{\mathrm{{sd}}},\mathbf{D}}\right)$ as input. Since the signed distances obtained from the point cloud are only approximations, we apply a weight to each signed distance sample that measures our confidence in it. Suppose there are ${N}_{\mathrm{{sd}}}$ samples in the signed distance batch; the weighted signed distance loss is then
+
+ $$
+ {L}_{\mathrm{{sd}}} = \mathop{\sum }\limits_{{k = 1}}^{{N}_{\mathrm{{sd}}}}W\left( {w\left( {\mathbf{D}}^{\left( k\right) }\right) }\right) {\left( f\left( {\mathbf{Q}}_{\mathrm{{sd}}}^{\left( k\right) }\right) - {\mathbf{D}}^{\left( k\right) }\right) }^{2}, \tag{12}
+ $$
+
+ $$
+ \text{ where }W\left( x\right) = \frac{{e}^{-x}}{\mathop{\sum }\limits_{{j = 1}}^{{N}_{\mathrm{{sd}}}}{e}^{-w\left( {\mathbf{D}}^{\left( j\right) }\right) }}\text{ . } \tag{13}
+ $$
+
+ The function $w\left( x\right)$ assigns a trusting weight that reflects our confidence in the input signed distance.
+
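+ A corresponding PyTorch sketch of Eqs. (12)-(13) as printed, assuming the trusting weights from Eq. (14) below are precomputed; the names are again illustrative:
+
+ ```python
+ import torch
+
+ def signed_distance_loss(model, q_sd, d_ref, w_trust):
+     """Trust-weighted signed distance loss of Eqs. (12)-(13).
+
+     q_sd:    (N, r) accumulated signed distance batch.
+     d_ref:   (N,) approximate signed distances from the point cloud.
+     w_trust: (N,) trusting weights w(D) computed by Eq. (14).
+     """
+     f = model(q_sd).squeeze(-1)
+     W = torch.softmax(-w_trust, dim=0)   # Eq. (13), as printed
+     return (W * (f - d_ref) ** 2).sum()  # Eq. (12)
+ ```
+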
+ Trusting Weight: Since the point cloud is an approximate and discretized representation of ${T}_{\text{ bound }}$ , the distance computed from the point cloud is an approximation of the ground truth signed distance. Thus, we assign a trusting weight $w\left( x\right)$ to each signed distance sample. The trusting weight is based on the intuition that when the queried configuration is far from the collision boundary, and the approximated distance is large compared to the granularity of the point cloud representation, the error caused by the distance approximation can be ignored. In this sense, we map the signed distance to a piecewise weight function
+
+ $$
+ w\left( x\right) = \left\{ \begin{array}{ll} 1 & x \leq - {\eta }_{2}, \\ \frac{1}{2}\left( {\cos \frac{x + {\eta }_{2}}{{\eta }_{2} - {\eta }_{1}}\pi + 1}\right) & - {\eta }_{2} < x \leq - {\eta }_{1}, \\ 0 & - {\eta }_{1} < x < {\eta }_{1}, \\ \frac{1}{2}\left( {\cos \frac{x - {\eta }_{2}}{{\eta }_{2} - {\eta }_{1}}\pi + 1}\right) & {\eta }_{1} \leq x < {\eta }_{2}, \\ 1 & x \geq {\eta }_{2}. \end{array}\right. \tag{14}
+ $$
+
+ Note that ${\eta }_{1}$ serves as the distance threshold at which the learner starts to trust a distance, and ${\eta }_{2}$ is the threshold beyond which a distance is fully trusted. Given that our sampling domain is a hypercube of volume ${2}^{r}$ spanning $\left\lbrack {-1,1}\right\rbrack$ in each dimension, ${\eta }_{1}$ and ${\eta }_{2}$ are set as
+
+ $$
+ {\eta }_{1} = \frac{2}{{\left( {N}_{\text{ pos }} + {N}_{\text{ neg }}\right) }^{\frac{1}{r}}}, \tag{15}
+ $$
+
+ $$
+ {\eta }_{2} = \alpha {\eta }_{1}, \tag{16}
+ $$
+
+ where ${N}_{\text{ pos }}$ and ${N}_{\text{ neg }}$ are the numbers of samples in the positive and negative point clouds, and $\alpha$ is a user defined hyperparameter (we set $\alpha = {10}$ in our tests). The function is symmetric with respect to $x = 0$ , since the weight is based purely on the unsigned distance. It produces weight 0 when the unsigned distance is smaller than ${\eta }_{1}$ and weight 1 when the unsigned distance is larger than ${\eta }_{2}$ , indicating that we fully trust the provided distance, and a cosine function interpolates the weights in between.
+
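+ A minimal NumPy sketch of Eqs. (14)-(16); `n_pos`, `n_neg`, and `r` stand for the point cloud sizes and the reduced dimension:
+
+ ```python
+ import numpy as np
+
+ def trust_thresholds(n_pos, n_neg, r, alpha=10.0):
+     """Thresholds of Eqs. (15)-(16) for the [-1, 1]^r sampling domain."""
+     eta1 = 2.0 / (n_pos + n_neg) ** (1.0 / r)
+     return eta1, alpha * eta1
+
+ def trusting_weight(x, eta1, eta2):
+     """Piecewise trusting weight of Eq. (14), symmetric in |x|."""
+     a = abs(x)
+     if a <= eta1:
+         return 0.0                    # too close to the boundary: do not trust
+     if a >= eta2:
+         return 1.0                    # far from the boundary: fully trust
+     # cosine ramp from 0 at eta1 to 1 at eta2
+     return 0.5 * (np.cos((a - eta2) / (eta2 - eta1) * np.pi) + 1.0)
+ ```
+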
+ § 5 REAL-TIME SIMULATION
+
+ Our contribution to real-time simulation consists of real-time SCD and collision response. For self-collision detection, the trained neural network $f\left( \mathbf{q}\right)$ replaces algorithmic methods: we evaluate the SDF instead of inspecting the geometry of the model. The collision response includes collision handling between pairs of reduced models and the self-collision response of each model, where the network gradient $\nabla f\left( \mathbf{q}\right)$ is used. The collision response forces are generated by first forming constraint Jacobian matrices that define the contact constraints, and then solving for the Lagrange multipliers that represent the response forces.
+
+ § 5.1 REAL-TIME SELF-COLLISION DETECTION
+
+ During the real-time simulation, we need to detect whether the model is in self-collision at each time step. Instead of resorting to traditional geometric intersection tests, we evaluate the learned SDF function $f\left( \mathbf{q}\right)$ . Although the prediction boundary $f\left( \mathbf{q}\right) = 0$ does not completely align with the ground truth, it still works well for self-collision detection, because the slight misalignment of the boundary is not easily visible in the form of geometric self-intersection.
+
+ At each time step, we plug the current deformation configuration $\mathbf{q}$ into the evaluation function. If $f\left( \mathbf{q}\right) > 0$ , the model is considered self-collision free, regardless of the actual shape of the model. If $f\left( \mathbf{q}\right) < 0$ , the model is considered to be in self-collision, and we compute the configuration velocity update caused by the self-contact constraint, which is discussed in the next section.
+
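+ In code, the detection step reduces to a single forward pass; a tiny sketch (the `model` wrapper is hypothetical):
+
+ ```python
+ import torch
+
+ def in_self_collision(model, q):
+     """Constant-time SCD: the model self-collides iff f(q) < 0 (Sec. 5.1)."""
+     with torch.no_grad():
+         return model(q.unsqueeze(0)).item() < 0.0
+ ```
+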
+ § 5.2 REAL-TIME COLLISION RESPONSE
+
+ Our real-time collision response is based on the contact constraint used by Erleben [7] for solving rigid body contacts. Since we simulate reduced models in a way that mixes rigid body motion and elastic deformation, we can easily extend the rigid body contact to reduced elastic body contact by adding extra entries to the contact constraint matrix, thereby incorporating deformation.
+
+ Figure 4: Diagram showing contact between objects $i$ and $j$ , with contact point positions shown at left, and velocities at the contact point shown at right.
+
+ We include the gradient of the learned function in the constraint matrix to form self-collision contact constraints. We can thus solve for a configuration velocity update such that, in the next time step, the configuration velocity does not take the configuration deeper into the collision space.
+
+ § 5.2.1 MIX OF RIGID AND ELASTIC MOTION
+
+ Our simulation of a reduced model consists of the rigid motion (translation and rotation) of the center of mass (COM) and the reduced elastic deformation of the model. Initially, the origin of the COM frame is set to the center of mass of the model at its rest shape. For each vertex, we compute the deformed position in the COM frame and then transform it into the world frame to get the world position
+
+ $$
+ {\mathbf{x}}_{\mathrm{w}} = \mathbf{R}\left( {{\mathbf{x}}_{0} + \mathbf{{Uq}}}\right) + \mathbf{p}, \tag{17}
+ $$
+
+ where $\mathbf{R}$ and $\mathbf{p}$ are the rotation matrix and world position of the center of mass, and ${\mathbf{x}}_{0}$ is the initial position of the vertex in the COM frame. The rotation matrix $\mathbf{R}$ is the matrix form of the axis-angle rotation representation $\mathbf{\theta } \in {\mathbb{R}}^{3}$ , obtained by Rodrigues’ formula.
+
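+ A NumPy sketch of Eq. (17) with an explicit Rodrigues’ formula; the array shapes are our assumptions:
+
+ ```python
+ import numpy as np
+
+ def rodrigues(theta):
+     """Rotation matrix from an axis-angle vector via Rodrigues' formula."""
+     angle = np.linalg.norm(theta)
+     if angle < 1e-12:
+         return np.eye(3)
+     k = theta / angle
+     K = np.array([[0.0, -k[2], k[1]],
+                   [k[2], 0.0, -k[0]],
+                   [-k[1], k[0], 0.0]])
+     return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
+
+ def world_positions(x0, U, q, theta, p):
+     """Eq. (17): x_w = R (x_0 + U q) + p for all vertices at once.
+
+     x0: (n, 3) rest positions in the COM frame; U: (3n, r) deformation basis.
+     """
+     x_def = x0 + (U @ q).reshape(-1, 3)   # deformed positions in the COM frame
+     return x_def @ rodrigues(theta).T + p
+ ```
+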
+ In the following discussion, we use a generalized coordinate $\widetilde{\mathbf{x}}$ to represent the rigid motion and reduced deformation of the model,
+
+ $$
+ \widetilde{\mathbf{x}} = \left\lbrack \begin{matrix} \mathbf{p} \\ \mathbf{\theta } \\ \mathbf{q} \end{matrix}\right\rbrack \in {\mathbb{R}}^{6 + r}. \tag{18}
+ $$
+
+ The approximation made here is that we disregard the change in rotational inertia due to deformation. Since our focus is on collision detection and response, this approximation enables a simple extension from the rigid body contact constraint to the reduced model contact constraint. In practice, reduced models still behave naturally in our simulations under this approximation.
+
+ § 5.2.2 REDUCED MODEL CONTACT CONSTRAINTS
+
+ In order to resolve the contact between two objects, the relative velocity of the contact point must be zero or cause separation in the normal direction, and this inequality is expressed as a row of the contact constraint matrix. Figure 4 shows an example contact, where $\mathbf{n}$ is the normal of the tangent plane pointing from $i$ to $j$ , and ${\mathbf{r}}_{i},{\mathbf{r}}_{j}$ are the positions of the contact point relative to the COMs of the two objects. The velocity-level constraint can be written as
+
+ $$
+ {\mathbf{n}}^{T}\left( {\overrightarrow{{\mathbf{v}}_{j}} + \overrightarrow{{\mathbf{\omega }}_{j}} \times {\mathbf{r}}_{j} + {\mathbf{R}}_{j}{\mathbf{U}}_{j}\overrightarrow{{\mathbf{u}}_{j}} - \overrightarrow{{\mathbf{v}}_{i}} - \overrightarrow{{\mathbf{\omega }}_{i}} \times {\mathbf{r}}_{i} - {\mathbf{R}}_{i}{\mathbf{U}}_{i}\overrightarrow{{\mathbf{u}}_{i}}}\right) \geq 0, \tag{19}
+ $$
+
+ where $\overrightarrow{{\mathbf{v}}_{i}}$ and $\overrightarrow{{\mathbf{v}}_{j}}$ are the translational velocities of the centers of mass, and $\overrightarrow{{\mathbf{\omega }}_{i}}$ and $\overrightarrow{{\mathbf{\omega }}_{j}}$ are the angular velocities. Additionally, ${\mathbf{U}}_{i}$ and ${\mathbf{U}}_{j}$ are the interpolated deformation bases at the contact point of each model, and $\overrightarrow{{\mathbf{u}}_{i}}$ and $\overrightarrow{{\mathbf{u}}_{j}}$ are the velocities of the deformation configurations, which represent how fast each model is deforming.
+
+ § 5.2.3 REDUCED SELF-COLLISION CONSTRAINTS
+
+ When the configuration of a reduced model has a negative signed distance, the model is determined to be in self-collision. This can be considered a violation of the self-collision constraint $f\left( \mathbf{q}\right) \geq 0$ .
+
+ In order to move the configuration to a contact-free area of C-space, we can make use of the SDF gradient $\nabla f\left( \mathbf{q}\right)$ , which generally provides the direction to the closest point on the self-collision boundary. The goal of self-collision handling when $f\left( \mathbf{q}\right) < 0$ is to eventually bring the deformation configuration back into a collision-free area of C-space, so the signed distance evaluated at the next time step should be no smaller than the current one,
+
+ $$
+ f\left( {\mathbf{q} + \mathbf{u}{\Delta t}}\right) \geq f\left( \mathbf{q}\right) , \tag{20}
+ $$
+
+ where $\mathbf{u}$ denotes the configuration velocity and ${\Delta t}$ is the time step size in the simulation. Expanding the left-hand side using a first order Taylor series gives us
+
+ $$
+ f\left( \mathbf{q}\right) + \nabla f{\left( \mathbf{q}\right) }^{T}\mathbf{u}{\Delta t} \geq f\left( \mathbf{q}\right) , \tag{21}
+ $$
+
+ $$
+ \nabla f{\left( \mathbf{q}\right) }^{T}\mathbf{u} \geq 0, \tag{22}
+ $$
+
+ which defines the self-collision constraint at the velocity level. We can then add an additional row to the constraint Jacobian matrix, placing the SDF gradient in the block corresponding to the model within the whole system:
+
+ $$
+ \underset{{\mathbf{J}}_{\mathbf{k}}}{\underbrace{\left\lbrack \begin{array}{lll} {\mathbf{0}}^{T} & {\mathbf{0}}^{T} & \nabla f{\left( \mathbf{q}\right) }^{T} \end{array}\right\rbrack }}\underset{\dot{\widetilde{\mathbf{x}}}}{\underbrace{\left\lbrack \begin{array}{l} \overrightarrow{{\mathbf{v}}_{\mathbf{i}}} \\ \overrightarrow{{\mathbf{\omega }}_{\mathbf{i}}} \\ \overrightarrow{{\mathbf{u}}_{\mathbf{i}}} \end{array}\right\rbrack }} \geq \mathbf{0}. \tag{23}
+ $$
+
+ The self-collision response generated by this constraint row takes the configuration to the self-collision boundary along an approximately shortest C-space path. This may not be the fastest way to bring the model out of self-intersection, because Euclidean distance in C-space does not correspond to the separation distance of the extremal points in a self-intersection. However, the self-collision response produced by this method is plausible during simulation.
+
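+ A sketch of assembling this constraint row, with the gradient obtained by automatic differentiation; this is a minimal illustration, not the paper's solver code:
+
+ ```python
+ import torch
+
+ def sdf_gradient(model, q):
+     """Gradient of the learned SDF at configuration q, via autograd."""
+     q = q.detach().clone().requires_grad_(True)
+     f = model(q.unsqueeze(0)).squeeze()
+     (grad,) = torch.autograd.grad(f, q)
+     return grad
+
+ def self_collision_row(model, q):
+     """Constraint Jacobian row of Eq. (23): zeros for the 6 rigid DOFs of the
+     model, followed by the SDF gradient for its r deformation DOFs."""
+     return torch.cat([torch.zeros(6), sdf_gradient(model, q)])
+ ```
+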
+ § 6 RESULTS
+
+ We perform multiple tests on different models with our two-pass active learning algorithm to show the performance of the learned SDF neural network. First, we run the first-pass algorithm with neural networks of different sizes to gauge the C-space complexity of different models. Then, we test the performance of our trained SDF, including quantitative performance scores and visual comparisons between trained SDFs and their ground truth. Finally, we discuss animation results obtained by applying the SDF to real-time self-collision detection and handling.
+
+ § 6.1 NETWORK AND BOUNDARY COMPLEXITY
+
+ We perform grid tests on the expressiveness of different neural network sizes, showing the complexity of each model's collision boundary, so that we can properly choose the sizes of the neural networks. In this set of tests, we only perform sign accuracy tests on networks trained in the first pass, because here we need neither the signed distance nor its gradient: the sign accuracy of the trained network is enough to see how well it fits the collision boundary, and first pass learning suffices to fit the network to the boundary and assess its expressiveness.
+
+ The experiments are conducted on each model we plan to learn, and the tests span mode counts from 3 to 7. The network structures consist of 1 to 3 hidden fully-connected layers, and each of the hidden layers has the same size, which spans from 10 to 100. The input layer has the same size as the model's number of deformation modes and takes the configuration $\mathbf{q}$ as input. The activation functions for all the hidden layers are ReLU functions, because ReLU activations provide fast learning, reduce the likelihood of vanishing gradients, and are commonly used in deep learning.
+
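+ The networks in this grid test can be built in a few lines; a PyTorch sketch (the constructor name is ours):
+
+ ```python
+ import torch.nn as nn
+
+ def make_sdf_network(n_modes, hidden=70, n_hidden_layers=2):
+     """Fully connected SDF network: the input size equals the number of
+     deformation modes, followed by ReLU hidden layers and a scalar output."""
+     dims = [n_modes] + [hidden] * n_hidden_layers
+     layers = []
+     for d_in, d_out in zip(dims[:-1], dims[1:]):
+         layers += [nn.Linear(d_in, d_out), nn.ReLU()]
+     layers.append(nn.Linear(dims[-1], 1))   # signed distance f(q)
+     return nn.Sequential(*layers)
+ ```
+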
+ We spend 500 iterations uniformly generating ${\mathbf{Q}}_{{\text{ pool }}_{i}}$ for exploitation. In some cases where the network training becomes stuck in a local optimum and gives extremely low sign prediction accuracy, we run multiple tests and report the best test accuracy. The sign prediction accuracy is plotted in Figure 5. We can observe that, for the exact same network architecture, the test sign accuracy decreases when the model has more modes, indicating that the network is less able to represent the collision boundary. This suggests that the C-space boundary becomes more complex as modes are added, and in turn requires a more complicated neural network to represent. However, increasing the number of modes degrades accuracy more for the snake than for the bunny. This is probably because new modes for the snake model enable new collisions between geometry parts that could not come into contact under the old deformation basis. On the other hand, the new modes of the bunny model are just wiggles of the geometry, which do not complicate the self-collision boundary much beyond adding a new dimension.
+
+ Another observation is that the C-space boundary can be represented by a simple neural network. For both models, increasing the number of hidden layers from 2 to 3 while keeping the layer size fixed only slightly improves the sign accuracy, which leads us to believe that 2 hidden layers are sufficient for the bunny, the snake, and other similar models. For a simpler model like the bunny, we can get at least ${95}\%$ sign prediction accuracy in approximating its 7D collision boundary using a simple neural network with 2 hidden layers and 50 or more nodes per layer. For the snake model with 7 degrees of freedom, the sign accuracy reaches around 93% with a network with 2 hidden layers and 70 or more nodes per layer. To select the best layer size when changes in layer size do not significantly affect accuracy, we tend to pick the point at the knee of the graph. For learning the models with 7 modes, the architecture picked for the bunny and the snake is therefore the same: a fully connected network with 2 hidden layers of 70 nodes each.
+
+ § 6.2 SDF QUALITY MEASUREMENT
+
+ We also show the performance of the neural networks trained by our two-pass algorithm. This includes two tests: visualized learning results of 2D SDFs, and quantified performance scores of the learned C-space SDFs of reduced models.
+
+ We test our approach on multiple models, and train our neural network to learn the target SDF. For both passes of each model, we spend 500 iterations in training. The network used in both passes has 2 hidden layers, each with 70 nodes and ReLU activations. The same network architecture is used throughout this test so that we can compare results between different models, or the same model with a different number of modes. During each iteration, we train 10 epochs with the learning rate set to 0.001.
+
+ Figure 5: Evaluations of necessary network complexity for reduced deformation dimension varying between 3 and 7. The bunny model (top) is generally easier to learn than the snake model (bottom), and accuracy for high dimensional reduced spaces can be higher with additional hidden layers (second and third column).
+
+ Figure 6: Visualization of two dimensional SDF learning tests shows excellent accuracy at the boundary, and less accurate distances in the interior.
+
+ § 6.2.1 2D SDF
+
+ We first present the test results of learning 2D SDFs. In this test, the self-collision boundary is defined by binary images. We visualize the trained SDF and compare it with the ground truth SDF, which we compute exactly by finding the closest point on the boundary. Note that in training we still use only the sign labels of samples; the ground truth distance is used only for visualization.
+
+ We train and visualize the SDF defined by an Apple logo, whose decision boundary is generally smooth, and a Twitter logo, which has many sharp features and a decision boundary that is generally harder to learn. The test results are visualized in Figure 6. Qualitative comparison between the trained SDFs and the target SDFs shows that the two-pass algorithm works very well for 2D examples. The trained neural network not only provides a very good approximation of the boundary, capturing the sharp features, but also has smooth SDF gradients over the domain, despite some differences from the target SDF. The success of these examples suggests that our learning approach is able to detect and learn the sharp features of a C-space boundary, and that it is feasible to directly learn the SDF instead of just a boundary representation. This success in 2D also encourages us to extend our tests to higher dimensions.
+
+ § 6.2.2 REDUCED MODEL C-SPACE
+
+ In order to obtain target signed distances for quality measurement of the predicted signed distance, we spend 5000 iterations on the first pass to generate a denser point cloud. The dense point cloud is then used to compute the target signed distance values. Note that this can be accurate when the model has a small number of deformation bases, but less accurate as the number of deformation degrees of freedom increases.
+
+ In testing the trained neural network, we uniformly generate ${N}_{\text{ test }} = {50000}$ samples, and compute the error ${e}_{\text{ sd }}$ between the predicted signed distance and the signed distance queried on the dense point cloud. We also compute the sign prediction accuracy $\eta$ , the gradient magnitude error ${e}_{\text{ grad }}$ , and the gradient magnitude standard deviation ${\sigma }_{\text{ grad }}$ ,
+
+ $$
+ {e}_{\text{ grad }} = \frac{1}{{N}_{\text{ test }}}\mathop{\sum }\limits_{{k = 1}}^{{N}_{\text{ test }}}\left| {\parallel \nabla f\left( {Q}_{\text{ test }}^{\left( k\right) }\right) {\parallel }_{2} - 1}\right| , \tag{24}
+ $$
+
+ $$
+ {\sigma }_{\text{ grad }} = \sqrt{\frac{1}{{N}_{\text{ test }}}\mathop{\sum }\limits_{{k = 1}}^{{N}_{\text{ test }}}{\left( {\begin{Vmatrix}\nabla f\left( {Q}_{\text{ test }}^{\left( k\right) }\right) \end{Vmatrix}}_{2} - {\mu }_{\text{ grad }}\right) }^{2}}, \tag{25}
+ $$
+
+ $$
+ \text{ where }{\mu }_{\text{ grad }} = \frac{1}{{N}_{\text{ test }}}\mathop{\sum }\limits_{{k = 1}}^{{N}_{\text{ test }}}\parallel \nabla f\left( {Q}_{\text{ test }}^{\left( k\right) }\right) {\parallel }_{2}\text{ . } \tag{26}
+ $$
+
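+ A sketch of these gradient statistics using autograd, with one backward pass over the whole test batch; names are illustrative:
+
+ ```python
+ import torch
+
+ def gradient_metrics(model, q_test):
+     """Gradient magnitude statistics of Eqs. (24)-(26)."""
+     q = q_test.detach().clone().requires_grad_(True)
+     f_sum = model(q).sum()                    # samples are independent, so one
+     (grads,) = torch.autograd.grad(f_sum, q)  # backward yields every gradient
+     norms = grads.norm(dim=1)                 # ||grad f||_2 per sample
+     e_grad = (norms - 1.0).abs().mean()       # Eq. (24)
+     sigma_grad = norms.std(unbiased=False)    # Eqs. (25)-(26)
+     return e_grad.item(), sigma_grad.item()
+ ```
+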
+ Table 1: Learned C-space SDF quality measurements, where $\eta$ measures sign prediction accuracy, ${e}_{sd}$ measures signed distance error, and ${e}_{grad}$ and ${\sigma }_{grad}$ measure the gradient magnitude error and standard deviation, all computed from 50000 samples.
+
+ | model name | #modes | $\eta \left( \% \right)$ | ${e}_{sd}$ | ${e}_{grad}$ | ${\sigma }_{grad}$ |
+ | --- | --- | --- | --- | --- | --- |
+ | snake | 3 | 99.77 | 0.0139 | 0.0866 | 0.1192 |
+ | | 4 | 99.62 | 0.0206 | 0.1039 | 0.1343 |
+ | | 5 | 98.51 | 0.0439 | 0.1995 | 0.2594 |
+ | | 6 | 96.11 | 0.0834 | 0.2140 | 0.2728 |
+ | | 7 | 95.20 | 0.1151 | 0.2542 | 0.3226 |
+ | bunny | 3 | 99.93 | 0.0122 | 0.0840 | 0.1170 |
+ | | 4 | 99.50 | 0.0312 | 0.1911 | 0.2505 |
+ | | 5 | 98.69 | 0.0436 | 0.2305 | 0.2949 |
+ | | 6 | 96.95 | 0.0746 | 0.2583 | 0.3249 |
+ | | 7 | 94.87 | 0.1036 | 0.3205 | 0.3960 |
+ | bracelet | 3 | 99.77 | 0.0189 | 0.1191 | 0.1595 |
+ | | 4 | 99.15 | 0.0344 | 0.1754 | 0.2263 |
+ | | 5 | 98.12 | 0.0507 | 0.1886 | 0.2403 |
+ | | 6 | 96.39 | 0.0738 | 0.2025 | 0.2538 |
+ | | 7 | 94.68 | 0.1099 | 0.2263 | 0.2805 |
+
+ The sign prediction accuracy $\eta$ measures the ability of the trained neural network to detect self-collision of the reduced model. The gradient magnitude error ${e}_{grad}$ and gradient magnitude standard deviation ${\sigma }_{grad}$ indicate how well the learned SDF represents Euclidean distance in the C-space. They are included to rule out cases where the decision boundary fits well but the gradient changes dramatically across the C-space. Smaller values of ${\sigma }_{\text{ grad }}$ and ${e}_{\text{ grad }}$ mean that the gradient of the learned SDF is smooth, and thus can potentially give good directions when queried to resolve self-collision.
+
+ We test our two-pass learning algorithm on the bunny, snake, and bracelet models. The bracelet and bunny have relatively simple C-space boundaries: the bracelet can only generate collisions between the two ends of its crack, and collisions of the bunny only happen between the two ears and the back. The snake model, however, has a more complex boundary and SDF, because it has many adjacent coils that can collide.
+
+ The test results are shown in Table 1. The test sign accuracy for models with 3 or 4 modes can reach more than 99%. The sign accuracy goes down as we increase the number of modes, dropping to around ${95}\%$ at 7 modes. This is not ideal, considering that the test samples are uniformly sampled in C-space, so many of them are far from the ground truth boundary, where it is easy to predict the sign correctly. Figure 7 visualizes the 2D decision boundaries of trained neural networks for the bracelet model, providing some intuition for the sign accuracies in Table 1. For a bracelet with 3 deformation modes, the 99.77% sign accuracy indicates a very well-aligned collision boundary between the prediction and the ground truth. When the number of modes increases to 7, the 94.68% accuracy prediction boundary is less ideal, and becomes a coarse approximation of the ground truth.
+
+ The signed distance error for models with 3 or 4 modes is around 0.01, which is good considering we are testing in the range of a $\left\lbrack {-1,1}\right\rbrack$ hyper-cube. It grows to approximately 0.1 when the number of modes reaches 7.
+
+ Figure 7: Visualization of a 2D slice (first two modes) of higher dimensional C-space boundaries for the bracelet model, with all other coordinates set to zero. The NN prediction struggles to fit the boundary in higher dimensional spaces.
+
+ § 6.3 SUMMARY
+
+ In our experiments, we perform sign accuracy tests on neural networks with different architectures and sizes. Through these tests, we gain intuition about the self-collision boundary complexity of different models, and we can reasonably choose the number and size of the hidden layers. We also measure the quality of the SDF trained with the two-pass learning method, and apply the learned SDF in real-time reduced model simulation for self-collision detection and response.
+
+ Our method works very well for learning the SDF in low dimensional configuration spaces. The 2D examples show that the two-pass learning algorithm not only successfully learns a representation of the boundary, but also provides a smooth SDF within the 2D configuration space. The simulation examples of the spring and the bracelet with 3 modes also show that the trained neural networks provide good self-collision approximation and generate reasonable self-collision responses. However, when the target SDF has more dimensions, our learning method has a difficult time learning a good approximation of the boundary as well as the SDF. This can be seen in the supplementary video for the 7 mode snake and the 10 mode bunny.
+
+ § 7 CONCLUSION
+
+ In this work, we propose the concepts of the self-collision boundary and the C-space SDF. We also propose and implement a two-pass active learning algorithm that approximates the C-space SDF with a neural network trained on samples with only sign labels. The main idea is to use exploration and exploitation criteria to pick the most informative samples, improving convergence speed. We also use an eikonal loss term and approximated signed distances to ensure that the neural network is skilled not only at determining the boundary, but also at representing the distance to the boundary. Moreover, we propose a method that uses the trained SDF for self-collision detection, and sets up a self-contact constraint matrix with its gradient.
+
+ § 7.1 ADVANTAGES AND LIMITATIONS
+
+ Our learning approach uses active learning to select samples for training, which helps improve sample efficiency and reduce the number of SCD queries. We have shown that our method does a great job of learning the collision boundary of models with a small number of modes, and can reconstruct the signed distance field very well in 2D. The learning is based purely on samples with only sign labels, which helps us bypass the dilemma of having neither ground truth distances nor a stable way to approximate signed distances. Furthermore, the cost of evaluating the learned SDF to detect self-collision is constant, and making use of the gradient for self-contact handling is compatible with standard constraint solving methods.
+
+ Our work also has important limitations. One limitation is that our learning method currently only works well when the model has a small number of modes. Although we reduce the adverse impact of increasing dimension by selecting informative samples, the curse of dimensionality persists and makes it hard to learn the SDF in high dimensional spaces. For simple models that require a small number of modes to deform, our algorithm works nicely and produces great results, but the learner still struggles to learn the collision boundary in high dimensional spaces, which is the case when the model needs a large number of modes to produce plausible deformation. Another limitation is that our way of moving out of self-contact along the SDF gradient cannot take frictional contact into account. Although this method generates plausible self-collision solutions, the relative velocity between intersecting parts is not necessarily along the normal direction of the contact plane. By only evaluating signed distances in the configuration space, our method cannot provide information about contact normals, and thus we cannot set up constraints for friction caused by self-collision.
+
+ § 7.2 FUTURE WORK
+
+ There are many possible ways to overcome the existing limitations. One possible improvement is a new sampling strategy to improve reliability and accuracy when learning high dimensional subspaces. Currently, the exploitation samples are selected from a data pool ${\mathbf{Q}}_{{\text{ pool }}_{i}}$ that is generated at each iteration. The picked exploitation samples are close to the boundary $f\left( \mathbf{q}\right) = 0$ only if the data pool has samples close to the boundary, which cannot be easily achieved when the C-space dimension is high. Instead, we could look for a method that generates ${\mathbf{Q}}_{{\text{ pool }}_{i}}$ whose samples are mostly close to the boundary. One idea is to make use of the network gradient, applying Newton’s method to find roots of $f\left( \mathbf{q}\right) = 0$ , as sketched below.
+
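+ A speculative illustration of that idea, not an evaluated component of this work: the least-norm Newton step for a scalar equation pulls a sample onto the learned boundary.
+
+ ```python
+ import torch
+
+ def project_to_boundary(model, q, iters=10, tol=1e-4):
+     """Newton-style root finding for f(q) = 0, stepping along the SDF gradient."""
+     for _ in range(iters):
+         qv = q.detach().clone().requires_grad_(True)
+         f = model(qv.unsqueeze(0)).squeeze()
+         if f.abs().item() < tol:
+             break
+         (g,) = torch.autograd.grad(f, qv)
+         # least-norm Newton update: q <- q - f * grad / ||grad||^2
+         q = q - f.item() * g / (g.norm() ** 2 + 1e-12)
+     return q
+ ```
+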
+ We can also further extend our work to other applications. We can treat inter-object collision detection as a self-collision problem by taking all the objects as a whole and learning the combined C-space. Given that our learning method is good at reconstructing 3D space, we could also potentially make use of it in mesh reconstruction.