Daoze committed on
Commit
cf64f7a
·
verified ·
1 Parent(s): e6d975c

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. papers/GSK/GSK 2023/GSK 2023 CBC/Cx9B85IlEVR/Initial_manuscript_md/Initial_manuscript.md +102 -0
  2. papers/GSK/GSK 2023/GSK 2023 CBC/Cx9B85IlEVR/Initial_manuscript_tex/Initial_manuscript.tex +127 -0
  3. papers/GSK/GSK 2023/GSK 2023 CBC/TOaPl9tXlmD/Initial_manuscript_md/Initial_manuscript.md +545 -0
  4. papers/GSK/GSK 2023/GSK 2023 CBC/TOaPl9tXlmD/Initial_manuscript_tex/Initial_manuscript.tex +117 -0
  5. papers/GSK/GSK 2023/GSK 2023 CBC/Wf0QRYUkhwV/Initial_manuscript_md/Initial_manuscript.md +89 -0
  6. papers/GSK/GSK 2023/GSK 2023 CBC/Wf0QRYUkhwV/Initial_manuscript_tex/Initial_manuscript.tex +61 -0
  7. papers/GSK/GSK 2023/GSK 2023 CBC/gpDOOAOmMe/Initial_manuscript_md/Initial_manuscript.md +95 -0
  8. papers/GSK/GSK 2023/GSK 2023 CBC/gpDOOAOmMe/Initial_manuscript_tex/Initial_manuscript.tex +87 -0
  9. papers/GSK/GSK 2023/GSK 2023 CBC/hFx9EUs320I/Initial_manuscript_md/Initial_manuscript.md +111 -0
  10. papers/GSK/GSK 2023/GSK 2023 CBC/hFx9EUs320I/Initial_manuscript_tex/Initial_manuscript.tex +99 -0
  11. papers/GSK/GSK 2023/GSK 2023 CBC/hYT_pgTxjrR/Initial_manuscript_md/Initial_manuscript.md +47 -0
  12. papers/GSK/GSK 2023/GSK 2023 CBC/hYT_pgTxjrR/Initial_manuscript_tex/Initial_manuscript.tex +43 -0
  13. papers/GSK/GSK 2023/GSK 2023 CBC/nB9zUwS2gpI/Initial_manuscript_md/Initial_manuscript.md +69 -0
  14. papers/GSK/GSK 2023/GSK 2023 CBC/nB9zUwS2gpI/Initial_manuscript_tex/Initial_manuscript.tex +171 -0
  15. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference/KP0u0nSIwOW/Initial_manuscript_md/Initial_manuscript.md +276 -0
  16. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference/qp1fTRLKbIj/Initial_manuscript_md/Initial_manuscript.md +301 -0
  17. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference/qp1fTRLKbIj/Initial_manuscript_tex/Initial_manuscript.tex +189 -0
  18. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/1bxh-dKdrn4/Initial_manuscript_md/Initial_manuscript.md +633 -0
  19. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/1bxh-dKdrn4/Initial_manuscript_tex/Initial_manuscript.tex +379 -0
  20. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/AME0sErWj0j/Initial_manuscript_md/Initial_manuscript.md +411 -0
  21. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/AME0sErWj0j/Initial_manuscript_tex/Initial_manuscript.tex +289 -0
  22. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/AUa_CiMnZ9/Initial_manuscript_md/Initial_manuscript.md +273 -0
  23. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/AUa_CiMnZ9/Initial_manuscript_tex/Initial_manuscript.tex +229 -0
  24. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/CrkHdts-KT/Initial_manuscript_md/Initial_manuscript.md +327 -0
  25. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/CrkHdts-KT/Initial_manuscript_tex/Initial_manuscript.tex +211 -0
  26. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/Gkogn48LeI/Initial_manuscript_md/Initial_manuscript.md +495 -0
  27. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/Gkogn48LeI/Initial_manuscript_tex/Initial_manuscript.tex +684 -0
  28. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/N0RiLoidWE/Initial_manuscript_md/Initial_manuscript.md +377 -0
  29. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/N0RiLoidWE/Initial_manuscript_tex/Initial_manuscript.tex +339 -0
  30. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/U8p66V2PeEa/Initial_manuscript_md/Initial_manuscript.md +533 -0
  31. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/U8p66V2PeEa/Initial_manuscript_tex/Initial_manuscript.tex +365 -0
  32. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/UlIJS3dcMi/Initial_manuscript_md/Initial_manuscript.md +225 -0
  33. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/UlIJS3dcMi/Initial_manuscript_tex/Initial_manuscript.tex +209 -0
  34. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/akc8f5ampp/Initial_manuscript_md/Initial_manuscript.md +445 -0
  35. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/akc8f5ampp/Initial_manuscript_tex/Initial_manuscript.tex +440 -0
  36. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/gItvr7Xl66/Initial_manuscript_md/Initial_manuscript.md +369 -0
  37. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/gItvr7Xl66/Initial_manuscript_tex/Initial_manuscript.tex +253 -0
  38. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/hZlwUFmka-U/Initial_manuscript_md/Initial_manuscript.md +335 -0
  39. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/hZlwUFmka-U/Initial_manuscript_tex/Initial_manuscript.tex +283 -0
  40. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/ivIPr2ukrwk/Initial_manuscript_md/Initial_manuscript.md +473 -0
  41. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/ivIPr2ukrwk/Initial_manuscript_tex/Initial_manuscript.tex +253 -0
  42. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/ot-dY9S1U-/Initial_manuscript_md/Initial_manuscript.md +415 -0
  43. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/ot-dY9S1U-/Initial_manuscript_tex/Initial_manuscript.tex +376 -0
  44. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/sJPz-4Rwghv/Initial_manuscript_md/Initial_manuscript.md +431 -0
  45. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/sJPz-4Rwghv/Initial_manuscript_tex/Initial_manuscript.tex +675 -0
  46. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/x_MfBxtP2Y/Initial_manuscript_md/Initial_manuscript.md +397 -0
  47. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/x_MfBxtP2Y/Initial_manuscript_tex/Initial_manuscript.tex +320 -0
  48. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/yWIplfQfx8/Initial_manuscript_md/Initial_manuscript.md +341 -0
  49. papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/yWIplfQfx8/Initial_manuscript_tex/Initial_manuscript.tex +261 -0
  50. papers/HRI/HRI 2022/HRI 2022 Workshop/HRI 2022 Workshop VAM-HRI/BSrx_Q2-Akq/Initial_manuscript_md/Initial_manuscript.md +111 -0
papers/GSK/GSK 2023/GSK 2023 CBC/Cx9B85IlEVR/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,102 @@
# CausalBench Challenge: Differences in Mean Expression

Marcin Kowiel, Wojciech Kotlowski${}^{1}$, Dariusz Brzezinski${}^{1}$

${}^{1}$ Institute of Computing Science, Poznan University of Technology

## Abstract

In this write-up, we describe our solution to the 2023 CausalBench Challenge, covering our approaches to preprocessing the data, parameterizing DCDI and GRNBoost, and modifying the baseline algorithms.

## 1 Data pre-processing and post-processing

In parallel with developing modifications of the baseline DCDI and GRNBoost algorithms, we considered modifications to the input and output data of these algorithms. In particular, we analyzed good initial values for the gene expression threshold and the output graph size.

Gene expression threshold. The gene expression threshold is used to remove genes that have a non-zero expression in less than a user-defined fraction of the samples. The default value of 0.25 resulted in DCDI performance that was visibly worse than that reported in [1]. Therefore, we changed the default expression threshold to 0.5 and used this value in further experiments. Moreover, we omitted samples labeled as 'excluded'.

Output graph size. The challenge submissions are evaluated based on the mean Wasserstein distance between the expression distributions of connected pairs of nodes in the output graph. Seeing that not all pairs are equally important, and that methods such as GRNBoost rely on sorting pairs according to importance and then selecting only a subset of them using a threshold, we assumed that smaller graphs would be more likely to have a higher mean Wasserstein distance. To verify this hypothesis, we plotted the mean Wasserstein distance for GRNBoost graphs of different sizes. As Figure 1 shows, the mean Wasserstein distance indeed decreases as the number of edges in the graph grows. Although GRNBoost does not perfectly sort gene pairs according to differences between expression distributions, the results are still very good. Therefore, in further experiments, we always limited the number of edges to 1,000, which was the smallest output graph allowed by the competition rules.

[Plot: mean Wasserstein distance (y-axis, approx. 0.14-0.24) versus the number of edges in the output graph (x-axis, 1000-5000).]

Figure 1: Mean Wasserstein distance for different sizes of GRNBoost output graphs.

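The metric can be sketched in a few lines; the snippet below is our reading of it (not the official CausalBench evaluation code): for each predicted edge $(X, Y)$, the expression of $Y$ in observational samples is compared with its expression in samples where $X$ was perturbed, and the distances are averaged over the edges. The names `expr`, `intervened_on`, and `genes` are illustrative.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def mean_wasserstein(edges, expr, intervened_on, genes):
    """edges: (x, y) gene-name pairs; expr: samples x genes array;
    intervened_on: perturbed gene per sample (None if observational)."""
    col = {g: i for i, g in enumerate(genes)}
    obs = np.array([g is None for g in intervened_on])
    dists = []
    for x, y in edges:
        pert = np.array([g == x for g in intervened_on])
        if pert.any():  # distance is defined only if x was ever perturbed
            dists.append(wasserstein_distance(expr[obs, col[y]],
                                              expr[pert, col[y]]))
    return float(np.mean(dists))
```
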
## 2 Tested approaches

In this section, we discuss the successive models we tested while preparing our final CausalBench Challenge submission.

DCDI and GRNBoost baselines. Before designing modifications, we ran experiments on DCDI [2] and GRNBoost [3] with different parameters. As mentioned in the previous section, we finally settled on a gene expression threshold of 0.5 and an output graph consisting of 1,000 edges. We also tested different versions of DCDI-G (which offered better performance than DCDI-DSF). As can be seen in the left panel of Figure 2, our results on the RPE dataset [4] are in accordance with those presented in [1], i.e., DCDI-G offers the best performance, followed by GRNBoost. These results served as a reference point for our modifications.

[Plot, two panels ("Baseline", "Modification"): mean Wasserstein distance (y-axis, approx. 0.0-0.4) versus the fraction of intervention data (x-axis, 0.25-1.00) for DCDI-G, GRNBoost, GRNBoost expression + intervention, GRNBoost intervention encoding, and GRNBoost only intervention flag.]

Figure 2: Mean Wasserstein distance of baseline algorithms and modifications of GRNBoost on the RPE dataset. Left panel: baseline algorithms (DCDI-G and GRNBoost). Right panel: GRNBoost modifications.

GRNBoost with intervention encoding. Our first modification involved adding information about interventions to GRNBoost. GRNBoost creates multiple regressors, each one predicting the expression value of a gene based on the expression values of the other genes. In its original form, GRNBoost treats all samples equally and has no notion of gene interventions. The first and simplest modification involved changing the expression value of perturbed genes to -100 (Figure 3, left panel). By doing so, our goal was to differentiate between interventions and naturally occurring zero expression of a given gene. Since GRNBoost relies on regression trees, we did not worry about the concrete intervention encoding value, as long as it separated interventions from observational values; hence we only tested the value -100. The experimental results of this modification for the RPE dataset are presented in the right panel of Figure 2. As can be noticed, the intervention encoding strategy offered slightly better performance than the baseline GRNBoost.

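A minimal sketch of this encoding, assuming a data frame with one expression column per gene and an `intervention` column naming the perturbed gene (the column names are illustrative):

```python
import pandas as pd

def encode_interventions(df: pd.DataFrame, intervention_col: str = "intervention") -> pd.DataFrame:
    """Overwrite the expression of the perturbed gene with -100 so that the
    regression trees can separate interventions from natural zero expression."""
    df = df.copy()
    for gene in df.columns.drop(intervention_col):
        df.loc[df[intervention_col] == gene, gene] = -100.0
    return df
```
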
Original dataset:

| gene X | gene Y | intervention | gene Z (target) |
|---|---|---|---|
| 1.00 | 3.00 | none | 2.00 |
| 0.00 | 1.00 | gene X | 1.00 |
| 2.00 | 1.00 | gene Y | 0.00 |

A) Intervention encoding:

| gene X | gene Y | gene Z (target) |
|---|---|---|
| 1.00 | 3.00 | 2.00 |
| -100.00 | 1.00 | 1.00 |
| 2.00 | -100.00 | 0.00 |

B) Expression and intervention flag:

| gene X | gene Y | int_geneX | int_geneY | gene Z (target) |
|---|---|---|---|---|
| 1.00 | 3.00 | 0 | 0 | 2.00 |
| 0.00 | 1.00 | 1 | 0 | 1.00 |
| 2.00 | 1.00 | 0 | 1 | 0.00 |

C) Only intervention flag:

| int_geneX | int_geneY | gene Z (target) |
|---|---|---|
| 0 | 0 | 2.00 |
| 1 | 0 | 1.00 |
| 0 | 1 | 0.00 |

Figure 3: Schematic of data modifications performed to introduce intervention information to GRNBoost. Each table presents the dataset used to train one regressor to predict the expression of gene $\mathrm{Z}$ based on the expression values of genes $\mathrm{X}$ and $\mathrm{Y}$.

GRNBoost with intervention flag columns. The approach described in the previous paragraph has the downside that the intervention encoding value -100 hides the true expression of the gene in the sample, thus removing some information from the dataset. Therefore, as our second modification, instead of replacing expression values, we added a set of columns with binary flags determining whether a particular gene was perturbed in a given sample (Figure 3, center panel). Somewhat surprisingly, this strategy of extending the dataset performed worse than intervention encoding (Figure 2).

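Under the same assumed layout, this variant can be sketched as appending one binary `int_<gene>` column per gene, as in Figure 3 (center panel):

```python
import pandas as pd

def add_intervention_flags(df: pd.DataFrame, intervention_col: str = "intervention") -> pd.DataFrame:
    df = df.copy()
    for gene in df.columns.drop(intervention_col):
        # int_<gene> is 1 when this sample's intervention targeted <gene>
        df[f"int_{gene}"] = (df[intervention_col] == gene).astype(int)
    return df
```
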
GRNBoost with only intervention flag columns. Since extending the dataset with more columns seemed to have added more noise, we also tried another strategy, one in which we discarded the expression values altogether and left only the intervention flags (Figure 3, right panel). This GRNBoost modification worked significantly better than the previous two (Figure 2). Since using only binary (one-hot) intervention flags to predict expression boils down to estimating the means for sub-populations of the dataset, we decided to test strategies that estimate mean expression directly.

Mean expression estimation. We measured the strength of the causal relationship $X \rightarrow Y$ for every gene pair $X, Y$ for which interventions on $X$ were available. To this end, we separately calculated for gene $Y$ its mean expression values ${\bar{Y}}_{O}$ and ${\bar{Y}}_{X}$ on the observational data and on the interventional data concerning perturbations of $X$, respectively. The difference in means, $\left| {{\bar{Y}}_{O} - {\bar{Y}}_{X}}\right|$, was used to measure the strength of the relationship, and then to sort gene pairs and select the 1,000 pairs with the largest differences. This simple approach, which is essentially a regression model of $Y$ on the intervention flag of $X$, turned out to significantly outperform all previously tested strategies on the RPE dataset, as seen in Figure 4. We note that for mean difference estimation, we did not employ any gene expression threshold.

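A sketch of this scoring, with the same illustrative names as before (`expr` is a samples × genes array):

```python
import numpy as np

def mean_diff_edges(expr, intervened_on, genes, k=1000):
    col = {g: i for i, g in enumerate(genes)}
    obs = np.array([g is None for g in intervened_on])
    obs_mean = expr[obs].mean(axis=0)               # Y-bar_O for every gene Y
    scores = []
    for x in {g for g in intervened_on if g is not None}:
        mask = np.array([g == x for g in intervened_on])
        int_mean = expr[mask].mean(axis=0)          # Y-bar_X for every gene Y
        scores += [(abs(obs_mean[col[y]] - int_mean[col[y]]), x, y)
                   for y in genes if y != x]
    scores.sort(reverse=True)
    return [(x, y) for _, x, y in scores[:k]]       # top-k strongest pairs
```
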
[Plot: mean Wasserstein distance (y-axis, approx. 0.2-1.0) versus the fraction of intervention data (x-axis, 0.25-1.00) for GRNBoost, GRNBoost exp+int, GRNBoost int enc, GRNBoost only flag, Mean diff, and Mean diff Bayes.]

Figure 4: Comparison of all algorithms. Note that both mean difference methods (Mean diff, Mean diff Bayes) have practically the same performance.

Mean expression estimation with Bayesian correction. Since some of the interventions contained few samples, we decided to correct the mean expression value on the interventional data, ${\bar{Y}}_{X}$, by employing a Bayesian estimator, treating ${\bar{Y}}_{O}$ as the prior mean and the variance of $Y$ on the observational data, $\operatorname{Var}\left( {Y}_{O}\right)$, as the prior variance. This effectively boils down to expressing the difference in means as ${c}_{XY}\left| {{\bar{Y}}_{O} - {\bar{Y}}_{X}}\right|$, with the Bayesian correction factor ${c}_{XY} = \frac{\operatorname{Var}\left( {Y}_{O}\right) }{\operatorname{Var}\left( {Y}_{O}\right) + \operatorname{Var}\left( {Y}_{X}\right) /{n}_{X}}$, where $\operatorname{Var}\left( {Y}_{X}\right)$ is the variance of $Y$ on the interventional data concerning perturbations of $X$, while ${n}_{X}$ is the number of samples in that intervention. Since ${c}_{XY} \leq 1$ and increases with ${n}_{X}$, this has the effect of discounting the mean differences for small interventional datasets. However, the Bayesian estimation brought only an insignificant improvement when compared with the previous approach, essentially returning an almost identical set of top 1,000 pairs (Figure 4).

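The correction factor is cheap to compute; a sketch for a single pair $(X, Y)$, following the formula above:

```python
import numpy as np

def bayes_corrected_diff(y_obs: np.ndarray, y_int: np.ndarray) -> float:
    """y_obs: expression of Y in observational samples;
    y_int: expression of Y in the n_X samples where X was perturbed."""
    n_x = len(y_int)
    c_xy = y_obs.var() / (y_obs.var() + y_int.var() / n_x)  # shrinkage factor
    return c_xy * abs(y_obs.mean() - y_int.mean())
```
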
Considering all of the experimental results (Supplementary Table 1) and the above analyses, our final submission consisted of omitting samples labeled as 'excluded', estimating the mean expression of genes for each intervention, and selecting the 1,000 gene pairs with the largest expression differences.

## 3 Discussion

Our final submission consisted of a very simple algorithm that estimates the mean expression of genes in situations when a different gene is intervened upon. The reason we settled on such a simple method rather than a more elaborate one stems from three factors:

1. the fact that this is a competition, not an exploratory analysis;

2. the format of the training and testing data;

3. the competition's evaluation metric.

The first factor is obvious: since we are participating in a competition, discovering new interesting causal gene relationships becomes less important than achieving the best performance according to the competition rules. During the exploratory analysis and tests of various approaches, we realized that every step that led to performance improvements was essentially pulling a given method towards estimating the difference in means on the observational and the interventional data. Therefore, we eventually decided to use mean estimation as the sole method for causal graph edge prediction. The second factor, the data format, required us to predict interactions only between genes that were present in the input data and which, in most cases, had interventions. Without expression data for genes without interventions, there was no reason to predict causality between unperturbed genes. Finally, for a well-behaved predictor, the value of the competition evaluation metric will decrease as the number of predicted edges increases; therefore, it was always optimal to predict as few gene interactions as the competition allowed.

The above factors made our submission much simpler, but also much less applicable to industry needs. To alleviate the above-mentioned issues, we believe it would be necessary to require contestants to predict causal relations between genes that have interventions as well as those pairs that are purely observational. For that to be possible, the input data should have more genes without any interventions, and the algorithm should receive as input the pairs of genes it is going to be evaluated on. With such a setup, the organizers would be able to force predictions on observational genes from the training data and evaluate them based on held-out interventional data. By prespecifying which gene pairs the algorithm is supposed to assess, the problem of predicting the smallest possible graph would also disappear. In general, gene pairs could be evaluated in three cross-validation or holdout settings (cv1, cv2, cv3), as proposed for synthetic lethality pairs by Wang et al. [5].

## References

[1] Mathieu Chevalley, Yusuf Roohani, Arash Mehrjou, Jure Leskovec, and Patrick Schwab. CausalBench: A large-scale benchmark for network inference from single-cell perturbation data, 2022. arXiv:2210.17283.

[2] Philippe Brouillard, Sébastien Lachapelle, Alexandre Lacoste, Simon Lacoste-Julien, and Alexandre Drouin. Differentiable causal discovery from interventional data. Advances in Neural Information Processing Systems, 33:21865-21877, 2020.

[3] Vân Anh Huynh-Thu, Alexandre Irrthum, Louis Wehenkel, and Pierre Geurts. Inferring regulatory networks from expression data using tree-based methods. PLoS ONE, 5(9):e12776, 2010.

[4] Aviad Tsherniak, Francisca Vazquez, Phil G Montgomery, Barbara A Weir, Gregory Kryukov, Glenn S Cowley, Stanley Gill, William F Harrington, Sasha Pantel, John M Krill-Burger, et al. Defining a cancer dependency map. Cell, 170(3):564-576, 2017.

[5] Shike Wang, Yimiao Feng, Xin Liu, Yong Liu, Min Wu, and Jie Zheng. NSF4SL: negative-sample-free contrastive learning for ranking synthetic lethal partner genes in human cancers. Bioinformatics, 38(S2):ii13-ii19, 2022.

## A Supplementary data

Table 1: Experimental results of all the algorithms on the RPE dataset.

| Algorithm | Fraction of intervention data | Mean Wasserstein distance |
|---|---|---|
| DCDI-G | 0.25 | 0.1771 |
| DCDI-G | 0.50 | 0.1755 |
| DCDI-G | 0.75 | 0.1890 |
| DCDI-G | 1.00 | 0.1845 |
| Mean diff | 0.25 | 0.4697 |
| Mean diff | 0.50 | 0.6357 |
| Mean diff | 0.75 | 0.7541 |
| Mean diff | 1.00 | 0.8130 |
| GRNBoost | 0.25 | 0.1462 |
| GRNBoost | 0.50 | 0.1471 |
| GRNBoost | 0.75 | 0.1473 |
| GRNBoost | 1.00 | 0.1520 |
| Mean diff Bayes | 0.25 | 0.4699 |
| Mean diff Bayes | 0.50 | 0.6354 |
| Mean diff Bayes | 0.75 | 0.7542 |
| Mean diff Bayes | 1.00 | 0.8128 |
| GRNBoost intervention encoding | 0.25 | 0.1669 |
| GRNBoost intervention encoding | 0.50 | 0.1679 |
| GRNBoost intervention encoding | 0.75 | 0.1662 |
| GRNBoost intervention encoding | 1.00 | 0.1548 |
| GRNBoost expression + intervention | 0.25 | 0.1510 |
| GRNBoost expression + intervention | 0.50 | 0.1513 |
| GRNBoost expression + intervention | 0.75 | 0.1604 |
| GRNBoost expression + intervention | 1.00 | 0.1598 |
| GRNBoost only intervention flag | 0.25 | 0.3913 |
| GRNBoost only intervention flag | 0.50 | 0.4995 |
| GRNBoost only intervention flag | 0.75 | 0.5542 |
| GRNBoost only intervention flag | 1.00 | 0.5855 |

papers/GSK/GSK 2023/GSK 2023 CBC/Cx9B85IlEVR/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,127 @@
§ CAUSALBENCH CHALLENGE: DIFFERENCES IN MEAN EXPRESSION

Marcin Kowiel, Wojciech Kotlowski${}^{1}$, Dariusz Brzezinski${}^{1}$

${}^{1}$ Institute of Computing Science, Poznan University of Technology

Abstract

In this write-up, we describe our solution to the 2023 CausalBench Challenge, covering our approaches to preprocessing the data, parameterizing DCDI and GRNBoost, and modifying the baseline algorithms.

§ 1 DATA PRE-PROCESSING AND POST-PROCESSING

In parallel with developing modifications of the baseline DCDI and GRNBoost algorithms, we considered modifications to the input and output data of these algorithms. In particular, we analyzed good initial values for the gene expression threshold and the output graph size.

Gene expression threshold. The gene expression threshold is used to remove genes that have a non-zero expression in less than a user-defined fraction of the samples. The default value of 0.25 resulted in DCDI performance that was visibly worse than that reported in [1]. Therefore, we changed the default expression threshold to 0.5 and used this value in further experiments. Moreover, we omitted samples labeled as 'excluded'.

Output graph size. The challenge submissions are evaluated based on the mean Wasserstein distance between the expression distributions of connected pairs of nodes in the output graph. Seeing that not all pairs are equally important, and that methods such as GRNBoost rely on sorting pairs according to importance and then selecting only a subset of them using a threshold, we assumed that smaller graphs would be more likely to have a higher mean Wasserstein distance. To verify this hypothesis, we plotted the mean Wasserstein distance for GRNBoost graphs of different sizes. As Figure 1 shows, the mean Wasserstein distance indeed decreases as the number of edges in the graph grows. Although GRNBoost does not perfectly sort gene pairs according to differences between expression distributions, the results are still very good. Therefore, in further experiments, we always limited the number of edges to 1,000, which was the smallest output graph allowed by the competition rules.

[graphics]

Figure 1: Mean Wasserstein distance for different sizes of GRNBoost output graphs.

§ 2 TESTED APPROACHES

In this section, we discuss the successive models we tested while preparing our final CausalBench Challenge submission.

DCDI and GRNBoost baselines. Before designing modifications, we ran experiments on DCDI [2] and GRNBoost [3] with different parameters. As mentioned in the previous section, we finally settled on a gene expression threshold of 0.5 and an output graph consisting of 1,000 edges. We also tested different versions of DCDI-G (which offered better performance than DCDI-DSF). As can be seen in the left panel of Figure 2, our results on the RPE dataset [4] are in accordance with those presented in [1], i.e., DCDI-G offers the best performance, followed by GRNBoost. These results served as a reference point for our modifications.

[graphics]

Figure 2: Mean Wasserstein distance of baseline algorithms and modifications of GRNBoost on the RPE dataset. Left panel: baseline algorithms (DCDI-G and GRNBoost). Right panel: GRNBoost modifications.

GRNBoost with intervention encoding. Our first modification involved adding information about interventions to GRNBoost. GRNBoost creates multiple regressors, each one predicting the expression value of a gene based on the expression values of the other genes. In its original form, GRNBoost treats all samples equally and has no notion of gene interventions. The first and simplest modification involved changing the expression value of perturbed genes to -100 (Figure 3, left panel). By doing so, our goal was to differentiate between interventions and naturally occurring zero expression of a given gene. Since GRNBoost relies on regression trees, we did not worry about the concrete intervention encoding value, as long as it separated interventions from observational values; hence we only tested the value -100. The experimental results of this modification for the RPE dataset are presented in the right panel of Figure 2. As can be noticed, the intervention encoding strategy offered slightly better performance than the baseline GRNBoost.

Original dataset:

gene X   gene Y   intervention   gene Z (target)
1.00     3.00     none           2.00
0.00     1.00     gene X         1.00
2.00     1.00     gene Y         0.00

A) Intervention encoding:

gene X    gene Y    gene Z (target)
1.00      3.00      2.00
-100.00   1.00      1.00
2.00      -100.00   0.00

B) Expression and intervention flag:

gene X   gene Y   int_geneX   int_geneY   gene Z (target)
1.00     3.00     0           0           2.00
0.00     1.00     1           0           1.00
2.00     1.00     0           1           0.00

C) Only intervention flag:

int_geneX   int_geneY   gene Z (target)
0           0           2.00
1           0           1.00
0           1           0.00

Figure 3: Schematic of data modifications performed to introduce intervention information to GRNBoost. Each table presents the dataset used to train one regressor to predict the expression of gene $\mathrm{Z}$ based on the expression values of genes $\mathrm{X}$ and $\mathrm{Y}$.

GRNBoost with intervention flag columns. The approach described in the previous paragraph has the downside that the intervention encoding value -100 hides the true expression of the gene in the sample, thus removing some information from the dataset. Therefore, as our second modification, instead of replacing expression values, we added a set of columns with binary flags determining whether a particular gene was perturbed in a given sample (Figure 3, center panel). Somewhat surprisingly, this strategy of extending the dataset performed worse than intervention encoding (Figure 2).

GRNBoost with only intervention flag columns. Since extending the dataset with more columns seemed to have added more noise, we also tried another strategy, one in which we discarded the expression values altogether and left only the intervention flags (Figure 3, right panel). This GRNBoost modification worked significantly better than the previous two (Figure 2). Since using only binary (one-hot) intervention flags to predict expression boils down to estimating the means for sub-populations of the dataset, we decided to test strategies that estimate mean expression directly.

Mean expression estimation. We measured the strength of the causal relationship $X \rightarrow Y$ for every gene pair $X, Y$ for which interventions on $X$ were available. To this end, we separately calculated for gene $Y$ its mean expression values ${\bar{Y}}_{O}$ and ${\bar{Y}}_{X}$ on the observational data and on the interventional data concerning perturbations of $X$, respectively. The difference in means, $\left| {{\bar{Y}}_{O} - {\bar{Y}}_{X}}\right|$, was used to measure the strength of the relationship, and then to sort gene pairs and select the 1,000 pairs with the largest differences. This simple approach, which is essentially a regression model of $Y$ on the intervention flag of $X$, turned out to significantly outperform all previously tested strategies on the RPE dataset, as seen in Figure 4. We note that for mean difference estimation, we did not employ any gene expression threshold.

[graphics]

Figure 4: Comparison of all algorithms. Note that both mean difference methods (Mean diff, Mean diff Bayes) have practically the same performance.

Mean expression estimation with Bayesian correction. Since some of the interventions contained few samples, we decided to correct the mean expression value on the interventional data, ${\bar{Y}}_{X}$, by employing a Bayesian estimator, treating ${\bar{Y}}_{O}$ as the prior mean and the variance of $Y$ on the observational data, $\operatorname{Var}\left( {Y}_{O}\right)$, as the prior variance. This effectively boils down to expressing the difference in means as ${c}_{XY}\left| {{\bar{Y}}_{O} - {\bar{Y}}_{X}}\right|$, with the Bayesian correction factor ${c}_{XY} = \frac{\operatorname{Var}\left( {Y}_{O}\right) }{\operatorname{Var}\left( {Y}_{O}\right) + \operatorname{Var}\left( {Y}_{X}\right) /{n}_{X}}$, where $\operatorname{Var}\left( {Y}_{X}\right)$ is the variance of $Y$ on the interventional data concerning perturbations of $X$, while ${n}_{X}$ is the number of samples in that intervention. Since ${c}_{XY} \leq 1$ and increases with ${n}_{X}$, this has the effect of discounting the mean differences for small interventional datasets. However, the Bayesian estimation brought only an insignificant improvement when compared with the previous approach, essentially returning an almost identical set of top 1,000 pairs (Figure 4).

Considering all of the experimental results (Supplementary Table 1) and the above analyses, our final submission consisted of omitting samples labeled as 'excluded', estimating the mean expression of genes for each intervention, and selecting the 1,000 gene pairs with the largest expression differences.

§ 3 DISCUSSION

Our final submission consisted of a very simple algorithm that estimates the mean expression of genes in situations when a different gene is intervened upon. The reason we settled on such a simple method rather than a more elaborate one stems from three factors:

1. the fact that this is a competition, not an exploratory analysis;

2. the format of the training and testing data;

3. the competition's evaluation metric.

The first factor is obvious: since we are participating in a competition, discovering new interesting causal gene relationships becomes less important than achieving the best performance according to the competition rules. During the exploratory analysis and tests of various approaches, we realized that every step that led to performance improvements was essentially pulling a given method towards estimating the difference in means on the observational and the interventional data. Therefore, we eventually decided to use mean estimation as the sole method for causal graph edge prediction. The second factor, the data format, required us to predict interactions only between genes that were present in the input data and which, in most cases, had interventions. Without expression data for genes without interventions, there was no reason to predict causality between unperturbed genes. Finally, for a well-behaved predictor, the value of the competition evaluation metric will decrease as the number of predicted edges increases; therefore, it was always optimal to predict as few gene interactions as the competition allowed.

The above factors made our submission much simpler, but also much less applicable to industry needs. To alleviate the above-mentioned issues, we believe it would be necessary to require contestants to predict causal relations between genes that have interventions as well as those pairs that are purely observational. For that to be possible, the input data should have more genes without any interventions, and the algorithm should receive as input the pairs of genes it is going to be evaluated on. With such a setup, the organizers would be able to force predictions on observational genes from the training data and evaluate them based on held-out interventional data. By prespecifying which gene pairs the algorithm is supposed to assess, the problem of predicting the smallest possible graph would also disappear. In general, gene pairs could be evaluated in three cross-validation or holdout settings (cv1, cv2, cv3), as proposed for synthetic lethality pairs by Wang et al. [5].

papers/GSK/GSK 2023/GSK 2023 CBC/TOaPl9tXlmD/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,545 @@
# LEARNING GENE REGULATORY NETWORKS UNDER FEW ROOT CAUSES ASSUMPTION

Anonymous authors

Paper under double-blind review

## Abstract

We present a novel directed acyclic graph (DAG) learning method based on a causal form of Fourier-sparsity. Our ideas connect a theory of causal Fourier analysis with data generated by a structural equation model (SEM). We show that data generated by linear SEMs can be characterized in the Fourier domain as having a dense spectrum of root causes with random coefficients. We then propose the new problem of learning DAGs from data with sparse spectra (Fourier-sparsity) or, equivalently, few root causes. We provide proofs of identifiability in the new setting and, moreover, show that the true DAG is the global minimizer of the ${L}^{0}$-norm of the approximated spectra. Our method is applied to the CausalBench Challenge, showing superior performance over the baselines.

## 1 INTRODUCTION

For causality, Fourier-sparsity translates into the data being generated from a few spectral coefficients or (in a sense being defined) few root causes. To formally analyze this, we express the linear SEM equation based on the recently proposed theory of causal Fourier analysis on DAGs (Seifert et al., 2022b;a). We first show that data produced by linear structural equation models (SEMs) in prior work can be viewed as having a dense, random Fourier spectrum of causes, and then extend them to include the case where the data is sparse in the Fourier domain, i.e., has few root causes. It is worth noting that for prior, classical forms of Fourier transforms, Fourier-sparsity has played a significant role, including for the discrete Fourier transform (DFT) (Hassanieh, 2018), the discrete cosine transform (DCT), where it enables JPEG compression (Wallace, 1991), and the Walsh-Hadamard transform (WHT) for estimating set functions (Stobbe & Krause, 2012; Amrollahi et al., 2019).

Contributions. For this competition, we provide the following contributions.

- We analyze linear SEMs in the Fourier domain and show that they yield dense spectra. We then pose the new assumption of data generated from a few root causes.

- For DAG learning under the few root causes assumption, we propose a novel, linear DAG learning method, called MÖBIUS, based on the minimization of the ${L}^{1}$-norm of the approximated spectrum. We provide theoretical guarantees for our method.

- We evaluate our method on the CausalBench dataset and show that MÖBIUS offers improvement over prior DAG learning methods.

## 2 MOTIVATION

Consider a DAG $\mathcal{G} = \left( {V, E}\right)$ with $\left| V\right| = d$ vertices, $E$ the set of directed edges, and no self-loops. We say $j$ is a parent of $i$ whenever $\left( {j, i}\right) \in E$, while $j$ is an ancestor of $i$ if there is a path from $j$ to $i$. The vertices are sorted topologically, and we set accordingly $V = \{ 1,2,\ldots , d\}$. Further, we assume a weighted adjacency matrix $\mathbf{A} = {\left( {a}_{ij}\right) }_{i, j \in V}$ of the graph, where ${a}_{ij} = 0$ if there is no edge.

Linear SEM. A data matrix $\mathbf{X} \in {\mathbb{R}}^{n \times d}$ consisting of $n$ signals (as rows) of dimension $d$ indexed by the DAG $\mathcal{G}$ satisfies a linear SEM (Shimizu et al., 2006; Zheng et al., 2018; Ng et al., 2020) if

$$
\mathbf{X} = \mathbf{X}\mathbf{A} + \mathbf{N}, \tag{1}
$$

where the matrix $\mathbf{N}$ consists of independent random noise samples. Isolating $\mathbf{X}$ in (1) yields:

$$
\mathbf{X} = \mathbf{N}{\left( \mathbf{I} - \mathbf{A}\right) }^{-1} = \mathbf{N}\left( {\mathbf{I} + \mathbf{A} + \ldots + {\mathbf{A}}^{d - 1}}\right) = \mathbf{N}\left( {\mathbf{I} + \overline{\mathbf{A}}}\right), \tag{2}
$$

where $\overline{\mathbf{A}} = \mathbf{A} + {\mathbf{A}}^{2} + \ldots + {\mathbf{A}}^{d - 1}$ denotes the weighted transitive closure of the adjacency matrix. Eq. (2) can be viewed as the closed-form solution of Eq. (1).

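Eq. (2) is easy to verify numerically; below is a minimal sketch, assuming a topologically sorted DAG so that $\mathbf{A}$ is strictly upper-triangular and the power series terminates.

```python
# Sanity check of Eq. (2): for a nilpotent (strictly upper-triangular) A,
# I + A + ... + A^(d-1) equals (I - A)^{-1}.
import numpy as np

d = 5
A = np.triu(np.random.uniform(0.4, 0.8, (d, d)), k=1)  # weighted, topologically sorted DAG
closure = sum(np.linalg.matrix_power(A, k) for k in range(1, d))  # A-bar
assert np.allclose(np.eye(d) + closure, np.linalg.inv(np.eye(d) - A))
```
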
Causal Fourier transform. Eq. (2) connects with the recent causal Fourier analysis framework of Seifert et al. (2022b). Consider the linear equation between the observed signal $\mathbf{s}$ and its causes $\mathbf{c}$:

$$
\mathbf{s} = \left( {\mathbf{I} + {\overline{\mathbf{A}}}^{T}}\right) \mathbf{c}. \tag{3}
$$

Seifert et al. (2022b) argue that $\mathbf{c}$ can be interpreted as a form of spectrum of $\mathbf{s}$. This is done by providing a suitable notion of shift and an associated shift-equivariant convolution whose eigenvectors are the columns of $\mathbf{I} + {\overline{\mathbf{A}}}^{T}$, following the algebraic theory of constructing Fourier analyses by Püschel & Moura (2006; 2008).

Fourier-sparse data. Combining Eqs. (2) and (3) naturally leads to the idea of assuming a more general linear SEM model, which in addition to the random noise term contains an explicit term corresponding to the root causes. The equation generating the data $\mathbf{X} \in {\mathbb{R}}^{n \times d}$ becomes

$$
\mathbf{X} = \left( {\mathbf{C} + {\mathbf{N}}_{f}}\right) \left( {\mathbf{I} + \overline{\mathbf{A}}}\right) + {\mathbf{N}}_{s}. \tag{4}
$$

The matrix $\mathbf{C} \in {\mathbb{R}}^{n \times d}$ represents the root causes and ${\mathbf{N}}_{f},{\mathbf{N}}_{s} \in {\mathbb{R}}^{n \times d}$ the random noises in the frequency and signal domain, respectively. Approximate Fourier-sparsity, or few root causes, means that only a few coefficients in $\mathbf{C}$ are non-zero and the values of ${\mathbf{N}}_{f},{\mathbf{N}}_{s}$ have negligible magnitude.

Example. Consider the first $n$ Fibonacci numbers. The recurrence generating the sequence can be viewed as a linear SEM where each term depends on its two predecessors. Unrolling or solving this recurrence shows, equivalently, that all numbers depend only on the first two. These two are the root causes and yield the setting of Eq. (4) with $\mathbf{C}$ sparse, having only the first two values non-zero, and ${\mathbf{N}}_{f} = {\mathbf{N}}_{s} = \mathbf{0}$. For general linear SEMs, this recurrence-solving yields Eq. (2) from Eq. (1). Doing so yields our novel setting of Eq. (4), which captures the situation where some nodes in the DAG insert spikes of values that then percolate through the DAG as determined by the edge weights, but not exactly (as captured by ${\mathbf{N}}_{f}$) and not exactly measurable (as captured by ${\mathbf{N}}_{s}$).

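This construction can be reproduced in a few lines; the sketch below builds the Fibonacci DAG and recovers the sequence from its two root causes via Eq. (2).

```python
import numpy as np

n = 10
A = np.zeros((n, n))
for i in range(2, n):
    A[i - 1, i] = A[i - 2, i] = 1.0   # each term depends on its two predecessors
closure = sum(np.linalg.matrix_power(A, k) for k in range(1, n))
C = np.zeros(n)
C[0] = C[1] = 1.0                     # the only non-zero root causes
X = C @ (np.eye(n) + closure)         # Eq. (4) with N_f = N_s = 0
print(X)                              # [ 1.  1.  2.  3.  5.  8. 13. 21. 34. 55.]
```
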
## 3 OUR METHOD

Theoretical guarantees. First we show that the novel setting based on the assumption of generation via few root causes is identifiable, and then we define a discrete optimization problem that is guaranteed to find the true DAG given enough data.

Theorem 3.1. Assume data generated via the extended linear SEM Eq. (4). We assume that the spectra $\mathbf{C}$ are independent random variables taking uniform values from $\left\lbrack {0,1}\right\rbrack$ with probability $p$, and are $= 0$ with probability $1 - p$. Then Eq. (4) translates into a linear SEM with non-Gaussian noise, and thus $\mathbf{A}$ is identifiable due to (Shimizu et al., 2006).

Given the data $\mathbf{X}$, we propose the following optimization problem to retrieve the DAG structure:

$$
\mathop{\min }\limits_{{\mathbf{A} \in {\mathbb{R}}^{d \times d}}}\parallel \mathbf{X} - \mathbf{X}\mathbf{A}{\parallel }_{0}\;\text{ s.t. }\;\mathbf{A}\text{ is acyclic. } \tag{5}
$$

Theorem 3.2. Consider a DAG with weighted adjacency matrix $\mathbf{A}$. Suppose that $\widehat{\mathbf{A}}$ is the optimal solution of the optimization problem (5), where the number $n$ of data rows in $\mathbf{X}$ satisfies

$$
n \geq \frac{{2}^{{3d} - 2}d\left( {d - 1}\right) }{{\left( 1 - \delta \right) }^{2}{p}^{k}{\left( 1 - p\right) }^{d - k}}, \tag{6}
$$

where $k = \lfloor {dp}\rfloor$ and

$$
\delta \geq \frac{1}{\sqrt{2}}\max \left\{ \frac{1}{{p}^{k}{\left( 1 - p\right) }^{d - k}\binom{d}{k}}\sqrt{\ln \left( \frac{1}{\epsilon }\right) },\;\binom{d}{k}\sqrt{\ln \left( \frac{\binom{d}{k}}{\epsilon }\right) }\right\}. \tag{7}
$$

Then, with probability ${\left( 1 - \epsilon \right) }^{2}$, $\mathbf{A}$ is the global minimizer of (5), namely $\widehat{\mathbf{A}} = \mathbf{A}$.

![01963a41-6e19-7856-82cd-6975b38cb416_2_306_230_1179_245_0.jpg](images/01963a41-6e19-7856-82cd-6975b38cb416_2_306_230_1179_245_0.jpg)

Figure 1: Evaluation of performance on the datasets K562 and RPE1 (Replogle et al., 2022) based on the CausalBench framework (Chevalley et al., 2022). The Wasserstein distance metric (higher is better) is computed for our method in comparison to several of the implemented baselines.

MÖBIUS. Our method is formed as the continuous relaxation of the discrete optimization problem (5). We substitute the ${L}^{0}$-norm in (5) with its convex approximation (Ramirez et al., 2013), the ${L}^{1}$-norm. Acyclicity is then captured with the continuous constraint $h\left( \mathbf{A}\right) = \operatorname{tr}\left( {e}^{\mathbf{A} \odot \mathbf{A}}\right) - d$ from (Zheng et al., 2018). Finally, we use $R\left( \mathbf{A}\right) = \lambda \parallel \mathbf{A}{\parallel }_{1}$ as the sparsity regularizer for the adjacency matrix, and our final continuous optimization problem is formulated as

$$
\mathop{\min }\limits_{{\mathbf{A} \in {\mathbb{R}}^{d \times d}}}\frac{1}{2n}\parallel \mathbf{X} - \mathbf{X}\mathbf{A}{\parallel }_{1} + \lambda \parallel \mathbf{A}{\parallel }_{1}\;\text{ s.t. }\;h\left( \mathbf{A}\right) = 0. \tag{8}
$$

We call this method MÖBIUS because the Fourier transform of (3) in the causal setting coincides with the weighted Möbius transform for posets (Seifert et al., 2022b).

Handling interventions. The gene expression data provided by the CausalBench framework can contain interventions, either for all genes or for a fraction of them. An intervention assigns a value to a gene that is independent of the expression data of its predecessors. Mathematically, the linear SEM adopting the intervention scheme is formulated with the following equation:

$$
\mathbf{X} = \mathbf{X}\mathbf{A}\mathbf{M} + \mathbf{N}. \tag{9}
$$

$\mathbf{M}$ is an intervention mask that consists of rows of ones, except at position $i$, where it has a 0 when the intervention acts on gene $i$. In that case, the gene is initialized with noise according to Eq. (9) or, more generally, with some spectral value together with noise as captured by Eq. (4). Given that the positions of the interventions in the dataset are known, the optimization problem becomes

$$
\mathop{\min }\limits_{{\mathbf{A} \in {\mathbb{R}}^{d \times d}}}\frac{1}{2n}\parallel \mathbf{X} - \mathbf{X}\mathbf{A}\mathbf{M}{\parallel }_{1} + \lambda \parallel \mathbf{A}{\parallel }_{1}\;\text{ s.t. }\;h\left( \mathbf{A}\right) = 0. \tag{10}
$$

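The sketch below shows one reading of the mask (an assumption on our part, since Eq. (9) leaves the exact shape of $\mathbf{M}$ implicit): one row per sample, with a 0 at the perturbed gene and 1 elsewhere, applied elementwise to $\mathbf{X}\mathbf{A}$ so that the parents' contribution to an intervened gene is zeroed out.

```python
# Hypothetical mask construction: intervened_on[s] is the gene perturbed in
# sample s (or None); M[s, i] = 0 exactly when gene i was intervened on in s.
import numpy as np

def build_mask(intervened_on, genes):
    col = {g: i for i, g in enumerate(genes)}
    M = np.ones((len(intervened_on), len(genes)))
    for s, g in enumerate(intervened_on):
        if g is not None:
            M[s, col[g]] = 0.0
    return M
```
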
## 4 CONTEST EVALUATION

Our method appears to work competitively on synthetic data generated with a few root causes and also on the gene regulatory network dataset by Sachs et al. (2005), as shown in the appendix. In Fig. 1, we present our performance on the gene-gene interaction network benchmark provided by Chevalley et al. (2022). Our method seems to perform better than the implemented baselines and also exhibits an upward trend, which indicates that it benefits from interventions.

Implementation details. For our method, we construct a PyTorch model consisting of a linear layer that represents the weighted adjacency matrix $\mathbf{A}$. Then, given the data $\mathbf{X}$, processed in batches, and the interventional-positions masking matrix $\mathbf{M}$, we train our model with the Adam optimizer with a learning rate of ${10}^{-3}$ to minimize the loss defined by Eq. (10). The final adjacency matrix is thresholded at 0.035, which experimentally resulted in more than a thousand edges.

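A minimal PyTorch sketch of this setup as we read it (not the authors' exact code): $\mathbf{A}$ is a $d \times d$ parameter, the data term follows Eq. (10) with the mask applied elementwise, and the acyclicity constraint $h(\mathbf{A})$ is added as a soft penalty with a hypothetical weight `rho` (the paper states it as a hard constraint; full-batch training is used here for brevity).

```python
import torch

def train(X, M, d, lam=1e-3, rho=10.0, lr=1e-3, epochs=100, tau=0.035):
    A = torch.zeros(d, d, requires_grad=True)      # weighted adjacency matrix
    opt = torch.optim.Adam([A], lr=lr)
    n = X.shape[0]
    for _ in range(epochs):
        opt.zero_grad()
        recon = (X - (X @ A) * M).abs().sum() / (2 * n)   # L1 data term of Eq. (10)
        h = torch.trace(torch.matrix_exp(A * A)) - d      # acyclicity h(A)
        loss = recon + lam * A.abs().sum() + rho * h
        loss.backward()
        opt.step()
    return (A.detach().abs() > tau).numpy()               # threshold the edges
```
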
## 5 CONCLUSION

We presented a new perspective on linear SEMs motivated by a recently proposed causal Fourier analysis for DAGs. Mathematically, this perspective translates (or solves) the recurrence describing the SEM into an invertible linear transformation that takes as input a chosen Fourier spectrum of values, thus called root causes, to produce the data as output. Prior data generation for linear SEMs assumed a dense, random spectrum. In this paper, we adopted the novel scenario of data generated from few root causes to reconstruct the gene-gene interactome. Our assumption seems to perform well in this setting and possibly gives new insights into the generation of gene expression data.

## REFERENCES

Andisheh Amrollahi, Amir Zandieh, Michael Kapralov, and Andreas Krause. Efficiently learning Fourier sparse set functions. Advances in Neural Information Processing Systems, 32, 2019.

Mathieu Chevalley, Yusuf Roohani, Arash Mehrjou, Jure Leskovec, and Patrick Schwab. CausalBench: A large-scale benchmark for network inference from single-cell perturbation data. arXiv preprint arXiv:2210.17283, 2022.

Yinghua Gao, Li Shen, and Shu-Tao Xia. DAG-GAN: Causal structure learning with generative adversarial nets. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3320-3324. IEEE, 2021.

Haitham Hassanieh. The Sparse Fourier Transform: Theory and Practice, volume 19. Association for Computing Machinery and Morgan and Claypool, 2018.

Sébastien Lachapelle, Philippe Brouillard, Tristan Deleu, and Simon Lacoste-Julien. Gradient-based neural DAG learning. arXiv preprint arXiv:1906.02226, 2019.

Ignavier Ng, AmirEmad Ghassami, and Kun Zhang. On the role of sparsity and DAG constraints for learning linear DAGs. Advances in Neural Information Processing Systems, 33:17943-17954, 2020.

M. Püschel and J. M. F. Moura. Algebraic signal processing theory: Foundation and 1-D time. IEEE Trans. Signal Process., 56(8):3572-3585, 2008.

Markus Püschel and José MF Moura. Algebraic signal processing theory. arXiv preprint cs/0612077, 2006.

Carlos Ramirez, Vladik Kreinovich, and Miguel Argaez. Why ${\ell }_{1}$ is a good approximation to ${\ell }_{0}$: A geometric explanation. Journal of Uncertain Systems, 7(3):203-207, 2013.

Alexander Reisach, Christof Seiler, and Sebastian Weichwald. Beware of the simulated DAG! Causal discovery benchmarks may be easy to game. Advances in Neural Information Processing Systems, 34:27772-27784, 2021.

Joseph M Replogle, Reuben A Saunders, Angela N Pogson, Jeffrey A Hussmann, Alexander Lenail, Alina Guna, Lauren Mascibroda, Eric J Wagner, Karen Adelman, Gila Lithwick-Yanai, et al. Mapping information-rich genotype-phenotype landscapes with genome-scale Perturb-seq. Cell, 185(14):2559-2575, 2022.

Karen Sachs, Omar Perez, Dana Pe'er, Douglas A Lauffenburger, and Garry P Nolan. Causal protein-signaling networks derived from multiparameter single-cell data. Science, 308(5721):523-529, 2005.

Bastian Seifert, Chris Wendler, and Markus Püschel. Causal Fourier analysis on directed acyclic graphs and posets. arXiv preprint arXiv:2209.07970, 2022a.

Bastian Seifert, Chris Wendler, and Markus Püschel. Learning Fourier-sparse functions on DAGs. In ICLR 2022 Workshop on the Elements of Reasoning: Objects, Structure and Causality, 2022b.

Shohei Shimizu, Patrik O. Hoyer, Aapo Hyvärinen, and Antti Kerminen. A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7(72):2003-2030, 2006. URL http://jmlr.org/papers/v7/shimizu06a.html

Peter Stobbe and Andreas Krause. Learning Fourier sparse set functions. In Artificial Intelligence and Statistics, pp. 1125-1133. PMLR, 2012.

Gregory K. Wallace. The JPEG still picture compression standard. Commun. ACM, 34(4):30-44, April 1991. ISSN 0001-0782. doi: 10.1145/103085.103089. URL https://doi.org/10.1145/103085.103089

Yue Yu, Jie Chen, Tian Gao, and Mo Yu. DAG-GNN: DAG structure learning with graph neural networks. In International Conference on Machine Learning, pp. 7154-7163. PMLR, 2019.

Xun Zheng, Bryon Aragam, Pradeep K Ravikumar, and Eric P Xing. DAGs with NO TEARS: Continuous optimization for structure learning. Advances in Neural Information Processing Systems, 31, 2018.

Table 1: SHD metric (lower is better) for learning DAGs with 40 nodes and 80 edges. Each row is an experiment. The first row is the default, whose settings are given in the 'Default value' column. In each other row, exactly one default parameter is changed to the value in the 'Current value' column. The last five columns correspond to prior algorithms. The best results are shown in bold.

<table><tr><td/><td>Hyperparameter</td><td>Default value</td><td>Current value</td><td>Varsortability</td><td>MÖBIUS (ours)</td><td>GOLEM-EV</td><td>sortnregress</td><td>NOTEARS</td><td>DAG-NoCurl</td><td>GES</td></tr><tr><td>1.</td><td>Default settings</td><td/><td/><td>0.94</td><td>${0.00} \pm {0.00}$</td><td>${3.10} \pm {2.55}$</td><td>${100.30} \pm {22.55}$</td><td>${3.10} \pm {1.87}$</td><td>${34.90} \pm {7.26}$</td><td>${73.30} \pm {15.13}$</td></tr><tr><td>2.</td><td>Graph type</td><td>Erdös-Renyi</td><td>Scale-free</td><td>0.99</td><td>${0.00} \pm {0.00}$</td><td>${2.30} \pm {1.27}$</td><td>${52.20} \pm {18.24}$</td><td>${2.30} \pm {1.49}$</td><td>${17.00} \pm {11.53}$</td><td>${61.70} \pm {12.04}$</td></tr><tr><td>3.</td><td>Edges / Vertices</td><td>2</td><td>3</td><td>0.97</td><td>${0.00} \pm {0.00}$</td><td>${6.30} \pm {3.07}$</td><td>${158.90} \pm {30.91}$</td><td>${15.10} \pm {8.19}$</td><td>${58.00} \pm {10.83}$</td><td>${162.50} \pm {16.12}$</td></tr><tr><td>4.</td><td>Larger weights in $\mathbf{A}$</td><td>(0.4,0.8)</td><td>(0.5,2)</td><td>0.99</td><td>${9.20} \pm {14.88}$</td><td>${0.70} \pm {0.64}$</td><td>${102.00} \pm {34.06}$</td><td>${11.00} \pm {13.56}$</td><td>${30.40} \pm {11.21}$</td><td>${107.90} \pm {20.95}$</td></tr><tr><td>5.</td><td>High sparsity in $\mathbf{C}$</td><td>$p = {0.3}$</td><td>$p = {0.1}$</td><td>0.96</td><td>${0.00} \pm {0.00}$</td><td>${11.30} \pm {4.47}$</td><td>${94.00} \pm {17.52}$</td><td>${15.40} \pm {2.33}$</td><td>${36.20} \pm {7.82}$</td><td>${73.50} \pm {10.76}$</td></tr><tr><td>6.</td><td>Low sparsity in $\mathbf{C}$</td><td>$p = {0.3}$</td><td>$p = {0.6}$</td><td>0.95</td><td>${60.20} \pm {9.43}$</td><td>${2.40} \pm {1.43}$</td><td>${96.60} \pm {24.19}$</td><td>${3.10} \pm {2.43}$</td><td>${41.00} \pm {10.03}$</td><td>${70.60} \pm {14.94}$</td></tr><tr><td>7.</td><td>${\mathbf{N}}_{f},{\mathbf{N}}_{s}$ deviation</td><td>$\sigma = {0.01}$</td><td>$\sigma = {0.1}$</td><td>0.97</td><td>${0.10} \pm {0.30}$</td><td>${4.10} \pm {3.75}$</td><td>${93.70} \pm {22.07}$</td><td>${3.70} \pm {1.95}$</td><td>${32.70} \pm {7.66}$</td><td>${76.10} \pm {16.98}$</td></tr><tr><td>8.</td><td>${\mathbf{N}}_{f},{\mathbf{N}}_{s}$ distribution</td><td>Gaussian</td><td>Gumbel</td><td>0.96</td><td>${0.00} \pm {0.00}$</td><td>${4.40} \pm {2.01}$</td><td>${86.00} \pm {27.25}$</td><td>${5.50} \pm {2.50}$</td><td>${40.00} \pm {13.65}$</td><td>${74.70} \pm {18.88}$</td></tr><tr><td>9.</td><td>Signal noise</td><td>${\mathbf{N}}_{f} \neq 0,{\mathbf{N}}_{s} = 0$</td><td>${\mathbf{N}}_{f} = 0,{\mathbf{N}}_{s} \neq 0$</td><td>0.96</td><td>${0.00} \pm {0.00}$</td><td>${4.00} \pm {3.19}$</td><td>${88.30} \pm {19.79}$</td><td>${4.90} \pm {2.98}$</td><td>${32.50} \pm {6.31}$</td><td>${72.80} \pm {12.90}$</td></tr><tr><td>10.</td><td>Full noise</td><td>${\mathbf{N}}_{f} \neq 0,{\mathbf{N}}_{s} = 0$</td><td>${\mathbf{N}}_{f} \neq 0,{\mathbf{N}}_{s} \neq 0$</td><td>0.94</td><td>${0.00} \pm {0.00}$</td><td>${5.00} \pm {3.03}$</td><td>${103.60} \pm {27.87}$</td><td>${4.60} \pm {2.83}$</td><td>${29.70} \pm {8.47}$</td><td>${70.40} \pm {17.45}$</td></tr><tr><td>11.</td><td>Standardization</td><td>No</td><td>Yes</td><td>0.50</td><td>${84.90} \pm {8.53}$</td><td>${68.50} \pm {5.66}$</td><td>${243.90} \pm {27.33}$</td><td>${81.40} \pm {4.80}$</td><td>${102.30} \pm {6.63}$</td><td>${68.90} \pm {14.14}$</td></tr><tr><td>12.</td><td>Samples</td><td>$n = {400}$</td><td>$n = {20}$</td><td>0.90</td><td>${252.80} \pm {13.47}$</td><td>${219.60} \pm {32.63}$</td><td>error</td><td>${125.20} \pm {12.81}$</td><td>${378.10} \pm {28.11}$</td><td>${264.60} \pm {19.49}$</td></tr><tr><td>13.</td><td>Fixed support</td><td>No</td><td>Yes</td><td>0.80</td><td>${106.00} \pm {9.20}$</td><td>${161.80} \pm {43.09}$</td><td>${208.90} \pm {45.16}$</td><td>${83.00} \pm {3.26}$</td><td>${287.90} \pm {50.23}$</td><td>${74.90} \pm {12.33}$</td></tr></table>

163
+ ## A EXPERIMENTS
164
+
165
+ Data generating process and defaults. In the blue column of Table 1 we report the default settings for our experiments, explained next. We generate a random Erdös-Renyi graph with $d = {40}$ nodes and assign edge directions to make it a DAG as in (Zheng et al., 2018). The number of edges is set to 80, so the ratio of edges to vertices is 2. The entries of the weighted adjacency matrix are sampled uniformly at random from $\left( {-b, - a}\right) \cup \left( {a, b}\right)$, where $a = {0.4}$ and $b = {0.8}$. Following (Zheng et al., 2018; Ng et al., 2020) the adjacency matrix approximation is post-processed for each algorithm by including only edges with absolute weight larger than a threshold $\omega = {0.3}$. Next, the sparse DAG spectra $\mathbf{C}$ are instantiated by setting each entry either to a random uniform value from $(0,1)$ with probability $p = {0.3}$ or to 0 with probability $1 - p = {0.7}$ (as in Theorem 3.1; thus, also the support will vary between spectra). The data matrix $\mathbf{X}$ is computed according to (4), using Gaussian noise ${\mathbf{N}}_{f},{\mathbf{N}}_{s}$ of standard deviation $\sigma = {0.01}$. By default we set ${\mathbf{N}}_{s} = \mathbf{0}$ to ensure identifiability based on Theorem 3.1, and consider ${\mathbf{N}}_{s} \neq \mathbf{0}$ in a variant next. The low standard deviation ensures that the data are approximately Fourier-sparse. Finally, we do not standardize the data (scale to unit variance), and $\mathbf{X}$ contains $n = {400}$ samples (rows).
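+
+ For concreteness, the following is a minimal sketch of this data-generating process (our own code, assuming numpy and networkx; function and variable names are illustrative):
+
+ ```python
+ import numpy as np
+ import networkx as nx
+
+ def simulate_dataset(d=40, edges_per_node=2, n=400, a=0.4, b=0.8,
+                      p=0.3, sigma=0.01, seed=0):
+     rng = np.random.default_rng(seed)
+     # Erdos-Renyi graph with d * edges_per_node edges, oriented into a DAG
+     # along a random permutation of the vertices
+     G = nx.gnm_random_graph(d, edges_per_node * d, seed=seed)
+     order = rng.permutation(d)
+     A = np.zeros((d, d))
+     for u, v in G.edges():
+         i, j = sorted((u, v), key=lambda x: order[x])
+         A[i, j] = rng.uniform(a, b) * rng.choice([-1.0, 1.0])  # weight in (-b,-a) U (a,b)
+     # weighted transitive closure: I + A_bar = (I - A)^(-1) for nilpotent A
+     A_bar = np.linalg.inv(np.eye(d) - A) - np.eye(d)
+     # sparse spectra C: each entry nonzero with probability p, uniform in (0,1)
+     C = rng.uniform(0.0, 1.0, size=(n, d)) * (rng.random((n, d)) < p)
+     N_f = sigma * rng.standard_normal((n, d))  # spectral (frequency) noise
+     N_s = np.zeros((n, d))                     # signal noise, 0 by default
+     X = (C + N_f) @ (np.eye(d) + A_bar) + N_s
+     return X, A
+ ```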
166
+
167
+ Experiment 1: Different application scenarios. Table 1 compares MÖBIUS to five prior algorithms using the SHD metric. Every row corresponds to a different experiment that alters one particular hyperparameter of the default setting, which is the first row with values as explained above and in the blue column. For example, the second row only changes the graph type from Erdös-Renyi to scale-free, while keeping all other settings. Besides SHD, we also computed the SID and TPR metrics, shown in Appendix B. Note that in the last row, fixed support means that in every execution of the data-generating process we first fix a random support for the spectra, and then assign values to these entries uniformly from $\left\lbrack {0,1}\right\rbrack$.
168
+
169
+ Since the graphs have 80 edges, an SHD $\ll {80}$ can be considered good and an SHD beyond 80 a failure. Overall, MÖBIUS achieves very low SHD compared to the other methods and often even computes the exact solution, whereas none of the others does.
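+
+ For reference, a sketch of the SHD computation (our own helper; SHD counts edge insertions, deletions, and reversals, with a reversal counted once):
+
+ ```python
+ import numpy as np
+
+ def shd(A_true, A_est, omega=0.3):
+     # binarize both graphs with the post-processing threshold omega
+     B_true = (np.abs(A_true) > omega).astype(int)
+     B_est = (np.abs(A_est) > omega).astype(int)
+     diff = np.abs(B_true - B_est)
+     # an edge present in both graphs but with opposite orientation would
+     # otherwise be counted twice; subtract one of the two counts
+     flips = ((diff + diff.T) == 2) & (B_true == 1)
+     return int(diff.sum() - flips.sum())
+ ```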
170
+
171
+ First, we examine scenarios that alter the DAG configuration. For the default settings (row 1), for scale-free graphs (row 2), and for more edges (row 3) MÖBIUS performs best and perfectly detects all edges. As in (Zheng et al., 2018; Ng et al., 2020) we consider different weights for the adjacency matrix to examine the sensitivity to the weight scale (row 4). We see that larger weight bounds slightly affect our performance; our method now comes second after GOLEM.
172
+
173
+ Next, we consider changing hyperparameters that affect the spectra and the data. Higher sparsity (row 5) keeps the performance perfect, whereas lower sparsity (row 6) degrades it significantly, as expected. In contrast, imposing high sparsity is harmful for, e.g., GOLEM and NOTEARS. Higher standard deviation in the spectral noise (row 7) decreases performance overall, but MÖBIUS still performs near-optimally. Changing to Gumbel noise (row 8) keeps the perfect performance. Applying noise only in the spectrum or both in signal and spectrum (rows 9 and 10) mostly maintains performance overall, including our perfect reconstruction. The standardization of data (row 11) is generally known to negatively affect algorithms with continuous objectives (Reisach et al., 2021), as is the case here. For a small number of samples (row 12) or fixed sparsity support (row 13) all methods fail. Overall, MÖBIUS achieves the best SHD in most scenarios, often even recovering the true DAG.
174
+
175
+ ![01963a41-6e19-7856-82cd-6975b38cb416_5_308_236_1183_430_0.jpg](images/01963a41-6e19-7856-82cd-6975b38cb416_5_308_236_1183_430_0.jpg)
176
+
177
+ Figure 2: Plots illustrating performance metrics SID and SHD (lower is better) on the default settings, when both ${\mathbf{N}}_{f},{\mathbf{N}}_{s}$ are non-zero, on 20 samples, and on spectra with fixed support.
+
+ Table 2: Performance on the real data (Sachs et al., 2005).
178
+
179
+ <table><tr><td/><td>SHD $\downarrow$</td><td>SID $\downarrow$</td><td>NNZ</td></tr><tr><td>MÖBIUS</td><td>15</td><td>45</td><td>16</td></tr><tr><td>NOTEARS</td><td>11</td><td>44</td><td>15</td></tr><tr><td>GOLEM-EV</td><td>21</td><td>43</td><td>19</td></tr><tr><td>GOLEM-NV</td><td>12</td><td>47</td><td>10</td></tr><tr><td>sortnregress</td><td>18</td><td>43</td><td>20</td></tr><tr><td>DAG-NoCurl</td><td>22</td><td>44</td><td>23</td></tr><tr><td>LiNGAM</td><td>20</td><td>54</td><td>20</td></tr><tr><td>GES</td><td>13</td><td>57</td><td>8</td></tr><tr><td>fGES</td><td>17</td><td>62</td><td>14</td></tr><tr><td>MMHC</td><td>17</td><td>62</td><td>16</td></tr><tr><td>CAM</td><td>12</td><td>55</td><td>10</td></tr></table>
180
+
181
+ Experiment 2: Varying the number of nodes. In this experiment we benchmark MÖBIUS when varying the number of vertices (and thus the number of edges) in the ground truth DAG for four interesting scenarios of Table 1: default (first row), full noise (row 10), low samples (row 12) and fixed support (row 13). The results are shown in Fig. 2 for the two metrics SHD and SID.
182
+
183
+ In the default settings of Fig. 2a, we see that MÖBIUS achieves optimal performance for each number of nodes. The reason is that the optimization problem (8) is perfectly suited for this setting with exact Fourier-sparsity and a sparse DAG. The same holds in Fig. 2b with added noise in the signal domain. The last two experiments in Figs. 2c and 2d are the most challenging: allowing only few samples or fixing the support, which loses the identifiability guarantees of Theorem 3.1. For few samples MÖBIUS still achieves the best SID and its SHD is close to the best one by NOTEARS, but the numbers are too high to be considered a success. For fixed spectral support all algorithms practically fail to learn the DAG for any number of vertices.
184
+
185
+ Varsortability. In Table 1 we also include the varsortability for each experimental setting. Our measurements for Erdös-Renyi graphs (all rows except row 2) are typically $2 - 5\%$ lower than the measurements reported in (Reisach et al., 2021, Appendix G.1) for linear SEMs, which shows that our experimental setting is not trivial for DAG learning. This can also be deduced from the performance of sortnregress, which overall fails. Note again that for fixed sparsity support (last row) all the methods fail and varsortability is very low. Therefore, in this scenario our data generating process poses a hard problem for DAG learning.
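+
+ A sketch of how varsortability can be computed, following its definition in (Reisach et al., 2021): the fraction of directed paths along which the marginal variance increases, with ties counted as 1/2 (our own implementation):
+
+ ```python
+ import numpy as np
+
+ def varsortability(X, A, tol=1e-9):
+     # X: (n, d) data matrix; A: (d, d) weighted adjacency of the true DAG
+     d = A.shape[0]
+     E = (A != 0).astype(float)
+     Ek = E.copy()                         # Ek[i, j] = number of length-k paths i -> j
+     var = X.var(axis=0)
+     ratio = var[None, :] / var[:, None]   # ratio[i, j] = Var(X_j) / Var(X_i)
+     n_paths, n_ordered = 0.0, 0.0
+     for _ in range(d - 1):
+         n_paths += Ek.sum()
+         n_ordered += (Ek * (ratio > 1 + tol)).sum()
+         n_ordered += 0.5 * (Ek * (np.abs(ratio - 1) <= tol)).sum()
+         Ek = Ek @ E
+     return n_ordered / n_paths
+ ```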
186
+
187
+ Real data. We also execute our method on the causal protein-signaling network provided by (Sachs et al., 2005). The dataset consists of 7466${}^{1,2}$ samples from a network with 11 nodes that represent proteins and 17 edges showing the interactions between them. Even though this network is relatively small, the task of learning it is considered difficult and has been a common benchmark for several prior DAG learning methods (Ng et al., 2020; Gao et al., 2021; Yu et al., 2019; Zheng et al., 2018). We report the performance metrics for all methods in Table 2. It is not clear whether the assumption of Fourier-sparsity holds in this case.
188
+
189
+ ---
190
+
191
+ ${}^{1}$ As in Lachapelle et al. (2019), we only use the first 853 samples.
192
+
193
+ ${}^{2}$ Differences between our results and what is reported in the literature are due to a different choice of hyperparameters and to the use of 853 samples, while others might utilize the full dataset.
194
+
195
+ ---
196
+
197
+ The best SID is achieved by GOLEM-EV and sortnregress, which, however, have higher SHD. NOTEARS has the best SHD of 11, which is slightly better than ours of 15. Overall, MÖBIUS performs reasonably well, achieving the number of edges closest to the true 17 (NNZ = 16), with an SHD of 15 and an SID of 45.
198
+
199
+ ## B ADDITIONAL EXPERIMENTAL RESULTS
200
+
201
+ In addition to Fig. 2 we include the plots of Fig. 3 for the four experiments of interest that vary the number of vertices of the ground-truth DAG. The plots additionally contain the metrics TPR and NNZ, and the NMSE of the weighted approximation of the adjacency matrix with respect to the true one. The plots also include the methods LiNGAM, fGES, MMHC, and CAM.
202
+
203
+ ![01963a41-6e19-7856-82cd-6975b38cb416_6_304_776_1175_989_0.jpg](images/01963a41-6e19-7856-82cd-6975b38cb416_6_304_776_1175_989_0.jpg)
204
+
205
+ Figure 3: Plots illustrating performance metrics SHD, SID (lower is better), TPR (higher is better) and NMSE (lower is better) on the default settings, when both noises are non-zero, on 20 samples and on spectra with fixed support.
206
+
207
+ ## C GLOBAL MINIMIZER
208
+
209
+ Theorem C.1. Let $\mathbf{A}$ be the weighted adjacency matrix of the true DAG and assume $\widehat{\mathbf{A}}$ is the optimal solution of the optimization problem
210
+
211
+ $$
212
+ \mathop{\min }\limits_{{\mathbf{A} \in {\mathbb{R}}^{d \times d}}}\parallel \mathbf{X} - \mathbf{{XA}}{\parallel }_{0}\;\text{ s.t. }\;h\left( \mathbf{A}\right) = 0 \tag{11}
213
+ $$
214
+
215
+ We assume that the data $\mathbf{X}$ are generated from spectra $\mathbf{C}$, each entry of which is independently non-zero with probability $p$, via the equation
216
+
217
+ $$
218
+ \mathbf{X} = \mathbf{C}\left( {\mathbf{I} + \overline{\mathbf{A}}}\right) \tag{12}
219
+ $$
220
+
221
+ Assume that the number of samples $n$ satisfies
222
+
223
+ $$
224
+ n \geq \frac{{2}^{{3d} - 2}d\left( {d - 1}\right) }{{\left( 1 - \delta \right) }^{2}{p}^{k}{\left( 1 - p\right) }^{d - k}} \tag{13}
225
+ $$
226
+
227
+ where $k = \lfloor {dp}\rfloor$ and
228
+
229
+ $$
230
+ \delta \geq \max \left\{ {\frac{1}{{p}^{k}{\left( 1 - p\right) }^{d - k}\left( \begin{array}{l} d \\ k \end{array}\right) }\sqrt{\frac{1}{2}\ln \left( \frac{1}{\epsilon }\right) },\left( \begin{array}{l} d \\ k \end{array}\right) \sqrt{\frac{1}{2}\ln \left( \frac{\left( \begin{array}{l} d \\ k \end{array}\right) }{\epsilon }\right) }}\right\} . \tag{14}
231
+ $$
232
+
233
+ Then with probability ${\left( 1 - \epsilon \right) }^{2}$ the solution to the optimization problem coincides with the true DAG, namely $\widehat{\mathbf{A}} = \mathbf{A}$ .
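+
+ To get a sense of the scale of this requirement (our own arithmetic, for illustration): for the default experimental setting $d = {40}$ and $p = {0.3}$ , so $k = {12}$ , the bound (13) evaluates to
+
+ $$
+ n \geq \frac{{2}^{118} \cdot {40} \cdot {39}}{{\left( 1 - \delta \right) }^{2}{\left( {0.3}\right) }^{12}{\left( {0.7}\right) }^{28}} \approx \frac{2 \times {10}^{49}}{{\left( 1 - \delta \right) }^{2}},
+ $$
+
+ so the theorem should be read as a worst-case guarantee; the experiments with $n = {400}$ operate far below this regime.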
234
+
235
+ Remark C.2. The reason we choose $k = \lfloor {dp}\rfloor$ is that, because of the Bernoulli trials for the non-zero spectral values, the expected cardinality of the support is ${dp}$, so the support size concentrates around $k = \lfloor {dp}\rfloor$. This concentration is desirable, since in the proof we need all possible patterns with support of size $k$.
236
+
237
+ We begin the proof with some important observations and definitions.
238
+
239
+ Lemma C.3. If $\overline{\widehat{\mathbf{A}}} = \overline{\mathbf{A}}$ then $\widehat{\mathbf{A}} = \mathbf{A}$.
240
+
241
+ Proof. We have that
242
+
243
+ $$
244
+ \mathbf{I} + \overline{\widehat{\mathbf{A}}} = \mathbf{I} + \overline{\mathbf{A}} \Leftrightarrow {\left( \mathbf{I} + \overline{\widehat{\mathbf{A}}}\right) }^{-1} = {\left( \mathbf{I} + \overline{\mathbf{A}}\right) }^{-1} \Leftrightarrow \left( {\mathbf{I} - \widehat{\mathbf{A}}}\right) = \left( {\mathbf{I} - \mathbf{A}}\right) \Leftrightarrow \widehat{\mathbf{A}} = \mathbf{A} \tag{15}
245
+ $$
246
+
247
+ Definition C.4. We define $\widehat{\mathbf{C}} = \mathbf{X} - \mathbf{X}\widehat{\mathbf{A}}$ as the spectra corresponding to the optimal adjacency matrix.
+
+ Definition C.5. Let $S \subset \{ 1,2,3,\ldots , d\}$ be a set of indices. We say that a spectrum $\mathbf{c}$ has support $S$ if ${c}_{i} = 0$ for $i \in \left\lbrack d\right\rbrack \smallsetminus S$.
248
+
249
+ Definition C.6. For a given support $S$ we consider the set $R \subset \left\lbrack n\right\rbrack$ of the rows of $\mathbf{C}$ that have support $S$ . Then, ${\mathbf{C}}_{R},{\widehat{\mathbf{C}}}_{R}$ denote the submatrices consisting of the rows with indices in $R$ .
250
+
251
+ Lemma C.7. For any row subset $R \subset \left\lbrack n\right\rbrack$ we have that $\operatorname{rank}\left( {\widehat{\mathbf{C}}}_{R}\right) = \operatorname{rank}\left( {\mathbf{C}}_{R}\right)$.
252
+
253
+ Proof. We have that
254
+
255
+ $$
256
+ \widehat{\mathbf{C}}\left( {\mathbf{I} + \overline{\widehat{\mathbf{A}}}}\right) = \mathbf{X} = \mathbf{C}\left( {\mathbf{I} + \overline{\mathbf{A}}}\right) \tag{16}
257
+ $$
258
+
259
+ Therefore, since both $\mathbf{A},\widehat{\mathbf{A}}$ are acyclic and $\mathbf{I} + \overline{\widehat{\mathbf{A}}},\mathbf{I} + \overline{\mathbf{A}}$ are invertible we have that
260
+
261
+ $$
262
+ {\widehat{\mathbf{C}}}_{R}\left( {\mathbf{I} + \overline{\widehat{\mathbf{A}}}}\right) = {\mathbf{C}}_{R}\left( {\mathbf{I} + \overline{\mathbf{A}}}\right) \Leftrightarrow \operatorname{rank}\left( {\widehat{\mathbf{C}}}_{R}\right) = \operatorname{rank}\left( {\mathbf{C}}_{R}\right) \tag{17}
263
+ $$
264
+
265
+ Lemma C.8. For any row subset $R \subset \left\lbrack n\right\rbrack$ of spectra $\mathbf{C}$ with the same support $S$ such that $\left| S\right| = k$ and $\left| R\right| = r \geq k$ the nonzero columns of ${\mathbf{C}}_{R}$ are linearly independent with probability 1 and therefore $\operatorname{rank}\left( {\mathbf{C}}_{R}\right) = k$ .
266
+
267
+ Proof. Assume ${\mathbf{c}}_{1},\ldots ,{\mathbf{c}}_{k}$ are the non-zero columns of ${\mathbf{C}}_{R}$. Then each ${\mathbf{c}}_{i}$ is a vector of dimension $r$, each entry of which is sampled uniformly at random from the range $\left\lbrack {{0.5},1}\right\rbrack$. Given any $k - 1$ vectors from ${\mathbf{c}}_{1},\ldots ,{\mathbf{c}}_{k}$, their linear span forms a subspace of dimension at most $k - 1$. However, since every ${\mathbf{c}}_{i}$ is sampled uniformly at random from ${\left\lbrack {0.5},1\right\rbrack }^{r}$ and $r \geq k > k - 1$, the event that ${\mathbf{c}}_{i}$ lies in the linear span of the other $k - 1$ vectors has measure 0. Therefore, the required result holds with probability 1.
268
+
269
+ Lemma C.9. With probability ${\left( 1 - \epsilon \right) }^{2}$, for every different support $S$ with $\left| S\right| = k$ there is a set of rows $R$ such that ${\mathbf{C}}_{R}$ has support $S$ and
270
+
271
+ $$
272
+ \left| R\right| > {2}^{{3d} - 2}d\left( {d - 1}\right) \tag{18}
273
+ $$
274
+
275
+ Proof. Set $l = {2}^{{3d} - 2}d\left( {d - 1}\right)$ and $K = \frac{l\left( \begin{matrix} d \\ k \end{matrix}\right) }{\left( 1 - \delta \right) }$. Also let $N$ be the random variable representing the number of spectra $\mathbf{c}$ with $\left| {\operatorname{supp}\left( \mathbf{c}\right) }\right| = k$ and ${N}_{i}$ the number of repetitions of the $i$ -th $k$ -support pattern, $i = 1,\ldots ,\left( \begin{array}{l} d \\ k \end{array}\right)$.
276
+
277
+ We first use conditional probability.
278
+
279
+ $$
280
+ \mathbb{P}\left( {N \geq K \cap {N}_{i} \geq l,\forall i}\right) = \mathbb{P}\left( {{N}_{i} \geq l,\forall i \mid N \geq K}\right) \mathbb{P}\left( {N \geq K}\right) \tag{19}
281
+ $$
282
+
283
+ Now we will show that $\mathbb{P}\left( {N \geq K}\right) \geq \left( {1 - \epsilon }\right)$ and $\mathbb{P}\left( {{N}_{i} \geq l,\forall i \mid N \geq K}\right) \geq \left( {1 - \epsilon }\right)$ using Chernoff bounds. From Hoeffding's inequality we get:
284
+
285
+ $$
286
+ \mathbb{P}\left( {N \leq \left( {1 - \delta }\right) \mu }\right) \leq {e}^{-2{\delta }^{2}{\mu }^{2}/{n}^{2}} \tag{20}
287
+ $$
288
+
289
+ where $\mu = n{p}^{k}{\left( 1 - p\right) }^{d - k}\left( \begin{array}{l} d \\ k \end{array}\right)$ is the expected value of $N$, $\delta \geq \frac{1}{{p}^{k}{\left( 1 - p\right) }^{d - k}\left( \begin{array}{l} d \\ k \end{array}\right) }\sqrt{\frac{1}{2}\ln \left( \frac{1}{\epsilon }\right) }$, and $n \geq \frac{1}{1 - \delta }\frac{K}{{p}^{k}{\left( 1 - p\right) }^{d - k}\left( \begin{array}{l} d \\ k \end{array}\right) }$, so
290
+
291
+ $$
292
+ \mathbb{P}\left( {N \leq K}\right) \leq \mathbb{P}\left( {N \leq \left( {1 - \delta }\right) \mu }\right) \leq {e}^{-2{\delta }^{2}{\mu }^{2}/{n}^{2}} \leq \epsilon \Leftrightarrow \mathbb{P}\left( {N \geq K}\right) \geq 1 - \epsilon \tag{21}
293
+ $$
294
+
295
+ Now given that we have at least $K$ spectra with support of size exactly $k$, we will show that, on exactly $K$ such spectra, each different pattern will appear at least $l$ times with probability $\left( {1 - \epsilon }\right)$. From the union bound,
298
+
299
+ $$
300
+ \mathbb{P}\left( {\mathop{\bigcup }\limits_{i}{N}_{i} \leq l}\right) \leq \mathop{\sum }\limits_{i}\mathbb{P}\left( {{N}_{i} \leq l}\right) \tag{22}
301
+ $$
302
+
303
+ So we need to show that $\mathbb{P}\left( {{N}_{i} \leq l}\right) \leq \frac{\epsilon }{\left( \begin{matrix} d \\ k \end{matrix}\right) }$. We again use Hoeffding's inequality.
304
+
305
+ $$
306
+ \mathbb{P}\left( {{N}_{i} \leq \left( {1 - \delta }\right) \mu }\right) \leq {e}^{-2{\delta }^{2}{\mu }^{2}/{K}^{2}} \tag{23}
307
+ $$
308
+
309
+ where the expected value is $\mu = \frac{K}{\left( \begin{array}{l} d \\ k \end{array}\right) } = \frac{l}{1 - \delta }$ since now the probability is uniform over all possible $k$ -sparsity patterns. Given $\delta \geq \left( \begin{matrix} d \\ k \end{matrix}\right) \sqrt{\frac{1}{2}\ln \left( \frac{\left( \begin{matrix} d \\ k \end{matrix}\right) }{\epsilon }\right) }$ we derive
310
+
311
+ $$
312
+ \mathbb{P}\left( {{N}_{i} \leq l}\right) = \mathbb{P}\left( {{N}_{i} \leq \left( {1 - \delta }\right) \mu }\right) \leq {e}^{-2{\delta }^{2}{\mu }^{2}/{K}^{2}} \leq \frac{\epsilon }{\left( \begin{array}{l} d \\ k \end{array}\right) } \tag{24}
313
+ $$
314
+
315
+ The result follows.
316
+
317
+ Lemma C.10. Assume that the number of samples $n$ is large enough so that for every different support $S$ there is a set of rows $R$ such that ${\mathbf{C}}_{R}$ has support $S$ and $\left| R\right| > {2}^{d}l$, where
318
+
319
+ $$
320
+ l > {2}^{{2d} - 2}d\left( {d - 1}\right) = {2}^{d}\mathop{\sum }\limits_{{k = 2}}^{d}\left( \begin{array}{l} d \\ k \end{array}\right) k\left( {k - 1}\right) \tag{25}
321
+ $$
322
+
323
+ Then there exists a set of rows $\widehat{R}$ such that $\left| \widehat{R}\right| > k$, ${\mathbf{C}}_{\widehat{R}}$ has support $S$ and ${\widehat{\mathbf{C}}}_{\widehat{R}}$ has support $\widehat{S}$ with $\left| \widehat{S}\right| = k$. Moreover, the non-zero columns of ${\mathbf{C}}_{\widehat{R}}$ and of ${\widehat{\mathbf{C}}}_{\widehat{R}}$ each form a linearly independent set of vectors. In words, there are enough data that we can always find pairs of corresponding spectral submatrices, each consisting of $k$ non-zero columns.
324
+
325
+ Proof. According to the assumption there exists a row-submatrix ${\mathbf{C}}_{R}$ where all rows have support $S$ and $\left| R\right| > {2}^{d}l$. Group the rows of ${\widehat{\mathbf{C}}}_{R}$ by their supports, of which there are at most ${2}^{d}$. Take any such group ${R}^{\prime } \subset R$ of rows of $\widehat{\mathbf{C}}$ that all have the same support ${S}^{\prime }$. If $\left| {R}^{\prime }\right| \geq k$ then from Lemma C.7, $\operatorname{rank}\left( {\widehat{\mathbf{C}}}_{{R}^{\prime }}\right) = \operatorname{rank}\left( {\mathbf{C}}_{{R}^{\prime }}\right) = k$, so ${\widehat{\mathbf{C}}}_{{R}^{\prime }}$ will have at least $k$ non-zero columns, and since the support is fixed, it will have at least $k$ non-zero elements in each row, which means at least as many non-zeros as ${\mathbf{C}}_{{R}^{\prime }}$. Therefore ${\widehat{\mathbf{C}}}_{{R}^{\prime }}$ can only have fewer non-zero elements if $\left| {R}^{\prime }\right| < k$, and in that case ${\mathbf{C}}_{{R}^{\prime }}$ has at most $k\left( {k - 1}\right)$ more elements. Counting, over all $k = 1,\ldots , d$, all different supports of ${\widehat{\mathbf{C}}}_{{R}^{\prime }}$ for all possible supports of ${\mathbf{C}}_{R}$, this gives that $\widehat{\mathbf{C}}$ can have at most $\mathop{\sum }\limits_{{k = 2}}^{d}\left( \begin{array}{l} d \\ k \end{array}\right) {2}^{d}k\left( {k - 1}\right)$ fewer non-zero elements than $\mathbf{C}$.
326
+
327
+ Due to the pigeonhole principle, there exists $\widehat{R} \subset R$ with $\left| \widehat{R}\right| > l$ such that the rows of ${\widehat{\mathbf{C}}}_{\widehat{R}}$ all have the same support $\widehat{S}$, not necessarily equal to $S$. According to our previous explanation, ${\widehat{\mathbf{C}}}_{\widehat{R}}$ needs to have at least $k$ non-zero columns. If it had $k + 1$ non-zero columns, this would give at least $l$ more non-zero elements, but
328
+
329
+ $$
330
+ l > {2}^{{2d} - 2}d\left( {d - 1}\right) = {2}^{d}d\left( {d - 1}\right) {2}^{d - 2} = {2}^{d}\mathop{\sum }\limits_{{k = 2}}^{d}\left( \begin{array}{l} d - 2 \\ k - 2 \end{array}\right) d\left( {d - 1}\right) = {2}^{d}\mathop{\sum }\limits_{{k = 2}}^{d}\left( \begin{array}{l} d \\ k \end{array}\right) k\left( {k - 1}\right) \tag{26}
331
+ $$
332
+
333
+ So then $\parallel \widehat{\mathbf{C}}{\parallel }_{0} > \parallel \mathbf{C}{\parallel }_{0}$, which is a contradiction due to the optimality of $\widehat{\mathbf{A}}$. Therefore, ${\widehat{\mathbf{C}}}_{\widehat{R}}$ has exactly $k$ non-zero columns, which necessarily need to be linearly independent in order to have $\operatorname{rank}\left( {\widehat{\mathbf{C}}}_{\widehat{R}}\right) = \operatorname{rank}\left( {\mathbf{C}}_{\widehat{R}}\right) = k$. The linear independence of the columns of ${\mathbf{C}}_{\widehat{R}}$ follows from Lemma C.8 since $l \gg k$.
334
+
335
+ Definition C.11. A pair ${\mathbf{C}}_{R},{\widehat{\mathbf{C}}}_{R}$ constructed according to Lemma C.10, having fixed supports $S,\widehat{S}$ respectively, each of cardinality $k$, with $\left| R\right|$ large enough so that $\operatorname{rank}\left( {\widehat{\mathbf{C}}}_{R}\right) = \operatorname{rank}\left( {\mathbf{C}}_{R}\right) = k$, will be called a $k$ -pair of submatrices.
336
+
337
+ Remark C.12. For notational simplicity we will drop the index $R$ whenever the choice of the rows according to the sparsity pattern $S$ is clear from the context.
338
+
339
+ The complete Theorem 3.2 follows after combining the following two propositions.
340
+
341
+ Proposition C.13. If the data $\mathbf{X}$ are indexed such that $\mathbf{A}$ is upper triangular, then so is $\widehat{\mathbf{A}}$.
342
+
343
+ Remark C.14. Notice that $\mathbf{A}$ is upper triangular if and only if $\overline{\mathbf{A}}$ is upper triangular. This holds as the polynomial $1 + x + {x}^{2} + \ldots + {x}^{d - 2}$ doesn’t have real roots. Thus we only need to show that ${\overline{\widehat{a}}}_{ji} = 0$ for all $i < j$.
344
+
345
+ Before we proceed to the proof of Prop. C.13 we first prove the following helpful lemma.
346
+
347
+ Lemma C.15. Consider $\mathbf{C},\widehat{\mathbf{C}}$ a $k$ -pair. Also let $\mathbf{X}$ be the corresponding submatrix of data. If ${\mathbf{X}}_{ : , i} = \mathbf{0}$ then
348
+
349
+ $$
350
+ {\widehat{\mathbf{C}}}_{ : , i} = \mathbf{0}\text{ and }{\overline{\widehat{a}}}_{ji} = 0\;\forall j \in \operatorname{supp}\left( \widehat{\mathbf{C}}\right) \tag{27}
351
+ $$
352
+
353
+ Proof.
354
+
355
+ $$
356
+ \mathbf{0} = {\mathbf{X}}_{ : , i} = \mathop{\sum }\limits_{{j = 1}}^{d}{\overline{\widehat{a}}}_{ji}{\widehat{\mathbf{C}}}_{ : , j} + {\widehat{\mathbf{C}}}_{ : , i} = \mathop{\sum }\limits_{{j \in \operatorname{supp}\left( \widehat{\mathbf{C}}\right) }}{\overline{\widehat{a}}}_{ji}{\widehat{\mathbf{C}}}_{ : , j} + {\widehat{\mathbf{C}}}_{ : , i} \tag{28}
357
+ $$
358
+
359
+ If ${\widehat{\mathbf{C}}}_{ : , i} \neq \mathbf{0}$ then $i \in \operatorname{supp}\left( \widehat{\mathbf{C}}\right)$, and the expression above constitutes a vanishing linear combination of the support columns with not all coefficients zero. This contradicts Lemma C.8. Thus ${\widehat{\mathbf{C}}}_{ : , i} = \mathbf{0}$ and ${\overline{\widehat{a}}}_{ji} = 0\;\forall j \in \operatorname{supp}\left( \widehat{\mathbf{C}}\right)$. We are now ready to prove Prop. C.13.
360
+
361
+ Proof. We first choose a $k$ -pair $\mathbf{C},\widehat{\mathbf{C}}$ such that the support $S$ of $\mathbf{C}$ is concentrated in the last $k$ columns, namely ${\mathbf{C}}_{ : , i} = \mathbf{0}$ for $i = 1,\ldots , d - k$. Then the corresponding data submatrix $\mathbf{X}$ will necessarily have the same sparsity pattern, since the values are computed according to predecessors, which in our case lie at smaller indices. Thus ${\mathbf{X}}_{ : , i} = \mathbf{0}$ for all $i = 1,\ldots , d - k$, and according to Lemma C.15 we get that
362
+
363
+ $$
364
+ {\overline{\widehat{a}}}_{ji} = 0\text{, for all}1 \leq i \leq d - k\text{and}d - k + 1 \leq j \leq d\text{.} \tag{29}
365
+ $$
366
+
367
+ We notice that the desired condition is fulfilled for $i = d - k$, i.e., node $d - k$ receives no non-zero influence from a node with larger index. We now prove the same sequentially for $i = d - k - 1, d - k - 2,\ldots ,1$ by moving the leftmost index of the support $S$ one position to the left each time. We prove the following statement by induction:
368
+
369
+ $$
370
+ P\left( l\right) : {\overline{\widehat{a}}}_{ji} = 0\text{ for }l < j, i < j \leq d - k \tag{30}
371
+ $$
372
+
373
+ We know that $P\left( {d - k}\right)$ is true and $P\left( 1\right)$ gives the required relation for all indices $i = 1,\ldots , d - k$. Now we assume that $P\left( l\right)$ holds. We pick a $k$ -pair $\mathbf{C},\widehat{\mathbf{C}}$ such that $\mathbf{C}$ has support $S = \{ l, d - k + 2, d - k + 3,\ldots , d\}$. Then ${\mathbf{X}}_{ : , i} = \mathbf{0}$ for all $i < l$, which with Lemma C.15 gives ${\widehat{\mathbf{C}}}_{ : , i} = \mathbf{0}$ for $i < l$, which means $\operatorname{supp}\left( \widehat{\mathbf{C}}\right) \subset \{ l, l + 1,\ldots , d\}$ and ${\overline{\widehat{a}}}_{ji} = 0$ for $j \in \operatorname{supp}\left( \widehat{\mathbf{C}}\right)$ and $i < l$. Note that by the induction hypothesis we have that ${\overline{\widehat{a}}}_{jl} = 0$ for all $l < j$. However, it is true that ${\mathbf{X}}_{ : , l} = {\mathbf{C}}_{ : , l}$ and also
374
+
375
+ $$
376
+ {\mathbf{X}}_{ : , l} = \mathop{\sum }\limits_{{j \in \operatorname{supp}\left( \widehat{\mathbf{C}}\right) }}{\overline{\widehat{a}}}_{jl}{\widehat{\mathbf{C}}}_{ : , j} + {\widehat{\mathbf{C}}}_{ : , l} = {\widehat{\mathbf{C}}}_{ : , l} \tag{31}
377
+ $$
378
+
379
+ Therefore $l \in \operatorname{supp}\left( \widehat{\mathbf{C}}\right)$ and thus ${\overline{\widehat{a}}}_{li} = 0$ for all $i < l$, which combined with $P\left( l\right)$ gives
380
+
381
+ $$
382
+ {\overline{\widehat{a}}}_{ji} = 0\text{for}l - 1 < j, i < j, i \leq d - k \tag{32}
383
+ $$
384
+
385
+ which is exactly $P\left( {l - 1}\right)$ and the induction is complete.
386
+
387
+ Now it remains to show that ${\overline{\widehat{a}}}_{ji} = 0$ for $d - k + 1 \leq i \leq d$ and $i < j$. We will again proceed constructively using induction. This time we will sequentially choose a support $S$ that is concentrated on the last $k + 1$ columns, and at each step we will move the zero column one index to the right. For $l = d - k + 1,\ldots , d$ let us define:
388
+
389
+ $$
390
+ Q\left( l\right) : {\overline{\widehat{a}}}_{jl} = 0\text{for}l < j\text{and}{\overline{\widehat{a}}}_{jl} = {\bar{a}}_{jl}\text{for}d - k \leq j < l \tag{33}
391
+ $$
392
+
393
+ First we show the base case $Q\left( {d - k + 1}\right)$. For this we choose a $k$ -pair $\mathbf{C},\widehat{\mathbf{C}}$ such that $\mathbf{C}$ has support $S = \{ d - k, d - k + 2, d - k + 3,\ldots , d\}$. It is true that ${\mathbf{X}}_{ : , i} = \mathbf{0}$ for $i = 1,\ldots , d - k - 1$, hence ${\widehat{\mathbf{C}}}_{ : , i} = \mathbf{0}$ for $i \leq d - k - 1$, and therefore the node $d - k$ doesn’t have any non-zero parents, since also ${\overline{\widehat{a}}}_{j\left( {d - k}\right) } = 0$ for $d - k < j$ from the previous claim. Therefore ${\widehat{\mathbf{C}}}_{ : , d - k} = {\mathbf{X}}_{ : , d - k} = {\mathbf{C}}_{ : , d - k}$. Also, for $l = d - k + 1$
394
+
395
+ $$
396
+ {\mathbf{X}}_{ : , l} = {\bar{a}}_{d - k, l}{\mathbf{C}}_{ : , d - k} = {\bar{a}}_{d - k, l}{\widehat{\mathbf{C}}}_{ : , d - k} \tag{34}
397
+ $$
398
+
399
+ The equation from $\widehat{\mathbf{A}}$ gives:
400
+
401
+ $$
402
+ {\mathbf{X}}_{ : , l} = \mathop{\sum }\limits_{{j = d - k}}^{d}{\overline{\widehat{a}}}_{jl}{\widehat{\mathbf{C}}}_{ : , j} + {\widehat{\mathbf{C}}}_{ : , l} \Rightarrow \left( {{\bar{a}}_{d - k, l} - {\overline{\widehat{a}}}_{d - k, l}}\right) {\widehat{\mathbf{C}}}_{ : , d - k} + \mathop{\sum }\limits_{{j = d - k + 1}}^{d}{\overline{\widehat{a}}}_{jl}{\widehat{\mathbf{C}}}_{ : , j} + {\widehat{\mathbf{C}}}_{ : , l} = \mathbf{0} \tag{35}
403
+ $$
404
+
405
+ From the linear independence (Lemma C.8) of the support columns of $\widehat{\mathbf{C}}$ we necessarily have ${\widehat{\mathbf{C}}}_{ : , l} = \mathbf{0}$, ${\bar{a}}_{d - k, l} = {\overline{\widehat{a}}}_{d - k, l}$ and ${\overline{\widehat{a}}}_{jl} = 0$ for $l < j$, which gives the base case.
406
+
407
+ For the rest of the induction we proceed in a similar manner. We assume with strong induction that all $Q\left( {d - k + 1}\right) ,\ldots , Q\left( l\right)$ are true and proceed to prove $Q\left( {l + 1}\right)$ . Given these assumptions we have that
408
+
409
+ $$
410
+ {\overline{\widehat{a}}}_{ji} = {\bar{a}}_{ji}\text{for}d - k \leq j < i, d - k \leq i \leq l\text{and}{\overline{\widehat{a}}}_{ji} = 0\text{for}i < j, d - k \leq i \leq l \tag{36}
411
+ $$
412
+
413
+ Consider the spectral support $S = \{ d - k, d - k + 1,\ldots , l, l + 2,\ldots , d\}$ (the $\left( {l + 1}\right)$ -th column is $\mathbf{0}$ ) for the $k$ -pair $\mathbf{C},\widehat{\mathbf{C}}$. Then we have the equations:
414
+
415
+ $$
416
+ {\mathbf{X}}_{ : , d - k} = {\mathbf{C}}_{ : , d - k} = {\widehat{\mathbf{C}}}_{ : , d - k} \tag{37}
417
+ $$
418
+
419
+ $$
420
+ {\mathbf{X}}_{ : , d - k + 1} = {\bar{a}}_{d - k, d - k + 1}{\mathbf{C}}_{ : , d - k} + {\mathbf{C}}_{ : , d - k + 1} = {\overline{\widehat{a}}}_{d - k, d - k + 1}{\widehat{\mathbf{C}}}_{ : , d - k} + {\widehat{\mathbf{C}}}_{ : , d - k + 1} \Rightarrow {\widehat{\mathbf{C}}}_{ : , d - k + 1} = {\mathbf{C}}_{ : , d - k + 1} \tag{38}
+ $$
+
+ $$
+ \vdots \tag{39}
+ $$
426
+
427
+ $$
428
+ {\mathbf{X}}_{ : , l} = \mathop{\sum }\limits_{{j = d - k}}^{{l - 1}}{\bar{a}}_{jl}{\mathbf{C}}_{ : , j} + {\mathbf{C}}_{ : , l} = \mathop{\sum }\limits_{{j = d - k}}^{{l - 1}}{\overline{\widehat{a}}}_{jl}{\widehat{\mathbf{C}}}_{ : , j} + {\widehat{\mathbf{C}}}_{ : , l} \Rightarrow {\widehat{\mathbf{C}}}_{ : , l} = {\mathbf{C}}_{ : , l} \tag{40}
429
+ $$
430
+
431
+ where we used the linear independence lemma and sequentially proved that the spectral columns up to $l$ are equal. The equation for the $\left( {l + 1}\right)$ -th column now becomes:
432
+
433
+ $$
434
+ {\mathbf{X}}_{ : , l + 1} = \mathop{\sum }\limits_{{j = d - k}}^{l}{\bar{a}}_{j, l + 1}{\mathbf{C}}_{ : , j} + {\mathbf{C}}_{ : , l + 1} = \mathop{\sum }\limits_{{j = d - k}}^{d}{\overline{\widehat{a}}}_{j, l + 1}{\widehat{\mathbf{C}}}_{ : , j} + {\widehat{\mathbf{C}}}_{ : , l + 1} \tag{41}
435
+ $$
436
+
437
+ $$
438
+ \Leftrightarrow \mathop{\sum }\limits_{{j = d - k}}^{d}\left( {{\overline{\widehat{a}}}_{j, l + 1} - {\bar{a}}_{j, l + 1}}\right) {\widehat{\mathbf{C}}}_{ : , j} + {\widehat{\mathbf{C}}}_{ : , l + 1} = \mathbf{0} \Rightarrow \left\{ \begin{array}{l} {\overline{\widehat{a}}}_{j, l + 1} = 0\text{ for }l + 1 < j \\ {\overline{\widehat{a}}}_{j, l + 1} = {\bar{a}}_{j, l + 1}\text{ for }j < l + 1 \\ {\widehat{\mathbf{C}}}_{ : , l + 1} = \mathbf{0} \end{array}\right. \tag{42}
439
+ $$
440
+
441
+ where the last set of equalities follows from linear independence. This concludes the induction and the proof.
442
+
443
+ To complete the proof of Theorem 3.2 it remains to show the following proposition.
444
+
445
+ Proposition C.16. If both $\mathbf{A},\widehat{\mathbf{A}}$ are upper triangular then $\widehat{\mathbf{A}} = \mathbf{A}$ .
446
+
447
+ For our proof we use the following definition.
448
+
449
+ Definition C.17. We denote by ${\mathcal{P}}_{k}$ the set of all $k$ -pairs ${\mathbf{C}}_{R},{\widehat{\mathbf{C}}}_{R}$ for all possible support patterns.
450
+
451
+ Now we proceed to the proof.
452
+
453
+ Proof. We will show equivalently that $\overline{\widehat{\mathbf{A}}} = \overline{\mathbf{A}}$ using two inductions. First we show for $l = 1,\ldots , k$ the following statement.
454
+
455
+ $P\left( l\right)$ : For all $k$ -pairs $\mathbf{C},\widehat{\mathbf{C}}$ in ${\mathcal{P}}_{k}$ the first $l$ non-zero columns ${\mathbf{C}}_{ : ,{i}_{1}},{\mathbf{C}}_{ : ,{i}_{2}},\ldots ,{\mathbf{C}}_{ : ,{i}_{l}}$ and ${\widehat{\mathbf{C}}}_{ : ,\widehat{{i}_{1}}},{\widehat{\mathbf{C}}}_{ : ,\widehat{{i}_{2}}},\ldots ,{\widehat{\mathbf{C}}}_{ : ,\widehat{{i}_{l}}}$ are in the same positions, i.e. ${i}_{j} = \widehat{{i}_{j}}$ and
456
+
457
+ - either they are respectively equal ${\mathbf{C}}_{ : ,{i}_{j}} = {\widehat{\mathbf{C}}}_{ : ,{i}_{j}}$
458
+
459
+ - or ${\mathbf{C}}_{ : ,{i}_{l}}$ is in the last possible index, namely ${i}_{l} = d - \left( {l - 1}\right)$
460
+
461
+ For the base case $P\left( 1\right)$, consider a $k$ -pair $\mathbf{C},\widehat{\mathbf{C}}$ and let ${i}_{1}$ be the position of the first non-zero spectral column of $\mathbf{C}$. Then ${\mathbf{X}}_{ : , i} = \mathbf{0}$ for $i < {i}_{1}$ and therefore, from Lemma C.15, ${\widehat{\mathbf{C}}}_{ : , i} = \mathbf{0}$ for $i < {i}_{1}$. Hence
462
+
463
+ $$
464
+ {\mathbf{X}}_{ : ,{i}_{1}} = \mathop{\sum }\limits_{{j < {i}_{1}}}{\overline{\widehat{a}}}_{j{i}_{1}}{\widehat{\mathbf{C}}}_{ : , j} + {\widehat{\mathbf{C}}}_{ : ,{i}_{1}} = {\widehat{\mathbf{C}}}_{ : ,{i}_{1}} \tag{43}
465
+ $$
466
+
467
+ Therefore ${\widehat{\mathbf{C}}}_{ : ,{i}_{1}} = {\mathbf{C}}_{ : ,{i}_{1}}$ and we proved $P\left( 1\right)$ , by satisfying both the positioning and the first requirement.
468
+
469
+ Assuming now that $P\left( l\right)$ holds, we will show $P\left( {l + 1}\right)$. Take any $k$ -pair of ${\mathcal{P}}_{k}$, which we denote by $\mathbf{C},\widehat{\mathbf{C}}$. If ${\mathbf{C}}_{ : ,{i}_{l}}$ is in the last possible position, then necessarily ${\mathbf{C}}_{ : ,{i}_{l + 1}}$ is in the last possible position as well. Moreover, from the induction hypothesis the first $l$ spectral columns are in the same positions. Therefore, in the same manner, ${\widehat{\mathbf{C}}}_{ : ,{i}_{l}}$ is in the last position and so is ${\widehat{\mathbf{C}}}_{ : ,{i}_{l + 1}}$. This case fulfills the desired statement.
470
+
471
+ If ${\mathbf{C}}_{ : ,{i}_{l}}$ is not in the last position, then from the induction hypothesis the first $l$ spectral columns are equal. If ${\mathbf{C}}_{ : ,{i}_{l + 1}}$ is in the last position and the same holds for ${\widehat{\mathbf{C}}}_{ : ,{\widehat{i}}_{l + 1}}$, the requirement is satisfied. Otherwise ${\widehat{i}}_{l + 1} < {i}_{l + 1}$ and the equation for ${\widehat{i}}_{l + 1}$ gives:
472
+
473
+ $$
474
+ {\mathbf{X}}_{ : ,{\widehat{i}}_{l + 1}} = \mathop{\sum }\limits_{{j = 1}}^{l}{\overline{\widehat{a}}}_{{i}_{j},{\widehat{i}}_{l + 1}}{\widehat{\mathbf{C}}}_{ : ,{i}_{j}} + {\widehat{\mathbf{C}}}_{ : ,{\widehat{i}}_{l + 1}} = \mathop{\sum }\limits_{{j = 1}}^{l}{\bar{a}}_{{i}_{j},{\widehat{i}}_{l + 1}}{\mathbf{C}}_{ : ,{i}_{j}} + \mathbf{0} \Leftrightarrow \mathop{\sum }\limits_{{j = 1}}^{l}\left( {{\overline{\widehat{a}}}_{{i}_{j},{\widehat{i}}_{l + 1}} - {\bar{a}}_{{i}_{j},{\widehat{i}}_{l + 1}}}\right) {\widehat{\mathbf{C}}}_{ : ,{i}_{j}} + {\widehat{\mathbf{C}}}_{ : ,{\widehat{i}}_{l + 1}} = \mathbf{0}
475
+ $$
476
+
477
+ (44)
478
+
479
+ According to the linear independence Lemma C.8 we necessarily derive ${\widehat{\mathbf{C}}}_{ : ,{\widehat{i}}_{l + 1}} = \mathbf{0}$, a contradiction. Thus ${\widehat{i}}_{l + 1} = {i}_{l + 1}$ and the induction statement is fulfilled in this case.
482
+
483
+ It remains to consider the case where ${\mathbf{C}}_{ : ,{i}_{l + 1}}$ is not in the last position. Since ${i}_{l + 1}$ is not the last position, there exists a $k$ -pair ${\mathbf{C}}^{\prime },{\widehat{\mathbf{C}}}^{\prime }$ such that the column ${i}_{l + 1}$ is zero and the $\left( {l + 1}\right)$ -th spectral column of ${\mathbf{C}}^{\prime }$ lies at ${i}_{l + 1}^{\prime } > {i}_{l + 1}$. The equation at ${i}_{l + 1}$ for ${\mathbf{X}}^{\prime }$ gives:
484
+
485
+ $$
486
+ {\mathbf{X}}_{ : ,{i}_{l + 1}}^{\prime } = \mathop{\sum }\limits_{{j = 1}}^{l}{\bar{a}}_{{i}_{j},{i}_{l + 1}}{\mathbf{C}}_{ : ,{i}_{j}}^{\prime } = \mathop{\sum }\limits_{{j = 1}}^{l}{\overline{\widehat{a}}}_{{i}_{j},{\widehat{i}}_{l + 1}}{\widehat{\mathbf{C}}}_{ : ,{i}_{j}}^{\prime } + {\widehat{\mathbf{C}}}_{ : ,{\widehat{i}}_{l + 1}}^{\prime } \tag{45}
487
+ $$
488
+
489
+ For the pair ${\mathbf{C}}^{\prime },{\widehat{\mathbf{C}}}^{\prime }$ the induction hypothesis holds, thus we derive:
490
+
491
+ $$
492
+ \mathop{\sum }\limits_{{j = 1}}^{l}\left( {{\overline{\widehat{a}}}_{{i}_{j},{\widehat{i}}_{l + 1}} - {\bar{a}}_{{i}_{j},{i}_{l + 1}}}\right) {\widehat{\mathbf{C}}}_{ : ,{i}_{j}}^{\prime } + {\widehat{\mathbf{C}}}_{ : ,{\widehat{i}}_{l + 1}}^{\prime } = \mathbf{0} \tag{46}
493
+ $$
494
+
495
+ Therefore, Lemma C.8 gives ${\overline{\widehat{a}}}_{{i}_{j},{\widehat{i}}_{l + 1}} = {\bar{a}}_{{i}_{j},{i}_{l + 1}}$ for all $j = 1,\ldots , l$ and returning to the equation for ${\mathbf{X}}_{ : ,{i}_{l + 1}}$ we derive:
496
+
497
+ $$
498
+ {\mathbf{X}}_{ : ,{i}_{l + 1}} = \mathop{\sum }\limits_{{j = 1}}^{l}{\bar{a}}_{{i}_{j},{i}_{l + 1}}{\mathbf{C}}_{ : ,{i}_{j}} + {\mathbf{C}}_{ : ,{i}_{l + 1}} = \mathop{\sum }\limits_{{j = 1}}^{l}{\overline{\widehat{a}}}_{{i}_{j},{\widehat{i}}_{l + 1}}{\widehat{\mathbf{C}}}_{ : ,{i}_{j}} + {\widehat{\mathbf{C}}}_{ : ,{\widehat{i}}_{l + 1}} \tag{47}
499
+ $$
500
+
501
+ $$
502
+ \overset{P\left( l\right) }{ = }\mathop{\sum }\limits_{{j = 1}}^{l}{\bar{a}}_{{i}_{j},{\widehat{i}}_{l + 1}}{\mathbf{C}}_{ : ,{i}_{j}} + {\widehat{\mathbf{C}}}_{ : ,{\widehat{i}}_{l + 1}} \tag{48}
503
+ $$
504
+
505
+ $$
506
+ \Rightarrow {\widehat{\mathbf{C}}}_{ : ,{\widehat{i}}_{l + 1}} = {\mathbf{C}}_{ : ,{\widehat{i}}_{l + 1}} \tag{49}
507
+ $$
508
+
509
+ which completes the induction step. Notice that $P\left( k\right)$ gives that for all $k$ -pairs, the $k$ spectral columns are in the same position. Given this fact we will now show that $\widehat{\mathbf{A}} = \mathbf{A}$ . We will prove by induction that for $l = k - 1,\ldots ,1,0$
510
+
511
+ $$
512
+ Q\left( l\right) : {\overline{\widehat{a}}}_{ij} = {\bar{a}}_{ij}\text{ for }1 \leq i < j \leq d - l \tag{50}
513
+ $$
514
+
515
+ To prove $Q\left( {k - 1}\right)$ we choose all the $k$ -pairs $\mathbf{C},\widehat{\mathbf{C}}$ , such that ${\mathbf{C}}_{ : ,{i}_{2}}$ is in the last possible position, ${i}_{2} = d - k + 2$ . Then for ${i}_{1} \leq d - k$ the columns ${\mathbf{C}}_{ : ,{i}_{1}},{\widehat{\mathbf{C}}}_{ : ,{i}_{1}}$ lie at the same position and are equal. Choosing ${i}_{1} = i$ and computing the equation for ${\mathbf{X}}_{ : , j}$ where $i < j \leq d - k + 1$ gives:
516
+
517
+ $$
518
+ {\mathbf{X}}_{ : , j} = {\bar{a}}_{ij}{\mathbf{C}}_{ : , i} = {\overline{\widehat{a}}}_{ij}{\widehat{\mathbf{C}}}_{ : , i} = {\overline{\widehat{a}}}_{ij}{\mathbf{C}}_{ : , i} \tag{51}
519
+ $$
520
+
521
+ Therefore ${\overline{\widehat{a}}}_{ij} = {\bar{a}}_{ij}$ for all $1 \leq i < j \leq d - \left( {k - 1}\right)$ and $Q\left( {k - 1}\right)$ is satisfied. Next, assume that $Q\left( {k - l}\right)$ is true. We want to show $Q\left( {k - l - 1}\right)$. Similarly to the base case, we consider all $k$ -pairs such that ${\mathbf{C}}_{ : ,{i}_{l + 2}}$ lies in its last possible position ${i}_{l + 2} = d - k + l + 2$, and ${i}_{l + 1} \leq d - k + l$. Since the $\left( {l + 1}\right)$ -th column is not in the last position, from the previous induction we have that:
522
+
523
+ $$
524
+ \left\{ \begin{array}{l} {\widehat{\mathbf{C}}}_{ : ,{i}_{1}} = {\mathbf{C}}_{ : ,{i}_{1}} \\ {\widehat{\mathbf{C}}}_{ : ,{i}_{2}} = {\mathbf{C}}_{ : ,{i}_{2}} \\ \vdots \\ {\widehat{\mathbf{C}}}_{ : ,{i}_{l + 1}} = {\mathbf{C}}_{ : ,{i}_{l + 1}} \end{array}\right. \tag{52}
525
+ $$
526
+
527
+ The equation for $d - k + l + 1$ gives:
528
+
529
+ $$
530
+ {\mathbf{X}}_{ : , d - k + l + 1} = \mathop{\sum }\limits_{{j = 1}}^{{l + 1}}{\bar{a}}_{{i}_{j}, d - k + l + 1}{\mathbf{C}}_{ : ,{i}_{j}} \tag{53}
531
+ $$
532
+
533
+ $$
534
+ = \mathop{\sum }\limits_{{j = 1}}^{{l + 1}}{\overline{\widehat{a}}}_{{i}_{j}, d - k + l + 1}{\widehat{\mathbf{C}}}_{ : ,{i}_{j}} \tag{54}
535
+ $$
536
+
537
+ $$
538
+ = \mathop{\sum }\limits_{{j = 1}}^{{l + 1}}{\overline{\widehat{a}}}_{{i}_{j}, d - k + l + 1}{\mathbf{C}}_{ : ,{i}_{j}} \tag{55}
539
+ $$
540
+
541
+ $$
542
+ \overset{\text{Lemma C.8}}{\Rightarrow}{\overline{\widehat{a}}}_{{i}_{j}, d - k + l + 1} = {\bar{a}}_{{i}_{j}, d - k + l + 1} \tag{56}
543
+ $$
544
+
545
+ By choosing all such possible $k$ -pairs the indices ${i}_{j}$ span all possibilities $1 \leq i < d - k + l + 1$ . Combining this with $Q\left( {k - l}\right)$ we get that $Q\left( {k - l - 1}\right)$ is true and the desired result follows.
papers/GSK/GSK 2023/GSK 2023 CBC/TOaPl9tXlmD/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,117 @@
1
+ § LEARNING GENE REGULATORY NETWORKS UNDER FEW ROOT CAUSES ASSUMPTION.
2
+
3
+ Anonymous authors
4
+
5
+ Paper under double-blind review
6
+
7
+ § ABSTRACT
8
+
9
+ We present a novel directed acyclic graph (DAG) learning method based on a causal form of Fourier-sparsity. Our ideas connect a theory of causal Fourier analysis with data generated by a structural equation model (SEM). We show that data generated by linear SEMs can be characterized in the Fourier domain as having a dense spectrum of root causes with random coefficients. We then propose the new problem of learning DAGs from data with sparse spectra (Fourier-sparsity) or, equivalently, few root causes. We provide proofs of identifiability in the new setting and, moreover, show that the true DAG is the global minimizer of the ${L}^{0}$ -norm of the approximated spectra. Our method is applied to the CausalBench Challenge showing superior performance over the baselines.
10
+
11
+ § 1 INTRODUCTION
12
+
13
+ For causality, Fourier-sparsity translates into the data being generated from a few spectral coefficients or (in a sense to be defined) few root causes. To formally analyze this we express the linear SEM equation based on the recently proposed theory of causal Fourier analysis on DAGs (Seifert et al. 2022b;a). We first show that data produced by linear structural equation models (SEMs) in prior work can be viewed as having a dense, random Fourier spectrum of causes, and then extend them to include the case where the data is sparse in the Fourier domain, i.e., has few root causes. It is worth noting that for prior, classical forms of Fourier transforms, Fourier-sparsity has played a significant role, including for the discrete Fourier transform (DFT) (Hassanieh, 2018), the discrete cosine transform (DCT) where it enables JPEG compression (Wallace, 1991), and the Walsh-Hadamard transform (WHT) for estimating set functions Stobbe & Krause (2012); Amrollahi et al. (2019).
14
+
15
+ Contributions. For this competition we provide the following contributions.
16
+
17
+ * We analyze linear SEMs in the Fourier domain and show that they yield dense spectra. We then pose the new assumption of data generated from a few root causes.
18
+
19
+ * For DAG learning under the few-root-causes assumption, we propose a novel, linear DAG learning method, called MÖBIUS, based on the minimization of the ${L}^{1}$ -norm of the approximated spectrum. We provide theoretical guarantees for our method.
20
+
21
+ * We evaluate our method on the CausalBench dataset and show that MÖBIUS offers improvement over prior DAG learning methods.
22
+
23
+ § 2 MOTIVATION
24
+
25
+ Consider a DAG $\mathcal{G} = \left( {V,E}\right)$ with $\left| V\right| = d$ vertices, $E$ the set of directed edges, and no self-loops. We say $j$ is a parent of $i$ whenever $\left( {j,i}\right) \in E$ while $j$ is an ancestor of $i$ means there is a path from $j$ to $i$ . The vertices are sorted topologically and we set accordingly $V = \{ 1,2,\ldots ,d\}$ . Further, we assume a weighted adjacency matrix $\mathbf{A} = {\left( {a}_{ij}\right) }_{i,j \in V}$ of the graph, where ${a}_{ij} = 0$ if there is no edge.
26
+
27
+ Linear SEM. A data matrix $\mathbf{X} \in {\mathbb{R}}^{n \times d}$ consisting of $n$ signals (as rows) of dimension $d$ indexed by the DAG $\mathcal{G}$ satisfies a linear SEM (Shimizu et al.,2006; Zheng et al.,2018; Ng et al.,2020) if
28
+
29
+ $$
30
+ \mathbf{X} = \mathbf{{XA}} + \mathbf{N}. \tag{1}
31
+ $$
32
+
33
+ where the matrix $\mathbf{N}$ consists of independent random noise samples. Isolating $\mathbf{X}$ in (1) yields:
34
+
35
+ $$
36
+ \mathbf{X} = \mathbf{N}{\left( \mathbf{I} - \mathbf{A}\right) }^{-1} = \mathbf{N}\left( {\mathbf{I} + \mathbf{A} + \ldots + {\mathbf{A}}^{d - 1}}\right) = \mathbf{N}\left( {\mathbf{I} + \overline{\mathbf{A}}}\right) \tag{2}
37
+ $$
38
+
39
+ where $\overline{\mathbf{A}} = \mathbf{A} + {\mathbf{A}}^{2} + \ldots + {\mathbf{A}}^{d - 1}$ denotes the weighted transitive closure of the adjacency matrix. Eq. (2) can be viewed as the closed-form solution of Eq. (1).
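+
+ A minimal sketch of computing $\overline{\mathbf{A}}$ (our own helper, using that $\mathbf{I} + \overline{\mathbf{A}} = {\left( \mathbf{I} - \mathbf{A}\right) }^{-1}$ for the nilpotent adjacency matrix of a DAG):
+
+ ```python
+ import numpy as np
+
+ def weighted_transitive_closure(A):
+     # A_bar = A + A^2 + ... + A^(d-1); A is nilpotent for a DAG, so the
+     # geometric series sums to (I - A)^(-1) - I
+     d = A.shape[0]
+     return np.linalg.inv(np.eye(d) - A) - np.eye(d)
+ ```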
40
+
41
+ Causal Fourier transform. Eq. (2) connects with the recent causal Fourier analysis framework of Seifert et al. (2022b). Consider the linear equation between the observed signal $\mathbf{s}$ and its causes $\mathbf{c}$
42
+
43
+ $$
44
+ \mathbf{s} = \left( {\mathbf{I} + {\overline{\mathbf{A}}}^{T}}\right) \mathbf{c}. \tag{3}
45
+ $$
46
+
47
+ Seifert et al. (2022b) argue that $\mathbf{c}$ can be interpreted as a form of spectrum of $\mathbf{s}$ . This is done by providing a suitable notion of shift and associated shift-equivariant convolution whose eigenvectors are the columns of $\mathbf{I} + {\overline{\mathbf{A}}}^{T}$ , following the algebraic theory of constructing Fourier analyses by Püschel & Moura (2006; 2008).
48
+
49
+ Fourier-sparse data. Combining Eq. (2) and (3) naturally leads to the idea of assuming a more general linear SEM model, which in addition to the random noise term contains an explicit term corresponding to the root causes. The equation of generating data $\mathbf{X} \in {\mathbb{R}}^{n \times d}$ becomes
50
+
51
+ $$
52
+ \mathbf{X} = \left( {\mathbf{C} + {\mathbf{N}}_{f}}\right) \left( {\mathbf{I} + \overline{\mathbf{A}}}\right) + {\mathbf{N}}_{s}. \tag{4}
53
+ $$
54
+
55
+ The matrix $\mathbf{C} \in {\mathbb{R}}^{n \times d}$ represents the root causes and ${\mathbf{N}}_{f},{\mathbf{N}}_{s} \in {\mathbb{R}}^{n \times d}$ the random noises for the frequency and signal domain, respectively. Approximate Fourier-sparsity or few root causes means that only a few coefficients in $\mathbf{C}$ are non-zero and the values of ${\mathbf{N}}_{f},{\mathbf{N}}_{s}$ have negligible magnitude.
56
+
57
+ Example. Consider the first $n$ Fibonacci numbers. The recurrence equation generating the sequence can be viewed as a linear SEM where each term depends on the two predecessors. Unrolling or solving this recurrence shows, equivalently, that all numbers only depend on the first two. These two are the root causes and yield the setting of Eq. (4) with $\mathbf{C}$ sparse, having only the first two values nonzero, and ${\mathbf{N}}_{f} = {\mathbf{N}}_{s} = \mathbf{0}$. For general linear SEMs this recurrence-solving yields Eq. (2) from Eq. (1). Doing so yields our novel setting of Eq. (4), which captures the situation where some nodes in the DAG insert spikes of values that then percolate through the DAG as determined by the edge weights, though possibly not exactly (as captured by ${\mathbf{N}}_{f}$ ) and not exactly measurable (as captured by ${\mathbf{N}}_{s}$ ).
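+
+ A small numeric check of this example (our own code; $d = 8$ ):
+
+ ```python
+ import numpy as np
+
+ d = 8
+ A = np.zeros((d, d))
+ A[0, 1] = 1.0                        # x_2 depends on x_1
+ for k in range(2, d):
+     A[k - 1, k] = A[k - 2, k] = 1.0  # x_k = x_{k-1} + x_{k-2}
+
+ C = np.zeros((1, d))
+ C[0, 0] = 1.0                        # a single root cause at the first node
+
+ X = C @ np.linalg.inv(np.eye(d) - A)  # X = C (I + A_bar)
+ print(X)  # [[ 1.  1.  2.  3.  5.  8. 13. 21.]]
+ ```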
58
+
59
+ § 3 OUR METHOD
60
+
61
+ Theoretical Guarantees. First we show that the novel setting based on the assumption of generation via few root causes is identifiable and then we define a discrete optimization problem that is guaranteed to find the true DAG under the assumption of having enough data.
62
+
63
+ Theorem 3.1. Assume data generated via the extended linear SEM Eq. (4). We assume that the spectra $\mathbf{C}$ are independent random variables taking uniform values from $\left\lbrack {0,1}\right\rbrack$ with probability $p$, and equal to 0 with probability $1 - p$. Then Eq. (4) translates into a linear SEM with non-Gaussian noise and thus $\mathbf{A}$ is identifiable due to (Shimizu et al., 2006).
64
+
65
+ Given the data $\mathbf{X}$ we propose the following optimization problem to retrieve the DAG structure:
66
+
67
+ $$
68
+ \mathop{\min }\limits_{{\mathbf{A} \in {\mathbb{R}}^{d \times d}}}\parallel \mathbf{X} - \mathbf{{XA}}{\parallel }_{0}\;\text{ s.t. }\;\mathbf{A}\text{ is acyclic. } \tag{5}
69
+ $$
70
+
71
+ Theorem 3.2. Consider a DAG with weighted adjacency matrix $\mathbf{A}$ . Suppose that $\widehat{\mathbf{A}}$ is the optimal solution of the optimization problem (5) where the number $n$ of data rows in $\mathbf{X}$ satisfies
72
+
73
+ $$
74
+ n \geq \frac{{2}^{{3d} - 2}d\left( {d - 1}\right) }{{\left( 1 - \delta \right) }^{2}{p}^{k}{\left( 1 - p\right) }^{d - k}} \tag{6}
75
+ $$
76
+
77
+ where $k = \lfloor {dp}\rfloor$ and
78
+
79
+ $$
80
+ \delta \geq \frac{1}{\sqrt{2}}\max \left\{ {\frac{1}{{p}^{k}{\left( 1 - p\right) }^{d - k}\left( \begin{matrix} d \\ k \end{matrix}\right) }\sqrt{\ln \left( \frac{1}{\epsilon }\right) },\left( \begin{matrix} d \\ k \end{matrix}\right) \sqrt{\ln \left( \frac{\left( \begin{matrix} d \\ k \end{matrix}\right) }{\epsilon }\right) }}\right\} . \tag{7}
81
+ $$
82
+
83
+ Then with probability ${\left( 1 - \epsilon \right) }^{2},\mathbf{A}$ is the global minimizer of (5), namely $\widehat{\mathbf{A}} = \mathbf{A}$ .
84
+
85
+
86
+
87
+ Figure 1: Evaluation of performance on the Datasets K562 and RPE1 (Replogle et al., 2022) based on CausalBench (Chevalley et al., 2022) framework. The Wasserstein distance metric (higher is better) is computed for our method in comparison to some implemented baselines.
88
+
89
+ MÖBIUS. Our method is formed as the continuous relaxation of the discrete optimization problem (5). We substitute the ${L}^{0}$ -norm in (5) with its convex approximation (Ramirez et al. 2013), the ${L}^{1}$ -norm. The acyclicity is then captured with the continuous constraint $h\left( \mathbf{A}\right) = \operatorname{tr}\left( {e}^{\mathbf{A} \odot \mathbf{A}}\right) - d$ from (Zheng et al.,2018). Finally, we use $R\left( \mathbf{A}\right) = \lambda \parallel \mathbf{A}{\parallel }_{1}$ as the sparsity regularizer for the adjacency matrix, and our final continuous optimization problem is formulated as
90
+
91
+ $$
92
+ \mathop{\min }\limits_{{\mathbf{A} \in {\mathbb{R}}^{d \times d}}}\frac{1}{2n}\parallel \mathbf{X} - \mathbf{{XA}}{\parallel }_{1} + \lambda \parallel \mathbf{A}{\parallel }_{1}\;\text{ s.t. }\;h\left( \mathbf{A}\right) = 0. \tag{8}
93
+ $$
94
+
95
+ We call this method MÖBIUS due to the fact that the Fourier transform of (3) in the causal setting coincides with the weighted Möbius transform for posets (Seifert et al., 2022b).
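+
+ A minimal PyTorch sketch of the objective in (8) (our own function names; in practice the constraint $h\left( \mathbf{A}\right) = 0$ is enforced with an augmented Lagrangian or penalty scheme, which we elide here):
+
+ ```python
+ import torch
+
+ def acyclicity(A):
+     # h(A) = tr(exp(A * A)) - d from (Zheng et al., 2018)
+     return torch.matrix_exp(A * A).trace() - A.shape[0]
+
+ def mobius_loss(X, A, lam=0.1):
+     # L1 reconstruction of the spectra plus L1 sparsity on A, cf. Eq. (8)
+     n = X.shape[0]
+     return torch.abs(X - X @ A).sum() / (2 * n) + lam * torch.abs(A).sum()
+ ```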
96
+
97
+ Handling interventions. The gene expression data provided by the CausalBench framework can contain interventions, either for all genes or for a fraction of them. An intervention assigns a value to a gene which is independent of the expression data of its predecessors. Mathematically, the linear SEM adopting the intervention scheme is formulated with the following equation
98
+
99
+ $$
100
+ \mathbf{X} = \mathbf{{XAM}} + \mathbf{N}. \tag{9}
101
+ $$
102
+
103
+ $\mathbf{M}$ is an intervention mask whose rows consist of ones, except at position $i$, where it has a 0 when the intervention acts on gene $i$. In that case the gene is initialized with noise according to Eq. (9), or more generally with some spectral value together with noise as captured by Eq. (4). Given that the positions of the interventions in the dataset are known, the optimization problem becomes
104
+
105
+ $$
106
+ \mathop{\min }\limits_{{\mathbf{A} \in {\mathbb{R}}^{d \times d}}}\frac{1}{2n}\parallel \mathbf{X} - \mathbf{{XAM}}{\parallel }_{1} + \lambda \parallel \mathbf{A}{\parallel }_{1}\;\text{ s.t. }\;h\left( \mathbf{A}\right) = 0. \tag{10}
107
+ $$
108
+
109
+ § 4 CONTEST EVALUATION
110
+
111
+ Our method appears to work competitively on synthetic data generated with a few root causes and also on the gene regulatory network dataset by Sachs et al. (2005), as shown in the appendix. In Fig. 1 we present our performance on the gene-gene interaction network benchmark provided by Chevalley et al. (2022). Our method seems to perform better than the implemented baselines and also exhibits an upward trend, which indicates that it benefits from interventions.
112
+
113
+ Implementation details. For our method, we construct a PyTorch model consisting of a linear layer which represents the weighted adjacency matrix $\mathbf{A}$. Then, given the data $\mathbf{X}$, processed in batches, and the interventional-positions masking matrix $\mathbf{M}$, we train our model with the Adam optimizer with learning rate ${10}^{-3}$ to minimize the loss defined by Eq. (10). The final adjacency matrix is thresholded at 0.035, which experimentally was shown to result in more than a thousand edges.
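+
+ A simplified sketch of this training setup (our own code; we assume here that $\mathbf{M}$ is given as a binary mask of the same shape as the batch and applied elementwise, and we fold the acyclicity constraint into a fixed penalty for brevity):
+
+ ```python
+ import torch
+
+ d = 622  # number of genes (dataset-dependent, illustrative value)
+ A = torch.nn.Parameter(torch.zeros(d, d))
+ opt = torch.optim.Adam([A], lr=1e-3)
+
+ def training_step(X, M, lam=0.1, rho=10.0):
+     # X: (batch, d) expression values; M: (batch, d), 0 where a gene was intervened on
+     pred = (X @ A) * M                     # masked reconstruction, cf. Eq. (10)
+     loss = torch.abs(X - pred).sum() / (2 * X.shape[0])
+     loss = loss + lam * torch.abs(A).sum()
+     loss = loss + rho * (torch.matrix_exp(A * A).trace() - d)  # acyclicity penalty
+     opt.zero_grad(); loss.backward(); opt.step()
+     return float(loss)
+
+ # after training, threshold: A_hat = (A.detach().abs() > 0.035)
+ ```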
114
+
115
+ § 5 CONCLUSION
116
+
117
+ We presented a new perspective on linear SEMs motivated by a recently proposed causal Fourier analysis for DAGs. Mathematically, this perspective translates (or solves) the recurrence describing the SEM into an invertible linear transformation that takes as input a chosen Fourier spectrum of values, thus called root causes, to produce the data as output. Prior data generation for linear SEMs assumed a dense, random spectrum. In this paper we adopted the novel scenario of data generated from few root causes to reconstruct the gene-gene interactome. Our assumption seems to perform well in this setting and possibly gives new insights for the data generation of gene expression data.
papers/GSK/GSK 2023/GSK 2023 CBC/Wf0QRYUkhwV/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,89 @@
1
+ # CATRAN: ULTRA-LIGHT NEURAL NETWORK FOR PREDICTING GENE-GENE INTERACTIONS FROM SINGLE-CELL DATA
2
+
3
+ Anonymous authors
4
+
5
+ Paper under double-blind review
6
+
7
+ ## Abstract
8
+
9
+ Part of the difficulty of learning a gene-regulatory network from expression data is related to the fact that edges in such a network represent different interactions with different effect sizes. Therefore modeling gene associations requires learning an individual function for each pair of interacting genes. This may greatly inflate the number of parameters in a model and lead to insufficient generalization. In this paper we propose a method for gene regulatory network inference, called CaTran (Causal Transformer), which avoids explicitly learning pairwise relations between genes, which allows it to significantly reduce the size of the model. The key feature of this approach is learning a low-dimensional embedding for each gene and then using a self-attention mechanism to estimate its relation to other genes. Our method is applicable to both observational data and data with interventions. For the latter it implements a differentiable gene importance test and forces attention values to be in accordance with it. Because the gene regulatory network in CaTran is learned as a soft adjacency matrix, it allows sampling graphs with an arbitrary number of edges based on a set threshold. Comparison of these graphs with the gene networks from databases showed that even for large graphs the edges are predicted with high precision.
10
+
11
+ ## 1 MODEL DESCRIPTION
12
+
13
+ Here we present our solution to the CausalBench challenge (Chevalley et al., 2022).
14
+
15
+ ### 1.1 DATA PREPROCESSING
16
+
17
+ Our experiments have shown that any additional preprocessing of the data at best yields no increase in model performance compared to running it on raw counts. We tried different methods of normalization, including the scanpy standard pipeline (Wolf et al., 2018) and the CLR transform (Stoeckius et al., 2017). The decrease in performance is likely associated with the spurious correlation patterns that arise in the data after normalization. We also tried imputing data using various techniques such as MAGIC (Dijk et al., 2018) and SVD-based imputation (replacing zero values with the inverse SVD transform). The ineffectiveness of these methods indicates the importance of zeros in the data as a biological signal for predicting gene regulatory networks (Jiang et al., 2022).
18
+
19
+ ### 1.2 TRAINING OBJECTIVE
20
+
21
+ CaTran is built upon the DCDI framework (Brouillard et al., 2020), simultaneously simplifying it and regularizing its behavior. From its predecessor, CaTran inherits the basic outline of the learning objective: CaTran does not directly optimize the inference of gene interactions but instead solves a gene expression prediction task, and in the end it uses some of the model parameters as a proxy for gene interaction scores. Unlike DCDI, however, CaTran does not encode these scores explicitly as a learnable adjacency matrix but computes them using a self-attention mechanism. Another key distinction of CaTran from DCDI is that instead of modeling the distribution of gene expression it directly predicts the expression of genes in a minibatch. We did this because the expression of genes in single-cell data does not follow any parametric probability distribution.
22
+
23
+ The model is trained using mini-batches which include a subset of cells and a subset of genes. The typical size of a mini-batch is 2048 cells and 500 genes. If the dataset contains fewer than 500 genes, the mini-batch includes all genes. Using large batches with more than 1000 genes resulted in decreased performance. In each mini-batch a randomly sampled fraction of genes is perturbed by shuffling values between the selected genes. Initially we tried zeroing out these genes, but the new strategy yielded better results. We also experimented with augmenting different fractions of the input and established empirically that hiding the expression of as much as ${80}\%$ of genes leads to better results; a sketch of this step is given below. Overall, this strategy is reminiscent of how masked language models such as BERT are trained (Devlin et al., 2019).
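+
+ A minimal sketch of this augmentation step (one plausible reading of the shuffling; the exact mechanics are our assumption):
+
+ ```python
+ import torch
+
+ def augment_batch(x: torch.Tensor, frac: float = 0.8):
+     # x: (cells, genes) expression mini-batch. Select ~frac of the genes and
+     # shuffle their values between the selected genes within each cell.
+     mask = torch.rand(x.shape[1]) < frac
+     cols = torch.nonzero(mask).flatten()
+     x_aug = x.clone()
+     x_aug[:, cols] = x[:, cols[torch.randperm(len(cols))]]
+     return x_aug, mask  # mask marks which genes the model must reconstruct
+ ```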
24
+
25
+ The objective of the neural network is to predict the true values of the genes with augmented expression. Its loss function consists of three terms, two of which correspond to this task. The model separately calculates the Huber loss (Gokcesu & Gokcesu, 2021) on the predicted expression of the augmented genes and of the genes with unaugmented expression. These two terms are combined with weights 0.7 and 0.3, respectively. The second term is added to ensure that the model does not forget the true expression values of the unaugmented genes. The choice of Huber loss rather than MSE is crucial to maintaining high performance because it reduces the effect of outliers.
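+
+ A sketch of this combination (tensor names are assumptions; `mask` marks the augmented genes as above):
+
+ ```python
+ import torch.nn.functional as F
+
+ def prediction_loss(pred, target, mask):
+     # Weighted Huber loss: 0.7 on the augmented genes, 0.3 on the rest, so
+     # the model reconstructs hidden values without forgetting visible ones.
+     loss_aug = F.huber_loss(pred[:, mask], target[:, mask])
+     loss_vis = F.huber_loss(pred[:, ~mask], target[:, ~mask])
+     return 0.7 * loss_aug + 0.3 * loss_vis
+ ```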
26
+
27
+ To optimize the given objective we used the Lion optimizer (Yazdani & Jolai, 2016). We compared it to AdamW (Loshchilov & Hutter, 2019) and found it preferable. By default we run it for 25 epochs with a low learning rate (0.001) and weight decay (0.05). The model weights are initialized with values from a normal distribution with zero mean and a standard deviation of 0.001. This initialization strategy was dictated by our use of SiLU as the activation function (Elfwing et al., 2017).
28
+
29
+ ### 1.3 CATRAN ARCHITECTURE
30
+
31
+ The guiding principle for the CaTran architecture (Figure 1A) was the idea that interactions between genes can be encoded in learnable gene embeddings. This helps to avoid learning these interactions explicitly. In contrast, the original DCDI approach learns the whole adjacency matrix, which is quadratic in the number of genes. Similarly, CellOracle trains a separate linear model for each network edge (Kamimoto et al., 2023). In our model we compress this information into low-dimensional embeddings. Manual search indicated that the optimal embedding size is 40, though this is a very robust hyperparameter and altering it does not affect the performance of the program dramatically. Using embeddings also allows us to reduce the number of genes used in a mini-batch.
32
+
33
+ CaTran next uses these embeddings to estimate interactions between genes using self-attention. We tried different implementations and in the end came up with the following structure. The embeddings are passed to a linear layer which uses the same weights to transform each embedding; then the matrix of dot products between these embeddings is computed. We apply softmax to this matrix along the dimension which conceptually represents the incoming edges in a gene regulatory network. The resulting scores approximate gene interaction scores. We tried binarizing this matrix based on a selected threshold, as proposed by DCDI, but it led to a drop in performance. This indicates that gene-gene connectivity on its own is not enough to model gene interactions.
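+
+ A minimal sketch of this scoring step (the dimensions and the softmax axis are assumptions based on the description above):
+
+ ```python
+ import torch
+
+ num_genes, emb_dim = 500, 40
+ embeddings = torch.randn(num_genes, emb_dim) * 0.001  # learnable in the model
+
+ proj = torch.nn.Linear(emb_dim, emb_dim)  # same weights applied to every gene
+ h = proj(embeddings)                      # (num_genes, emb_dim)
+ scores = h @ h.T                          # pairwise dot products
+ # Normalize along the axis that conceptually represents incoming edges.
+ attention = torch.softmax(scores, dim=0)
+ ```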
34
+
35
+ After the attention weights have been estimated, the model modifies the embeddings by adding the gene expression values to them. The result is passed through two linear layers with batch normalization layers between them but without activation; this empirically led to better results, which may be related to numerical instability, since the embeddings are initialized with very small values. The modified embeddings are then passed to an attention block (Figure 1B), which updates each gene embedding based on the embeddings of other genes using the precomputed attention weights. These are then passed to a batch normalization layer, a linear layer, another batch normalization layer, and finally a non-linear activation function. Finally, the output of the attention block is passed to two linear layers which produce the output.
36
+
37
+ ![01963a40-8ff3-73c4-a999-efca81c4047a_2_340_232_1114_691_0.jpg](images/01963a40-8ff3-73c4-a999-efca81c4047a_2_340_232_1114_691_0.jpg)
38
+
39
+ Figure 1: Schematic of CaTran.
40
+
41
+ (A) The basic outline of the model. (B) Schematic of the attention block.
42
+
43
+ ### 1.4 INTERVENTIONAL LOSS
44
+
45
+ Though CaTran is able to make accurate predictions in the observational regime, its true power is achieved when it is used with interventional data. To make use of the knowledge about perturbed genes, we introduce an interventional loss term into our model. Its purpose is to make the attention weights follow importance scores calculated from the analysis of associations between a perturbed gene and all other genes. The idea behind these scores is inspired by Wu et al. (2023). In essence, its premise is that if one gene is associated with another, then its expression should help the model make accurate predictions. So for each perturbed gene in a mini-batch we estimate the error of the gene expression predictions within cells where this gene is active and within cells where it was turned off; we then take the ratio of these two error estimates, subtract one, and transform with a sigmoid:
46
+
47
+ $$
48
+ \operatorname{Sigmoid}\left( \frac{\operatorname{Huber}\left( \text{non\_interv\_pred},\ \text{non\_interv\_true} \right)}{\operatorname{Huber}\left( \text{interv\_pred},\ \text{interv\_true} \right)} - 1 \right)
49
+ $$
50
+
51
+ We then penalize the attention coefficients with a Huber loss if they deviate from these importance scores; a sketch is given below.
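+
+ A sketch of this importance score and the penalty (function names and reductions are assumptions):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def importance_score(pred_active, true_active, pred_knockout, true_knockout):
+     # Ratio of prediction errors with the candidate parent active vs. knocked
+     # out, shifted by one and squashed with a sigmoid, as in the formula above.
+     err_active = F.huber_loss(pred_active, true_active)
+     err_knockout = F.huber_loss(pred_knockout, true_knockout)
+     return torch.sigmoid(err_active / err_knockout - 1)
+
+ def interventional_loss(attention_weights, scores):
+     # Penalize attention coefficients that deviate from the importance scores.
+     return F.huber_loss(attention_weights, scores)
+ ```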
52
+
53
+ ### 1.5 RETRIEVING AN ADJACENCY MATRIX
54
+
55
+ At the end of training CaTran has learned embeddings which can be used to predict associations between genes. To obtain an adjacency matrix, we calculate pairwise dot products between the embeddings and then transform them using softmax. Finally, CaTran ranks all edges in this soft adjacency matrix by their attention weight and sets the top 1000 edges to 1 and the rest to 0. One can also easily sample graphs with an arbitrary number of edges.
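+
+ A sketch of this retrieval step (K = 1000 as in the text; the embedding shapes are assumptions):
+
+ ```python
+ import torch
+
+ def retrieve_adjacency(embeddings: torch.Tensor, k: int = 1000):
+     # Soft adjacency: softmax over pairwise dot products of gene embeddings.
+     soft_adj = torch.softmax(embeddings @ embeddings.T, dim=0)
+     flat = soft_adj.flatten()
+     adj = torch.zeros_like(flat)
+     adj[flat.topk(k).indices] = 1.0  # keep the k highest-weighted edges
+     return adj.reshape(soft_adj.shape).int()
+ ```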
56
+
57
+ CaTran outputs directed graphs; however, unlike DCDI it allows cycles in the graph. This was done intentionally, since we observed that biological networks do not conform to acyclicity constraints.
58
+
59
+ ### 1.6 IMPLEMENTATION DETAILS
60
+
61
+ The model is implemented using PyTorch and PyTorch Lightning. As the title of this paper suggests, our model uses comparatively few learnable parameters. The total number of parameters can be estimated with the following formula: ${9000} + {40} \times$ number_of_genes.
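+
+ For concreteness, a reading of this estimate as code:
+
+ ```python
+ def num_parameters(number_of_genes: int) -> int:
+     # ~9000 fixed parameters plus one 40-dimensional embedding per gene.
+     return 9000 + 40 * number_of_genes
+ ```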
62
+
63
+ ## 2 CITATIONS
64
+
65
+ Brouillard, P., Lachapelle, S., Lacoste, A., Lacoste-Julien, S., & Drouin, A. (2020). Differentiable Causal Discovery from Interventional Data (arXiv:2007.01754). arXiv. https://doi.org/10.48550/arXiv.2007.01754
66
+
67
+ Chevalley, M., Roohani, Y., Mehrjou, A., Leskovec, J., & Schwab, P. (2022). Causal-Bench: A Large-scale Benchmark for Network Inference from Single-cell Perturbation Data (arXiv:2210.17283). arXiv. https://doi.org/10.48550/arXiv.2210.17283
68
+
69
+ Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (arXiv:1810.04805). arXiv. https://doi.org/10.48550/arXiv.1810.04805
70
+
71
+ Dijk, D. van, Sharma, R., Nainys, J., Yim, K., Kathail, P., Carr, A. J., Burdziak, C., Moon, K. R., Chaffer, C. L., Pattabiraman, D., Bierie, B., Mazutis, L., Wolf, G., Krishnaswamy, S., & Pe'er, D. (2018). Recovering Gene Interactions from Single-Cell Data Using Data Diffusion. Cell, 174(3), 716-729.e27. https://doi.org/10.1016/j.cell.2018.05.061
72
+
73
+ Elfwing, S., Uchibe, E., & Doya, K. (2017). Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning (arXiv:1702.03118). arXiv. https://doi.org/10.48550/arXiv.1702.03118
74
+
75
+ Gokcesu, K., & Gokcesu, H. (2021). Generalized Huber Loss for Robust Learning and its Efficient Minimization for a Robust Statistics (arXiv:2108.12627). arXiv. http://arxiv.org/abs/2108.12627
76
+
77
+ Jiang, R., Sun, T., Song, D., & Li, J. J. (2022). Statistics or biology: The zero-inflation controversy about scRNA-seq data. Genome Biology, 23(1), 31. https://doi.org/10.1186/s13059-022-02601-5
78
+
79
+ Kamimoto, K., Stringa, B., Hoffmann, C. M., Jindal, K., Solnica-Krezel, L., & Morris, S. A. (2023). Dissecting cell identity via network inference and in silico gene perturbation. Nature, 614(7949), Article 7949. https://doi.org/10.1038/s41586-022-05688-9
80
+
81
+ Loshchilov, I., & Hutter, F. (2019). Decoupled Weight Decay Regularization (arXiv:1711.05101). arXiv. https://doi.org/10.48550/arXiv.1711.05101
82
+
83
+ Stoeckius, M., Hafemeister, C., Stephenson, W., Houck-Loomis, B., Chattopadhyay, P. K., Swerdlow, H., Satija, R., & Smibert, P. (2017). Simultaneous epitope and transcriptome measurement in single cells. Nature Methods, 14(9), Article 9. https://doi.org/10.1038/nmeth.4380
84
+
85
+ Wolf, F. A., Angerer, P., & Theis, F. J. (2018). SCANPY: Large-scale single-cell gene expression data analysis. Genome Biology, 19(1), 15. https://doi.org/10.1186/s13059-017-1382-0
86
+
87
+ Wu, A. P., Markovich, T., Berger, B., Hammerla, N., & Singh, R. (2023). Causally-guided Regularization of Graph Attention Improves Generalizability (arXiv:2210.10946). arXiv. http://arxiv.org/abs/2210.10946
88
+
89
+ Yazdani, M., & Jolai, F. (2016). Lion Optimization Algorithm (LOA): A nature-inspired metaheuristic algorithm. Journal of Computational Design and Engineering, 3(1), 24-36. https://doi.org/10.1016/j.jcde.2015.06.003
papers/GSK/GSK 2023/GSK 2023 CBC/Wf0QRYUkhwV/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,61 @@
1
+ § CATRAN: ULTRA-LIGHT NEURAL NETWORK FOR PREDICTING GENE-GENE INTERACTIONS FROM SINGLE-CELL DATA
2
+
3
+ Anonymous authors
4
+
5
+ Paper under double-blind review
6
+
7
+ § ABSTRACT
8
+
9
+ Part of the difficulty of learning a gene-regulatory network from expression data is that edges in such a network represent different interactions with different effect sizes. Modeling gene associations therefore requires learning an individual function for each pair of interacting genes. This may greatly inflate the number of parameters in a model and lead to insufficient generalization. In this paper we propose a method for gene regulatory network inference, called CaTran (Causal Transformer), which avoids explicitly learning pairwise relations between genes, allowing it to significantly reduce the size of the model. The key feature of this approach is learning a low-dimensional embedding for each gene and then using a self-attention mechanism to estimate its relation to other genes. Our method is applicable to both observational data and data with interventions. For the latter it implements a differentiable gene importance test and forces attention values to be in accordance with it. Because the gene regulatory network in CaTran is learned as a soft adjacency matrix, it allows sampling graphs with an arbitrary number of edges based on a set threshold. Comparison of these graphs with gene networks from databases showed that even for large graphs the edges are predicted with high precision.
10
+
11
+ § 1 MODEL DESCRIPTION
12
+
13
+ Here we present our solution to the CausalBench challenge (Chevalley et al., 2022).
14
+
15
+ § 1.1 DATA PREPROCESSING
16
+
17
+ Our experiments have shown that any additional preprocessing of the data at best yields no increase in model performance compared to running it on raw counts. We tried different methods of normalization, including the scanpy standard pipeline (Wolf et al., 2018) and the CLR transform (Stoeckius et al., 2017). The decrease in performance is likely associated with the spurious correlation patterns that arise in the data after normalization. We also tried imputing data using various techniques such as MAGIC (Dijk et al., 2018) and SVD-based imputation (replacing zero values with the inverse SVD transform). The ineffectiveness of these methods indicates the importance of zeros in the data as a biological signal for predicting gene regulatory networks (Jiang et al., 2022).
18
+
19
+ § 1.2 TRAINING OBJECTIVE
20
+
21
+ CaTran is built upon the DCDI framework (Brouillard et al., 2020), simultaneously simplifying it and regularizing its behavior. From its predecessor, CaTran inherits the basic outline of the learning objective: CaTran does not directly optimize the inference of gene interactions but instead solves a gene expression prediction task, and in the end it uses some of the model parameters as a proxy for gene interaction scores. Unlike DCDI, however, CaTran does not encode these scores explicitly as a learnable adjacency matrix but computes them using a self-attention mechanism. Another key distinction of CaTran from DCDI is that instead of modeling the distribution of gene expression it directly predicts the expression of genes in a minibatch. We did this because the expression of genes in single-cell data does not follow any parametric probability distribution.
22
+
23
+ The model is trained using mini-batches which include a subset of cells and a subset of genes. The typical size of a mini-batch is 2048 cells and 500 genes. If the dataset contains fewer than 500 genes, the mini-batch includes all genes. Using large batches with more than 1000 genes resulted in decreased performance. In each mini-batch a randomly sampled fraction of genes is perturbed by shuffling values between the selected genes. Initially we tried zeroing out these genes, but the new strategy yielded better results. We also experimented with augmenting different fractions of the input and established empirically that hiding the expression of as much as ${80}\%$ of genes leads to better results. Overall, this strategy is reminiscent of how masked language models such as BERT are trained (Devlin et al., 2019).
24
+
25
+ The objective of the neural network is to predict the true values of the genes with augmented expression. Its loss function consists of three terms, two of which correspond to this task. The model separately calculates the Huber loss (Gokcesu & Gokcesu, 2021) on the predicted expression of the augmented genes and of the genes with unaugmented expression. These two terms are combined with weights 0.7 and 0.3, respectively. The second term is added to ensure that the model does not forget the true expression values of the unaugmented genes. The choice of Huber loss rather than MSE is crucial to maintaining high performance because it reduces the effect of outliers.
26
+
27
+ To optimize the given objective we used the Lion optimizer (Yazdani & Jolai, 2016). We compared it to AdamW (Loshchilov & Hutter, 2019) and found it preferable. By default we run it for 25 epochs with a low learning rate (0.001) and weight decay (0.05). The model weights are initialized with values from a normal distribution with zero mean and a standard deviation of 0.001. This initialization strategy was dictated by our use of SiLU as the activation function (Elfwing et al., 2017).
28
+
29
+ § 1.3 CATRAN ARCHITECTURE
30
+
31
+ The guiding principle for the CaTran architecture (Figure 1A) was the idea that interactions between genes can be encoded in learnable gene embeddings. This helps to avoid learning these interactions explicitly. In contrast, the original DCDI approach learns the whole adjacency matrix, which is quadratic in the number of genes. Similarly, CellOracle trains a separate linear model for each network edge (Kamimoto et al., 2023). In our model we compress this information into low-dimensional embeddings. Manual search indicated that the optimal embedding size is 40, though this is a very robust hyperparameter and altering it does not affect the performance of the program dramatically. Using embeddings also allows us to reduce the number of genes used in a mini-batch.
32
+
33
+ CaTran next uses these embeddings to estimate interactions between genes using self-attention. We tried different implementations and in the end came up with the following structure. The embeddings are passed to a linear layer which uses the same weights to transform each embedding; then the matrix of dot products between these embeddings is computed. We apply softmax to this matrix along the dimension which conceptually represents the incoming edges in a gene regulatory network. The resulting scores approximate gene interaction scores. We tried binarizing this matrix based on a selected threshold, as proposed by DCDI, but it led to a drop in performance. This indicates that gene-gene connectivity on its own is not enough to model gene interactions.
34
+
35
+ After the attention weights have been estimated, the model modifies the embeddings by adding the gene expression values to them. The result is passed through two linear layers with batch normalization layers between them but without activation; this empirically led to better results, which may be related to numerical instability, since the embeddings are initialized with very small values. The modified embeddings are then passed to an attention block (Figure 1B), which updates each gene embedding based on the embeddings of other genes using the precomputed attention weights. These are then passed to a batch normalization layer, a linear layer, another batch normalization layer, and finally a non-linear activation function. Finally, the output of the attention block is passed to two linear layers which produce the output.
36
+
37
+ [figure]
38
+
39
+ Figure 1: Schematic of CaTran.
40
+
41
+ (A) The basic outline of the model. (B) Schematic of the attention block.
42
+
43
+ § 1.4 INTERVENTIONAL LOSS
44
+
45
+ Though CaTran is able to make accurate predictions in the observational regime, its true power is achieved when it is used with interventional data. To make use of the knowledge about perturbed genes, we introduce an interventional loss term into our model. Its purpose is to make the attention weights follow importance scores calculated from the analysis of associations between a perturbed gene and all other genes. The idea behind these scores is inspired by Wu et al. (2023). In essence, its premise is that if one gene is associated with another, then its expression should help the model make accurate predictions. So for each perturbed gene in a mini-batch we estimate the error of the gene expression predictions within cells where this gene is active and within cells where it was turned off; we then take the ratio of these two error estimates, subtract one, and transform with a sigmoid:
46
+
47
+ $$
48
+ \operatorname{Sigmoid}\left( \frac{\operatorname{Huber}\left( \text{non\_interv\_pred},\ \text{non\_interv\_true} \right)}{\operatorname{Huber}\left( \text{interv\_pred},\ \text{interv\_true} \right)} - 1 \right)
49
+ $$
50
+
51
+ We then penalize the attention coefficients with a Huber loss if they deviate from these importance scores.
52
+
53
+ § 1.5 RETRIEVING AN ADJACENCY MATRIX
54
+
55
+ At the end of training CaTran has learned embeddings which can be used to predict associations between genes. To obtain an adjacency matrix, we calculate pairwise dot products between the embeddings and then transform them using softmax. Finally, CaTran ranks all edges in this soft adjacency matrix by their attention weight and sets the top 1000 edges to 1 and the rest to 0. One can also easily sample graphs with an arbitrary number of edges.
56
+
57
+ CaTran outputs directed graphs; however, unlike DCDI it allows cycles in the graph. This was done intentionally, since we observed that biological networks do not conform to acyclicity constraints.
58
+
59
+ § 1.6 IMPLEMENTATION DETAILS
60
+
61
+ The model is implemented using PyTorch and PyTorch Lightning. As the title of this paper suggests, our model uses comparatively few learnable parameters. The total number of parameters can be estimated with the following formula: ${9000} + {40} \times$ number_of_genes.
papers/GSK/GSK 2023/GSK 2023 CBC/gpDOOAOmMe/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,95 @@
1
+ # GSK.AI CAUSALBENCH CHALLENGE (ICLR 2023) REPORT SUBMISSION – BETTERBOOST
2
+
3
+ Anonymous authors
4
+
5
+ Paper under double-blind review
6
+
7
+ ## Abstract
8
+
9
+ The introduction of large-scale, genome-wide, single-cell perturbation datasets provides the chance to learn a full gene regulatory network in the relevant cell line. However, existing gene regulatory network inference methods either fail to scale or do not explicitly leverage the interventional nature of this data. In this work, we propose an algorithm that builds upon GRNBoost by adding an additional step that complements its performance in the presence of labeled, single-gene interventional data. Our method, BetterBoost, significantly outperforms the baseline on provided single-cell perturbation datasets when non-zero fractions of labeled interventions are available, demonstrating the efficacy of our approach for inferring gene regulatory networks from large-scale single-cell perturbation datasets.
10
+
11
+ ## 1 INTRODUCTION
12
+
13
+ The introduction of large-scale, genome-wide, single-cell perturbation datasets (Replogle et al., 2022; Dixit et al., 2016) provides the chance to learn a full gene regulatory network. However, existing methods for gene regulatory network inference either fail to scale (Brouillard et al., 2020; Sethuraman et al., 2023) or do not explicitly leverage the interventional nature of this data (Moerman et al., 2019; Passemiers et al., 2022). Methods that fail to scale often have algorithmic complexity issues, such as those that arise when computing the exponential of a large matrix. Conversely, some methods that can handle over 10,000 genes (Moerman et al., 2019; Passemiers et al., 2022) treat the dataset as observational, overlooking the valuable interventional data. Although incorporating interventional data can improve the predictive power of models that treat the dataset as observational, such models fail to exploit causal inference principles that can help identify causal relationships. For example, recent works (Peters et al., 2015; Arjovsky et al., 2019) have leveraged the invariance of the conditional distribution $P\left( {\mathbf{x} \mid \operatorname{Pa}\left( \mathbf{x}\right) }\right)$ for causal discovery. Here, $\operatorname{Pa}\left( \mathbf{x}\right)$ denotes the set of direct causal parents of $\mathbf{x}$ .
14
+
15
+ Among the scalable models that do not incorporate interventional data, we found that GRNBoost (Moerman et al., 2019) performed the best. GRNBoost defines the target gene's parents as the target’s most predictive genes using a prediction importance score ${G}_{i, j}$ from gene $i$ to gene $j$ . We adapted the GRNBoost score ${G}_{i, j}$ into a score ${B}_{i, j}$ in our proposed method, BetterBoost, which leverages interventional data in complement to observational data. The score ${B}_{i, j}$ reduces to ${G}_{i, j}$ when only observational data is available and improves as more interventional data becomes available.
16
+
17
+ BetterBoost assumes that if the dataset was generated by a causal model, the observed data's joint distribution can be factorized as:
18
+
19
+ $$
20
+ p\left( {{\mathbf{x}}_{1}\ldots {\mathbf{x}}_{G}}\right) = \mathop{\prod }\limits_{{i = 1}}^{G}p\left( {{\mathbf{x}}_{i} \mid \operatorname{Pa}\left( {\mathbf{x}}_{i}\right) }\right) . \tag{1}
21
+ $$
22
+
23
+ If a candidate gene is a parent of the target, it will be a good predictor for the target, as GRNBoost assumes. But with labeled, interventional data, one can attempt to identify the true causal parents of a given observed variable ${\mathbf{x}}_{i}$ by looking at the effects of interventions on the candidate parents of ${\mathbf{x}}_{i}$ . In particular, in a sample where a candidate parent gene is knocked down, the perturbed gene will only remain a good predictor for the target gene if it is a true causal parent of the target. Hence, if knocking down a candidate gene leads to a statistically significant change in the distribution of the target gene, it indicates strong evidence of a causal relationship directed from the candidate parent to the target gene. We leverage the impact of knocking down candidate genes in the prediction importance score of BetterBoost.
26
+
27
+ We find that BetterBoost performs significantly better than the leading methods GRNBoost (Moerman et al., 2019) and DCDI (Brouillard et al., 2020) on the provided sample data according to the challenge metric, average Wasserstein distance. Below, we detail the proposed method and go over the preliminary results of BetterBoost and relevant baselines on sample datasets.
28
+
29
+ ## 2 METHODS
30
+
31
+ In this section, we restate the objective of the challenge and detail the algorithm, BetterBoost.
32
+
33
+ ### 2.1 OBJECTIVE
34
+
35
+ The considered single-cell perturbational datasets each consist of a matrix of UMI counts per cell, $\mathbf{X} \in {\mathbb{Z}}_{+}^{N \times G}$ , and associated interventional labels, $\mathbf{s} \in \{$ unperturbed, unlabeled, $1,\ldots , G{\} }^{N}$ , one per cell. Note that each intervention affects at most one gene, which can be achieved via high-precision CRISPRi technology (Larson et al., 2013). We denote by $\rho$ the fraction of genes $g \in \left\lbrack G\right\rbrack$ with labeled interventional data.
36
+
37
+ Since ground truth causal network data does not exist for these datasets, a proposed causal graph is evaluated by the average Wasserstein distance, which is defined as follows: for each edge in the inferred causal graph $\left( {i, j}\right) \in \widehat{\mathcal{G}}$ , the Wasserstein distance is computed between the distribution of ${X}_{j}$ in the unperturbed data and in the subset of data where ${X}_{i}$ is perturbed. Therefore, the average Wasserstein distance can be written as:
38
+
39
+ $$
40
+ d\left( \widehat{\mathcal{G}}\right) \mathrel{\text{:=}} \frac{1}{\left| \widehat{\mathcal{G}}\right| }\mathop{\sum }\limits_{{\left( {i, j}\right) \in \widehat{\mathcal{G}}}}{W}_{1}\left( {p\left( {{\mathbf{x}}_{j} \mid \mathbf{s} = \text{ unperturbed }}\right) , p\left( {{\mathbf{x}}_{j} \mid \mathbf{s} = i}\right) }\right) \tag{2}
41
+ $$
42
+
43
+ where ${W}_{1}$ denotes the first Wasserstein distance between two distributions.
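+
+ A sketch of this metric (the array conventions are assumptions: `labels[n]` is either the string "unperturbed" or the index of the perturbed gene in cell `n`):
+
+ ```python
+ import numpy as np
+ from scipy.stats import wasserstein_distance
+
+ def avg_wasserstein(edges, X, labels):
+     # edges: iterable of (i, j) pairs; X: (cells, genes) UMI count matrix.
+     labels = np.asarray(labels, dtype=object)
+     obs = X[labels == "unperturbed"]
+     dists = [wasserstein_distance(obs[:, j], X[labels == i][:, j])
+              for (i, j) in edges]
+     return float(np.mean(dists))
+ ```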
44
+
45
+ The space of valid causal graphs $\widehat{\mathcal{G}}$ is constrained to $\{ \widehat{\mathcal{G}} : \left| \widehat{\mathcal{G}}\right| \geq {1000}\}$ , but graphs can otherwise include cycles and disconnected components.
46
+
47
+ ### 2.2 ALGORITHM
48
+
49
+ We found GRNBoost to work best in the observational case, i.e., with no labeled interventional data, but to fail to improve on this metric after adding strictly more information in the form of intervention labels. Thus, we developed a simple procedure for leveraging any available intervention labels. As previously mentioned, we assume that the true causal graph $\mathcal{G}$ is a directed acyclic graph (DAG), and therefore the joint distribution factorizes as in Equation 1. To identify whether gene $j \in \left\lbrack G\right\rbrack$ is a strong candidate parent gene for a given target gene $i \in \left\lbrack G\right\rbrack$ , we check whether $j$ is predictive of the target gene $i$ in the dataset formed by the observational data and the interventional data on gene $j$ . For a true causal parent, we expect that when $j$ is knocked down, there will be a statistically significant shift in the distribution of observed UMIs of gene $i$ between observational and interventional data. Since we had no priors on the nature of the causal effects, we chose the Kolmogorov-Smirnov (KS) test (Massey, 1951) to test these distributional shifts between observational and interventional data. Additionally, we used the Benjamini-Hochberg procedure to correct the p-values for multiple testing (Benjamini & Hochberg, 1995).
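+
+ A sketch of this testing step (array names are assumptions):
+
+ ```python
+ import numpy as np
+ from scipy.stats import ks_2samp
+ from statsmodels.stats.multitest import multipletests
+
+ def corrected_ks_pvalues(obs, knockdowns):
+     # obs: (cells, genes) unperturbed counts; knockdowns: {gene i -> counts
+     # of cells where gene i was knocked down}. Returns one corrected p-value
+     # per tested (knocked-down gene, target gene) pair.
+     pairs, raw = [], []
+     for i, kd in knockdowns.items():
+         for j in range(obs.shape[1]):
+             pairs.append((i, j))
+             raw.append(ks_2samp(obs[:, j], kd[:, j]).pvalue)
+     # Benjamini-Hochberg correction across all tests.
+     _, corrected, _, _ = multipletests(raw, method="fdr_bh")
+     return dict(zip(pairs, corrected))
+ ```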
50
+
51
+ To formulate the new score used by BetterBoost to rank the impact of gene $i$ on gene $j$ , we write ${G}_{i, j}$ for the predictive score of gene $i$ on gene $j$ computed by GRNBoost, and ${p}_{i, j}$ for the Benjamini-Hochberg-corrected KS-test p-value of the impact of knocking down gene $i$ on gene $j$ . If no interventional data was available for $i$ , we set all p-values ${p}_{i, * }$ to 0.05, so as to neither strongly accept nor reject hypotheses for these interactions. We then define the score ${B}_{i, j} = \left( {-{p}_{i, j},{G}_{i, j}}\right)$ , which we sort from larger to smaller (in lexicographic order).
52
+
53
+ For some desired number of edges $K$ , BetterBoost returns the ${K}_{B} \mathrel{\text{:=}} \min \left( {K,\left| \left\{ {\left( {i, j}\right) : {B}_{i, j}\lbrack 0\rbrack \geq - {0.05}}\right\} \right| }\right)$ candidate edges with the largest ${B}_{i, j}$ scores, i.e., the edges with acceptable p-values. The ${K}_{B}$ candidate edges will have the smallest KS-test p-values up to 0.05 , which can include gene pairs for which no interventional data and hence no p-value was available. Since the p-values of these gene pairs were set to 0.05 , in practice this ranking favors the edges of pairs with small p-values (obtained from combined interventional and observational data), followed by the edges with the highest GRNBoost scores ${G}_{i, j}$ (from observational data only). Typically, this results in more of the final edges being chosen by p-value than by GRNBoost score as more labeled interventional data becomes available. A sketch of this selection is given below.
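+
+ A sketch of this selection (names are assumptions; `ks_pvals[(i, j)]` defaults to 0.05 for genes without interventional data):
+
+ ```python
+ def select_edges(ks_pvals, grn_scores, k=1000):
+     # B_{i,j} = (-p_{i,j}, G_{i,j}); sort lexicographically from larger to
+     # smaller, i.e., smallest p-value first, GRNBoost score breaking ties.
+     candidates = [e for e, p in ks_pvals.items() if p <= 0.05]
+     candidates.sort(key=lambda e: (-ks_pvals[e], grn_scores[e]), reverse=True)
+     return candidates[:k]
+ ```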
54
+
55
+ Table 1: Average Wasserstein Distance of Methods on RPE1 Perturb-seq dataset
56
+
57
+ <table><tr><td>$\mathbf{{Method}}$</td><td>$\rho = 0$</td><td>$\rho = {0.25}$</td><td>$\rho = {0.5}$</td><td>$\rho = {0.75}$</td><td>$\rho = {1.0}$</td></tr><tr><td>DCDI</td><td>0.126</td><td>0.126</td><td>0.127</td><td>0.125</td><td>0.130</td></tr><tr><td>GRNBoost</td><td>0.115</td><td>0.106</td><td>0.106</td><td>0.106</td><td>0.106</td></tr><tr><td>GRNBoost-1000</td><td>0.151</td><td>0.147</td><td>0.146</td><td>0.146</td><td>0.145</td></tr><tr><td>BetterBoost</td><td>0.151</td><td>0.398</td><td>0.531</td><td>0.599</td><td>0.636</td></tr></table>
58
+
59
+ ## 3 RESULTS
60
+
61
+ We compared BetterBoost to the two suggested baseline methods, GRNBoost and DCDI, on the RPE1 perturbational data from (Replogle et al., 2022). The methods were evaluated with varying fractions of available labeled interventional data, ranging from 0 to 1.0 . To comply with the challenge requirements, we chose to return $K = {1000}$ edges for the challenge. By default, GRNBoost returns all edges with non-zero importance, so we additionally tested a variant of GRNBoost that only returns the 1000 top-importance edges.
62
+
63
+ We found that for every fraction of labeled interventional data $\rho$ considered, BetterBoost improved significantly on the average Wasserstein metric. Additionally, we found that the metric increased monotonically with $\rho$ , as shown in Table 1.
64
+
65
+ Remark: We have not tuned DCDI; the reported results are from running the provided baseline.
66
+
67
+ ## 4 DISCUSSION
68
+
69
+ Our proposed method, BetterBoost, utilizes labeled interventional data to identify the true causal parents of a given observed variable by looking at the effects of interventions on candidate parents. BetterBoost significantly outperforms leading methods GRNBoost and DCDI on provided sample data according to the challenge metric, average Wasserstein distance. In conclusion, our results suggest that BetterBoost is a promising gene regulatory network inference method.
70
+
71
+ BetterBoost can be extended for future work to consider the invariance property of causal relationships mentioned previously. Currently, if a chain of strong causal effects exists, ${\mathbf{x}}_{i} \rightarrow {\mathbf{x}}_{j} \rightarrow {\mathbf{x}}_{k}$ , BetterBoost will likely assign an edge from ${\mathbf{x}}_{i} \rightarrow {\mathbf{x}}_{k}$ . However, if the interventional data on ${\mathbf{x}}_{j}$ is present and labeled, one can identify that an edge does not exist between ${\mathbf{x}}_{i}$ and ${\mathbf{x}}_{k}$ . This scenario also exposes a shortcoming of the average Wasserstein metric, which would not penalize the presence of such an edge in the inferred graph.
72
+
73
+ ## REFERENCES
74
+
75
+ Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. July 2019.
76
+
77
+ Yoav Benjamini and Yosef Hochberg. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B (Methodological), 57(1):289-300, 1995. doi: 10.2307/2346101.
78
+
79
+ Philippe Brouillard, Sébastien Lachapelle, Alexandre Lacoste, Simon Lacoste-Julien, and Alexandre Drouin. Differentiable causal discovery from interventional data. July 2020.
80
+
81
+ Atray Dixit, Oren Parnas, Biyu Li, Jenny Chen, Charles P Fulco, Livnat Jerby-Arnon, Nemanja D Marjanovic, Danielle Dionne, Tyler Burks, Raktima Raychowdhury, Britt Adamson, Thomas M Norman, Eric S Lander, Jonathan S Weissman, Nir Friedman, and Aviv Regev. Perturb-Seq: Dissecting molecular circuits with scalable Single-Cell RNA profiling of pooled genetic screens. Cell, 167(7):1853-1866.e17, December 2016.
82
+
83
+ Matthew H Larson, Luke A Gilbert, Xiaowo Wang, Wendell A Lim, Jonathan S Weissman, and Lei S Qi. CRISPR interference (CRISPRi) for sequence-specific control of gene expression. Nat. Protoc., 8(11):2180-2196, November 2013.
84
+
85
+ F. J. Massey. The Kolmogorov-Smirnov test for goodness of fit. Journal of the American Statistical Association, 46(253):68-78, 1951.
86
+
87
+ Thomas Moerman, Sara Aibar Santos, Carmen Bravo González-Blas, Jaak Simm, Yves Moreau, Jan Aerts, and Stein Aerts. Grnboost2 and arboreto: efficient and scalable inference of gene regulatory networks. Bioinformatics, 35(12):2159-2161, 2019.
88
+
89
+ Antoine Passemiers, Yves Moreau, and Daniele Raimondi. Fast and accurate inference of gene regulatory networks through robust precision matrix estimation. Bioinformatics, 38(10):2802- 2809, May 2022.
90
+
91
+ Jonas Peters, Peter Bühlmann, and Nicolai Meinshausen. Causal inference using invariant prediction: identification and confidence intervals. January 2015.
92
+
93
+ Joseph M Replogle, Reuben A Saunders, Angela N Pogson, Jeffrey A Hussmann, Alexander Lenail, Alina Guna, Lauren Mascibroda, Eric J Wagner, Karen Adelman, Gila Lithwick-Yanai, Nika Iremadze, Florian Oberstrass, Doron Lipson, Jessica L Bonnar, Marco Jost, Thomas M Norman, and Jonathan S Weissman. Mapping information-rich genotype-phenotype landscapes with genome-scale perturb-seq. May 2022.
94
+
95
+ Muralikrishnna G Sethuraman, Romain Lopez, Rahul Mohan, Faramarz Fekri, Tommaso Biancalani, and Jan-Christian Hütter. NODAGS-Flow: Nonlinear cyclic causal structure learning. January 2023.
papers/GSK/GSK 2023/GSK 2023 CBC/gpDOOAOmMe/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,87 @@
1
+ § GSK.AI CAUSALBENCH CHALLENGE (ICLR 2023) REPORT SUBMISSION – BETTERBOOST
2
+
3
+ Anonymous authors
4
+
5
+ Paper under double-blind review
6
+
7
+ § ABSTRACT
8
+
9
+ The introduction of large-scale, genome-wide, single-cell perturbation datasets provides the chance to learn a full gene regulatory network in the relevant cell line. However, existing gene regulatory network inference methods either fail to scale or do not explicitly leverage the interventional nature of this data. In this work, we propose an algorithm that builds upon GRNBoost by adding an additional step that complements its performance in the presence of labeled, single-gene interventional data. Our method, BetterBoost, significantly outperforms the baseline on provided single-cell perturbation datasets when non-zero fractions of labeled interventions are available, demonstrating the efficacy of our approach for inferring gene regulatory networks from large-scale single-cell perturbation datasets.
10
+
11
+ § 1 INTRODUCTION
12
+
13
+ The introduction of large-scale, genome-wide, single-cell perturbation datasets (Replogle et al., 2022; Dixit et al., 2016) provides the chance to learn a full gene regulatory network. However, existing methods for gene regulatory network inference either fail to scale (Brouillard et al., 2020; Sethuraman et al., 2023) or do not explicitly leverage the interventional nature of this data (Moerman et al., 2019; Passemiers et al., 2022). Methods that fail to scale often have algorithmic complexity issues, such as those that arise when computing the exponential of a large matrix. Conversely, some methods that can handle over 10,000 genes (Moerman et al., 2019; Passemiers et al., 2022) treat the dataset as observational, overlooking the valuable interventional data. Although incorporating interventional data can improve the predictive power of models that treat the dataset as observational, such models fail to exploit causal inference principles that can help identify causal relationships. For example, recent works (Peters et al., 2015; Arjovsky et al., 2019) have leveraged the invariance of the conditional distribution $P\left( {\mathbf{x} \mid \operatorname{Pa}\left( \mathbf{x}\right) }\right)$ for causal discovery. Here, $\operatorname{Pa}\left( \mathbf{x}\right)$ denotes the set of direct causal parents of $\mathbf{x}$ .
14
+
15
+ Among the scalable models that do not incorporate interventional data, we found that GRNBoost (Moerman et al., 2019) performed the best. GRNBoost defines the target gene's parents as the target’s most predictive genes using a prediction importance score ${G}_{i,j}$ from gene $i$ to gene $j$ . We adapted the GRNBoost score ${G}_{i,j}$ into a score ${B}_{i,j}$ in our proposed method, BetterBoost, which leverages interventional data in complement to observational data. The score ${B}_{i,j}$ reduces to ${G}_{i,j}$ when only observational data is available and improves as more interventional data becomes available.
16
+
17
+ BetterBoost assumes that if the dataset was generated by a causal model, the observed data's joint distribution can be factorized as:
18
+
19
+ $$
20
+ p\left( {{\mathbf{x}}_{1}\ldots {\mathbf{x}}_{G}}\right) = \mathop{\prod }\limits_{{i = 1}}^{G}p\left( {{\mathbf{x}}_{i} \mid \operatorname{Pa}\left( {\mathbf{x}}_{i}\right) }\right) . \tag{1}
21
+ $$
22
+
23
+ If a candidate gene is a parent of the target, it will be a good predictor for the target, as GRNBoost assumes. But with labeled, interventional data, one can attempt to identify the true causal parents of a given observed variable ${\mathbf{x}}_{i}$ by looking at the effects of interventions on the candidate parents of ${\mathbf{x}}_{i}$ . In particular, in a sample where a candidate parent gene is knocked down, the perturbed gene will only remain a good predictor for the target gene if it is a true causal parent of the target. Hence, if knocking down a candidate gene leads to a statistically significant change in the distribution of the target gene, it indicates strong evidence of a causal relationship directed from the candidate parent to the target gene. We leverage the impact of knocking down candidate genes in the prediction importance score of BetterBoost.
26
+
27
+ We find that BetterBoost performs significantly better than the leading methods GRNBoost (Moerman et al., 2019) and DCDI (Brouillard et al., 2020) on the provided sample data according to the challenge metric, average Wasserstein distance. Below, we detail the proposed method and go over the preliminary results of BetterBoost and relevant baselines on sample datasets.
28
+
29
+ § 2 METHODS
30
+
31
+ In this section, we restate the objective of the challenge and detail the algorithm, BetterBoost.
32
+
33
+ § 2.1 OBJECTIVE
34
+
35
+ The considered single-cell perturbational datasets each consist of a matrix of UMI counts per cell, $\mathbf{X} \in {\mathbb{Z}}_{+}^{N \times G}$ , and associated interventional labels, $\mathbf{s} \in \{$ unperturbed, unlabeled, $1,\ldots ,G{\} }^{N}$ , one per cell. Note that each intervention affects at most one gene, which can be achieved via high-precision CRISPRi technology (Larson et al., 2013). We denote by $\rho$ the fraction of genes $g \in \left\lbrack G\right\rbrack$ with labeled interventional data.
36
+
37
+ Since ground truth causal network data does not exist for these datasets, a proposed causal graph is evaluated by the average Wasserstein distance, which is defined as follows: for each edge in the inferred causal graph $\left( {i,j}\right) \in \widehat{\mathcal{G}}$ , the Wasserstein distance is computed between the distribution of ${X}_{j}$ in the unperturbed data and in the subset of data where ${X}_{i}$ is perturbed. Therefore, the average Wasserstein distance can be written as:
38
+
39
+ $$
40
+ d\left( \widehat{\mathcal{G}}\right) \mathrel{\text{ := }} \frac{1}{\left| \widehat{\mathcal{G}}\right| }\mathop{\sum }\limits_{{\left( {i,j}\right) \in \widehat{\mathcal{G}}}}{W}_{1}\left( {p\left( {{\mathbf{x}}_{j} \mid \mathbf{s} = \text{ unperturbed }}\right) ,p\left( {{\mathbf{x}}_{j} \mid \mathbf{s} = i}\right) }\right) \tag{2}
41
+ $$
42
+
43
+ where ${W}_{1}$ denotes the first Wasserstein distance between two distributions.
44
+
45
+ The space of valid causal graphs $\widehat{\mathcal{G}}$ is constrained to $\{ \widehat{\mathcal{G}} : \left| \widehat{\mathcal{G}}\right| \geq {1000}\}$ , but graphs can otherwise include cycles and disconnected components.
46
+
47
+ § 2.2 ALGORITHM
48
+
49
+ We found GRNBoost to work best in the observational case, i.e., with no labeled interventional data, but to fail to improve on this metric after adding strictly more information in the form of intervention labels. Thus, we developed a simple procedure for leveraging any available intervention labels. As previously mentioned, we assume that the true causal graph $\mathcal{G}$ is a directed acyclic graph (DAG), and therefore the joint distribution factorizes as in Equation 1. To identify whether gene $j \in \left\lbrack G\right\rbrack$ is a strong candidate parent gene for a given target gene $i \in \left\lbrack G\right\rbrack$ , we check whether $j$ is predictive of the target gene $i$ in the dataset formed by the observational data and the interventional data on gene $j$ . For a true causal parent, we expect that when $j$ is knocked down, there will be a statistically significant shift in the distribution of observed UMIs of gene $i$ between observational and interventional data. Since we had no priors on the nature of the causal effects, we chose the Kolmogorov-Smirnov (KS) test (Massey, 1951) to test these distributional shifts between observational and interventional data. Additionally, we used the Benjamini-Hochberg procedure to correct the p-values for multiple testing (Benjamini & Hochberg, 1995).
50
+
51
+ To formulate the new score used by BetterBoost to rank the impact of gene $i$ on gene $j$ , we write ${G}_{i,j}$ for the predictive score of gene $i$ on gene $j$ computed by GRNBoost, and ${p}_{i,j}$ for the Benjamini-Hochberg-corrected KS-test p-value of the impact of knocking down gene $i$ on gene $j$ . If no interventional data was available for $i$ , we set all p-values ${p}_{i, * }$ to 0.05, so as to neither strongly accept nor reject hypotheses for these interactions. We then define the score ${B}_{i,j} = \left( {-{p}_{i,j},{G}_{i,j}}\right)$ , which we sort from larger to smaller (in lexicographic order).
52
+
53
+ For some desired number of edges $K$ , BetterBoost returns the ${K}_{B} \mathrel{\text{:=}} \min \left( {K,\left| \left\{ {\left( {i,j}\right) : {B}_{i,j}\lbrack 0\rbrack \geq - {0.05}}\right\} \right| }\right)$ candidate edges with the largest ${B}_{i,j}$ scores, i.e., the edges with acceptable p-values. The ${K}_{B}$ candidate edges will have the smallest KS-test p-values up to 0.05, which can include gene pairs for which no interventional data and hence no p-value was available. Since the p-values of these gene pairs were set to 0.05, in practice this ranking favors the edges of pairs with small p-values (obtained from combined interventional and observational data), followed by the edges with the highest GRNBoost scores ${G}_{i,j}$ (from observational data only). Typically, this results in more of the final edges being chosen by p-value than by GRNBoost score as more labeled interventional data becomes available.
54
+
55
+ Table 1: Average Wasserstein Distance of Methods on RPE1 Perturb-seq dataset
56
+
57
+ $\mathbf{{Method}}$ $\rho = 0$ $\rho = {0.25}$ $\rho = {0.5}$ $\rho = {0.75}$ $\rho = {1.0}$
+
+ DCDI 0.126 0.126 0.127 0.125 0.130
+
+ GRNBoost 0.115 0.106 0.106 0.106 0.106
+
+ GRNBoost-1000 0.151 0.147 0.146 0.146 0.145
+
+ BetterBoost 0.151 0.398 0.531 0.599 0.636
74
+
75
+ § 3 RESULTS
76
+
77
+ We compared BetterBoost to the two suggested baseline methods, GRNBoost and DCDI, on the RPE1 perturbational data from (Replogle et al., 2022). The methods were evaluated with varying fractions of available labeled interventional data, ranging from 0 to 1.0 . To comply with the challenge requirements, we chose to return $K = {1000}$ edges for the challenge. By default, GRNBoost returns all edges with non-zero importance, so we additionally tested a variant of GRNBoost that only returns the 1000 top-importance edges.
78
+
79
+ We found that for every fraction of labeled interventional data $\rho$ considered, BetterBoost improved significantly on the average Wasserstein metric. Additionally, we found that the metric increased monotonically with $\rho$ , as shown in Table 1.
80
+
81
+ Remark: We have not tuned DCDI; the reported results are from running the provided baseline.
82
+
83
+ § 4 DISCUSSION
84
+
85
+ Our proposed method, BetterBoost, utilizes labeled interventional data to identify the true causal parents of a given observed variable by looking at the effects of interventions on candidate parents. BetterBoost significantly outperforms leading methods GRNBoost and DCDI on provided sample data according to the challenge metric, average Wasserstein distance. In conclusion, our results suggest that BetterBoost is a promising gene regulatory network inference method.
86
+
87
+ BetterBoost can be extended for future work to consider the invariance property of causal relationships mentioned previously. Currently, if a chain of strong causal effects exists, ${\mathbf{x}}_{i} \rightarrow {\mathbf{x}}_{j} \rightarrow {\mathbf{x}}_{k}$ , BetterBoost will likely assign an edge from ${\mathbf{x}}_{i} \rightarrow {\mathbf{x}}_{k}$ . However, if the interventional data on ${\mathbf{x}}_{j}$ is present and labeled, one can identify that an edge does not exist between ${\mathbf{x}}_{i}$ and ${\mathbf{x}}_{k}$ . This scenario also exposes a shortcoming of the average Wasserstein metric, which would not penalize the presence of such an edge in the inferred graph.
papers/GSK/GSK 2023/GSK 2023 CBC/hFx9EUs320I/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,111 @@
1
+ # CAUSALBENCH CHALLENGE 2023 - MINOR IMPROVEMENTS TO THE DIFFERENTIABLE CAUSAL DISCOVERY FROM INTERVENTIONAL DATA MODEL
2
+
3
+ Anonymous authors
4
+
5
+ Paper under double-blind review
6
+
7
+ ## Abstract
8
+
9
+ For the creation of new drugs, understanding how genes interact with one another is crucial. By looking at gene-gene interactions, researchers can find new potential drugs that could be utilised to treat diseases. In the past, studying gene-gene interactions at scale proved challenging, as it required measuring the expression of thousands or even millions of genes in individual cells. Recently, high-throughput sequencing technology has made it possible to detect gene expression at this level. These advances have led to the development of new methods for inferring causal gene-gene interactions. These methods use single-cell gene expression data to identify genes that are statistically associated with each other. However, it is difficult to ensure that these associations are causal rather than simply correlational. The CausalBench Challenge therefore seeks to improve our ability to understand the causal relationships between genes by advancing the state of the art in inferring gene-gene networks from large-scale real-world perturbational single-cell datasets. This information can be used to develop new drugs and treatments for diseases. The main goal of the challenge is to improve one of two existing methods for inferring gene-gene networks from such datasets: GRNBoost or Differentiable Causal Discovery from Interventional Data (DCDI). This paper describes three small improvements to the DCDI baseline.
10
+
11
+ ## 1 INTRODUCTION
12
+
13
+ Causal inference is a fundamental problem in science. Experiments are conducted in all fields of research to understand the underlying causal dynamics of systems. This is motivated by the desire to take actions that induce a controlled change in a system. However, studying causality in real-world environments is often difficult because it generally requires either the ability to intervene and observe outcomes under both interventional and control conditions, or the use of strong and untestable assumptions that cannot be verified from observational data alone.
14
+
15
+ To address these problems, CausalBench (Chevalley et al., 2022) was introduced. CausalBench is a comprehensive benchmark suite for evaluating network inference methods on perturbational single-cell RNA sequencing data. It includes two curated, openly available datasets with over 200,000 interventional samples each, a set of meaningful benchmark metrics, and baseline implementations of relevant state-of-the-art methods. The CausalBench challenge also provides two baseline methods for inferring causal relationships: GRNBoost (Aibar et al., 2017) and DCDI (Brouillard et al., 2020), and proposes improving one of the two algorithms. GRNBoost is a method for inferring gene regulatory networks from observational data; it can be improved by using interventional data. DCDI is a method for inferring gene regulatory networks from interventional data; it can be improved by tuning its parameters and by using more data. In this work, I chose to modify the DCDI baseline and apply three small modifications to the algorithm, which are introduced in Section 2.
16
+
17
+ ## 2 METHODOLOGY
18
+
19
+ ### 2.1 GREEDY PARTITIONING ALGORITHM
20
+
21
+ In the baseline implementation of DCDI, the genes were partitioned into random independent sub-graphs, since DCDI cannot handle the full graph: it does not scale well in the number of nodes. This partitioning scheme sacrifices possible causal links between genes in different sub-graphs to make the DCDI algorithm more tractable. So, to minimize the loss of valid causal links, we need the partitioning algorithm to group the genes such that the genes in each sub-graph are as related to each other as possible. The basic idea of the developed partitioning algorithm is to define a measure of relationship between every pair of genes (adj in the code below); then, after initializing the sub-graphs with random genes, we divide the genes into partitions using a greedy algorithm, i.e., a gene is assigned to the sub-graph where it has the maximum total relationship with the genes already assigned.
22
+
23
+ The Greedy partitioning algorithm:
24
+
25
+ ```python
+ import random
+
+ import numpy as np
+ from sklearn.preprocessing import normalize
+
+ def greedy_partition(gene_names, expression_matrix, gene_partition_sizes):
+     # gene_partition_sizes was self.gene_partition_sizes in the original method.
+     # Initialize the algorithm parameters.
+     indices = list(range(len(gene_names)))
+     partition_length = int(len(indices) / gene_partition_sizes)  # number of partitions
+     used = [False for _ in range(len(indices))]
+     random.shuffle(indices)
+
+     # Initialize the gene-gene relationship matrix: L2-normalized
+     # co-expression pattern similarities.
+     adj = (expression_matrix > 0).astype(int)
+     adj = normalize(adj, norm='l2', axis=0)
+     adj = np.matmul(np.transpose(adj), adj)
+
+     # Initialize the partitions with random genes.
+     partitions = []
+     for i in range(partition_length):
+         partitions = partitions + [[indices[i]]]
+         used[indices[i]] = True
+
+     # Greedily assign each remaining gene to the partition where it has
+     # the maximum total relationship with the genes already assigned.
+     while not all(used):
+         for i in range(partition_length):
+             if all(used):
+                 break
+             max_dist, max_ind = -1, -1
+             for j in range(len(indices)):
+                 if not used[indices[j]]:
+                     dist = 0
+                     for k in partitions[i]:
+                         dist = dist + adj[k, indices[j]]  # relationship to candidate gene
+                     if dist > max_dist:
+                         max_dist = dist
+                         max_ind = indices[j]
+             partitions[i] = partitions[i] + [max_ind]
+             used[max_ind] = True
+
+     # Return the partitions.
+     return partitions
+ ```
92
+
93
+ ### 2.2 AUGMENTING THE DATA
94
+
95
+ In this work, the data is augmented to double its original size. The augmentation algorithm is simple: randomly select two samples with the same intervention, average them, and add the result as a new sample; a sketch is given below.
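+
+ A sketch of this augmentation (array names are assumptions):
+
+ ```python
+ import numpy as np
+
+ def augment(expression_matrix, interventions, seed=0):
+     # For each original sample, draw two cells that received the same
+     # intervention, average them, and append the result as a new sample.
+     rng = np.random.default_rng(seed)
+     interventions = np.asarray(interventions, dtype=object)
+     new_rows, new_labels = [], []
+     for label in set(interventions.tolist()):
+         rows = np.where(interventions == label)[0]
+         for _ in range(len(rows)):
+             a, b = rng.choice(rows, size=2, replace=True)
+             new_rows.append((expression_matrix[a] + expression_matrix[b]) / 2)
+             new_labels.append(label)
+     X = np.vstack([expression_matrix, np.asarray(new_rows)])
+     y = np.concatenate([interventions, np.asarray(new_labels, dtype=object)])
+     return X, y
+ ```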
96
+
97
+ ### 2.3 THE DEEP SIGMOIDAL FLOW MODEL PARAMETER TUNING
98
+
99
+ In the baseline model, the sigmoidal flow has two conditional layers with 15 dimensions each and two flow layers with 10 dimensions each. However, a rule of thumb is that each parameter in a neural network needs about 25 examples to be trained well and to produce similar results across multiple runs, so the dimensions of the conditional and flow layers are set according to a simple heuristic with upper and lower bounds. The heuristic is $X = \sqrt{\operatorname{len}(\text{interventions})/{25}/3/2/2/{1.5}}$ ; the dimension of the conditional layers is set to $\min \left( {{18},\max \left( {5,\operatorname{round}\left( {{1.5}X}\right) }\right) }\right)$ and the dimension of the flow layers to $\min \left( {{12},\max \left( {3,\operatorname{round}\left( X\right) }\right) }\right)$ .
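+
+ A reading of this heuristic as code (the argument name is an assumption; we take len(interventions) to be the number of interventional samples):
+
+ ```python
+ import math
+
+ def flow_dimensions(num_interventional_samples: int):
+     x = math.sqrt(num_interventional_samples / 25 / 3 / 2 / 2 / 1.5)
+     conditional_dim = min(18, max(5, round(1.5 * x)))
+     flow_dim = min(12, max(3, round(x)))
+     return conditional_dim, flow_dim
+ ```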
100
+
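+ As a sketch, the heuristic can be written directly in Python (assuming `num_samples` is the number of interventional samples used to fit the flow; the names are illustrative):
+
+ ```python
+ import math
+
+ def flow_layer_dims(num_samples):
+     """Heuristic layer sizes for the deep sigmoidal flow, with bounds."""
+     x = math.sqrt(num_samples / 25 / 3 / 2 / 2 / 1.5)
+     cond_dim = min(18, max(5, round(1.5 * x)))
+     flow_dim = min(12, max(3, round(x)))
+     return cond_dim, flow_dim
+
+ # e.g. with 10,000 samples: x ~= 4.71, so cond_dim = 7 and flow_dim = 5
+ ```
+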
101
+ ## 3 CONCLUSION AND FUTURE WORK
102
+
103
+ In this work, three minor improvements to the DCDI baseline were introduced: a new partitioning algorithm for the genes, a data augmentation scheme, and parameter selection formulas for the deep sigmoidal flow model. These modifications improved the performance of the DCDI baseline on the public test set. For future work, different measures of relationship between genes could be explored in the partitioning algorithm, and a tractable, more nearly optimal partitioning algorithm could be derived in place of the proposed greedy one.
104
+
105
+ ## REFERENCES
106
+
107
+ Sara Aibar, Carmen Bravo González-Blas, Thomas Moerman, Vân Anh Huynh-Thu, Hana Imrichova, Gert Hulselmans, Florian Rambow, Jean-Christophe Marine, Pierre Geurts, Jan Aerts, et al. SCENIC: single-cell regulatory network inference and clustering. Nature Methods, 14(11):1083-1086, 2017.
108
+
109
+ Philippe Brouillard, Sébastien Lachapelle, Alexandre Lacoste, Simon Lacoste-Julien, and Alexandre Drouin. Differentiable causal discovery from interventional data. Advances in Neural Information Processing Systems, 33:21865-21877, 2020.
110
+
111
+ Mathieu Chevalley, Yusuf Roohani, Arash Mehrjou, Jure Leskovec, and Patrick Schwab. CausalBench: A large-scale benchmark for network inference from single-cell perturbation data. arXiv preprint arXiv:2210.17283, 2022.
papers/GSK/GSK 2023/GSK 2023 CBC/hFx9EUs320I/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,99 @@
1
+ § @CAUSALBENCH CHALLENGE 2023 - MINOR IM- PROVEMENTS TO THE DIFFERENTIABLE CAUSAL DIS- COVERY FROM INTERVENTIONAL DATA MODEL
2
+
3
+ Anonymous authors
4
+
5
+ Paper under double-blind review
6
+
7
+ § ABSTRACT
8
+
9
+ For the creation of new drugs, understanding how genes interact with one another is crucial. Researchers can find new potential drugs that could be utilised to treat diseases by looking at gene-gene interactions. Scale-based research on gene-gene interactions proved challenging in the past. It was necessary to measure the expression of thousands or even millions of genes in each individual cell. Recently, high-throughput sequencing technology has made it possible to detect gene expression at this level. These advances have led to the development of new methods for inferring causal gene-gene interactions. These methods use single-cell gene expression data to identify genes that are statistically associated with each other. However, it is difficult to ensure that these associations are causal, rather than simply correlated. So, the CausalBench Challenge seeks to improve our ability to understand the causal relationships between genes by advancing the state-of-the-art in inferring gene-gene networks from large-scale real-world perturbational single-cell datasets. This information can be used to develop new drugs and treatments for diseases. The main goal of this challenge is to improve one of two existing methods for inferring gene-gene networks from large-scale real-world perturbational single-cell datasets: GRNBoost or Causal Discovery from Interventional Data (DCDI). This paper will describe three small improvements to the DCDI baseline.
10
+
11
+ § 1 INTRODUCTION
12
+
13
+ Causal inference is a fundamental problem in science. Experiments are conducted in all fields of research to understand the underlying causal dynamics of systems. This is motivated by the desire to take actions that induce a controlled change in a system. However, studying causality in real-world environments is often difficult because it generally requires either the ability to intervene and observe outcomes under both interventional and control conditions, or the use of strong and untestable assumptions that cannot be verified from observational data alone.
14
+
15
+ To address this problem, CausalBench Chevalley et al. (2022) was introduced. CausalBench is a comprehensive benchmark suite for evaluating network inference methods on perturbational single-cell RNA sequencing data. It includes two curated, openly available datasets with over 200,000 interventional samples each, a set of meaningful benchmark metrics, and baseline implementations of relevant state-of-the-art methods. The CausalBench challenge provides two baseline methods for inferring causal relationships, GRNBoost Aibar et al. (2017) and DCDI Brouillard et al. (2020), and asks participants to modify one of the two algorithms to improve its performance. GRNBoost infers gene regulatory networks from observational data and could be improved by also exploiting the interventional data; DCDI infers gene regulatory networks from interventional data and could be improved by tuning its parameters and by using more data. In this work, I chose to modify the DCDI baseline and apply three small modifications to the algorithm, which are introduced in section 2.
16
+
17
+ § 2 METHODOLOGY
18
+
19
+ § 2.1 GREEDY PARTITIONING ALGORITHM
20
+
21
+ In the baseline implementation of DCDI, the genes were partitioned into random independent sub-graphs, since DCDI cannot handle the full graph: it does not scale well with the number of nodes. This partitioning scheme sacrifices possible causal links between genes in different sub-graphs to make the DCDI algorithm tractable. To minimize the loss of valid causal links, we therefore want the partitioning algorithm to group the genes such that the genes within each sub-graph are as related to each other as possible. The basic idea of the developed partitioning algorithm is to define a measure of relationship between every pair of genes (adj in the algorithm below); then, after initializing the sub-graphs with random genes, we divide the remaining genes into the partitions greedily, i.e. a gene is assigned to the sub-graph in which it has the maximum total relationship with the genes already assigned to it.
22
+
23
+ The Greedy partitioning algorithm:
24
+
25
+ \begin{verbatim}
+ import random
+ import numpy as np
+ from sklearn.preprocessing import normalize
+
+ # body of the partitioning method; gene_names, expression_matrix and
+ # self.gene_partition_sizes are provided by the surrounding DCDI wrapper
+
+ # initialize the algorithm parameters
+ indices = list(range(len(gene_names)))
+ # number of sub-graphs, given the desired number of genes per sub-graph
+ partition_length = int(len(indices) / self.gene_partition_sizes)
+ used = [False for i in range(len(indices))]
+ random.shuffle(indices)
+
+ # initialize the relationship ("adjacency") matrix: binarize the
+ # expression matrix, L2-normalize each gene column, and take inner
+ # products, so adj[k, l] is the cosine similarity of genes k and l
+ adj = (expression_matrix > 0).astype(int)
+ adj = normalize(adj, norm='l2', axis=0)
+ adj = np.matmul(np.transpose(adj), adj)
+
+ # initialize each partition with one random gene
+ partitions = []
+ for i in range(partition_length):
+     partitions = partitions + [[indices[i]]]
+     used[indices[i]] = True
+
+ # divide the remaining genes into the partitions greedily
+ while not all(used):
+     for i in range(partition_length):
+         if all(used):
+             break
+         max_dist, max_ind = -1, -1
+         for j in range(len(indices)):
+             if not used[indices[j]]:
+                 # total relationship between candidate gene indices[j]
+                 # and the genes already in partition i
+                 dist = 0
+                 for k in partitions[i]:
+                     dist = dist + adj[k, indices[j]]
+                 if dist > max_dist:
+                     max_dist = dist
+                     max_ind = indices[j]
+         partitions[i] = partitions[i] + [max_ind]
+         used[max_ind] = True
+
+ # return the partitions
+ return partitions
+ \end{verbatim}
88
+
89
+ § 2.2 AUGMENTING THE DATA
90
+
91
+ In this work, the dataset is augmented to double its original size. The augmentation scheme is simple: randomly select two samples that received the same intervention, average them, and add the average as a new sample.
92
+
93
+ § 2.3 THE DEEP SIGMOIDAL FLOW MODEL PARAMETER TUNING
94
+
95
+ In the baseline model, the sigmoidal flow has two conditional layers with 15 dimensions each and two flow layers with 10 dimensions each. However, a common rule of thumb is that each trainable variable in a neural network needs about 25 examples to be trained well and to produce similar results across multiple runs, so the dimensions of the conditional and flow layers are instead set according to a simple heuristic with upper and lower bounds. The heuristic computes $X = \sqrt{\text{ len(intervention) }/25/3/2/2/1.5}$ ; the dimension of the conditional layers is then set to $\min \left( 18, \max \left( 5, \operatorname{round}\left( 1.5 X \right) \right) \right)$ , and the dimension of the flow layers to $\min \left( 12, \max \left( 3, \operatorname{round}\left( X \right) \right) \right)$ .
96
+
97
+ § 3 CONCLUSION AND FUTURE WORK
98
+
99
+ In this work, three minor improvements to the DCDI baseline were introduced: a new partitioning algorithm for the genes, a data augmentation scheme, and parameter selection formulas for the deep sigmoidal flow model. These modifications improved the performance of the DCDI baseline on the public test set. For future work, different measures of relationship between genes could be explored in the partitioning algorithm, and a tractable, more nearly optimal partitioning algorithm could be derived in place of the proposed greedy one.
papers/GSK/GSK 2023/GSK 2023 CBC/hYT_pgTxjrR/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,47 @@
1
+ ## Challenge Report
2
+
3
+ Anonymous authors
4
+
5
+ Paper under double-blind review
6
+
7
+ ## Abstract
8
+
9
+ This brief report describes an approach to modifying the graph inference function for the CausalBench challenge, which is based on the dataset described in Chevalley et al. (2022).
10
+
11
+ ## 1 METHOD OVERVIEW
12
+
13
+ We observe that for the DCDI algorithm the genes are partitioned into small groups and the algorithm is applied to those groups independently. Even for a rather small number of genes, say 500, and 50 nodes per partition element, the probability that a specific edge lies within a single partition element is only ${10}\%$ . A good choice of partitions can therefore greatly increase the number of suitable candidate edges the DCDI algorithm can potentially find. We thus constructed clusterings based on similarities of genes which might indicate closeness in the causal structure and therefore potential graph edges, and then ran DCDI on the individual clusters.
14
+
15
+ We consider two suggestions to obtain the clustering. First we defined
16
+
17
+ $$
18
+ d\left( {k, l}\right) = 1 - \operatorname{corrcoef}\left( {{X}_{k},{X}_{l}}\right) \tag{1}
19
+ $$
20
+
21
+ where ${X}_{k}$ and ${X}_{l}$ are the expressions of genes $k$ and $l$ . Then we used spectral clustering based on $d$ . We fixed the average cluster size ${n}_{\text{avg }}$ and the maximal cluster size ${n}_{\max }$ , and split overly large clusters randomly into two subclusters. In addition we considered the mean shifts between environments, given by
22
+
23
+ $$
24
+ {\mu }_{k}^{\left( i\right) } = {\mathbb{E}}^{\left( i\right) }\left( {X}_{k}\right) - \mathbb{E}\left( {X}_{k}\right) \tag{2}
25
+ $$
26
+
27
+ where $i$ denotes interventional distribution $i$ . For each gene $k$ we thresholded
28
+
29
+ $$
30
+ {s}_{k}^{\left( i\right) } = \left\{ \begin{array}{ll} 1 & \text{ if }\left| {\mu }_{k}^{\left( i\right) }\right| \text{ is larger than }{90}\% \text{ of the }\left| {\mu }_{k}^{\left( j\right) }\right| , \\ 0 & \text{ else. } \end{array}\right. \tag{3}
31
+ $$
32
+
33
+ We define the similarity matrix ${S}_{kl} = \mathop{\sum }\limits_{i}{s}_{k}^{\left( i\right) }{s}_{l}^{\left( i\right) }$ . Then we construct partitions by randomly selecting a cluster seed (a randomly selected gene ${k}_{1}$ ), setting ${C}^{1} = \left\{ {k}_{1}\right\}$ , and then greedily adding nodes ${k}_{n + 1}$ to the cluster ${C}^{n}$ such that ${k}_{n + 1} \in {\operatorname{argmax}}_{l}\mathop{\sum }\limits_{{{k}_{i} \in {C}^{n}}}{S}_{{k}_{i}l}$ , until a fixed cluster size ${n}_{\text{avg }}$ is reached. This is repeated 3 times (a sketch of both clustering constructions is given below). Again, the rationale is that genes whose expression levels change in a similar pattern under the provided interventional data might be close in the causal graph.
34
+
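+ As an illustrative sketch of the two constructions (not the exact implementation; `X` is assumed to be a cells-by-genes expression matrix, `env` holds each cell's intervention label, and the cluster-size capping is omitted for brevity):
+
+ ```python
+ import numpy as np
+ from sklearn.cluster import SpectralClustering
+
+ def correlation_clusters(X, n_clusters):
+     """Spectral clustering of genes; affinity = corrcoef = 1 - d from Eq. (1)."""
+     affinity = np.clip(np.corrcoef(X.T), 0.0, 1.0)  # clip negatives for validity
+     labels = SpectralClustering(n_clusters=n_clusters,
+                                 affinity="precomputed").fit_predict(affinity)
+     return [list(np.flatnonzero(labels == c)) for c in range(n_clusters)]
+
+ def mean_shift_partition(X, env, n_avg, rng):
+     """One partitioning grown greedily from random seeds via the S matrix."""
+     envs = [e for e in np.unique(env) if e != "non-targeting"]
+     mu = np.stack([X[env == e].mean(axis=0) - X.mean(axis=0) for e in envs])
+     # s[i, k] = 1 iff |mu_k^(i)| exceeds 90% of gene k's shifts, as in Eq. (3)
+     s = (np.abs(mu) > np.percentile(np.abs(mu), 90, axis=0)).astype(int)
+     S = s.T @ s                                   # the S_kl of this report
+     unused, partition = set(range(X.shape[1])), []
+     while unused:
+         cluster = [int(rng.choice(sorted(unused)))]
+         unused.discard(cluster[0])
+         while len(cluster) < n_avg and unused:
+             best = max(unused, key=lambda l: S[cluster, l].sum())
+             cluster.append(best)
+             unused.discard(best)
+         partition.append(cluster)
+     return partition
+ ```
+
+ The three greedy partitionings plus the spectral one then give the four gene groupings on which DCDI is run.
+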
35
+ For the total of 4 partitions we run the DCDI algorithm and then threshold the edge probabilities for each run by
36
+
37
+ $$
38
+ p \rightarrow \operatorname{RELU}\left( {p - {.5}}\right) , \tag{4}
39
+ $$
40
+
41
+ i.e., we keep the information about edges with probability at least 0.5 as predicted by the algorithm. In the end we add up the thresholded edge probabilities over the four partitions (this favours edges whose endpoints end up in the same cluster for several partitions, which is intended). The 2000 edges with the highest aggregated probabilities are returned by the algorithm. We chose ${n}_{\text{avg }} = {30}$ as the average (or fixed) cluster size and ${n}_{\max } = {50}$ as the maximal size, after the script crashed for larger partition sizes.
42
+
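+ The aggregation step can be sketched as follows (assuming each DCDI run yields a genes-by-genes matrix of edge probabilities, zero outside its own clusters; the names are illustrative):
+
+ ```python
+ import numpy as np
+
+ def aggregate_top_edges(prob_matrices, n_edges=2000):
+     """Sum RELU(p - 0.5)-thresholded probabilities over runs; keep top edges."""
+     agg = sum(np.maximum(p - 0.5, 0.0) for p in prob_matrices)  # Eq. (4)
+     np.fill_diagonal(agg, 0.0)                                  # no self-edges
+     order = np.argsort(agg, axis=None)[::-1][:n_edges]
+     return [tuple(map(int, np.unravel_index(i, agg.shape))) for i in order]
+ ```
+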
43
+ All other parts of the inference function remained the same as in the provided code.
44
+
45
+ ## REFERENCES
46
+
47
+ Mathieu Chevalley, Yusuf Roohani, Arash Mehrjou, Jure Leskovec, and Patrick Schwab. CausalBench: A large-scale benchmark for network inference from single-cell perturbation data. arXiv preprint arXiv:2210.17283, 2022.
papers/GSK/GSK 2023/GSK 2023 CBC/hYT_pgTxjrR/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,43 @@
1
+ § CHALLENGE REPORT
2
+
3
+ Anonymous authors
4
+
5
+ Paper under double-blind review
6
+
7
+ § ABSTRACT
8
+
9
+ This brief report describes an approach to modifying the graph inference function for the CausalBench challenge, which is based on the dataset described in Chevalley et al. (2022).
10
+
11
+ § 1 METHOD OVERVIEW
12
+
13
+ We observe that for the DCDI algorithm the genes are partitioned into small groups and the algorithm is applied to those groups independently. Even for a rather small number of genes, say 500, and 50 nodes per partition element, the probability that a specific edge lies within a single partition element is only ${10}\%$ . A good choice of partitions can therefore greatly increase the number of suitable candidate edges the DCDI algorithm can potentially find. We thus constructed clusterings based on similarities of genes which might indicate closeness in the causal structure and therefore potential graph edges, and then ran DCDI on the individual clusters.
14
+
15
+ We consider two suggestions to obtain the clustering. First we defined
16
+
17
+ $$
18
+ d\left( {k,l}\right) = 1 - \operatorname{corrcoef}\left( {{X}_{k},{X}_{l}}\right) \tag{1}
19
+ $$
20
+
21
+ where ${X}_{k}$ and ${X}_{l}$ are the expressions of genes $k$ and $l$ . Then we used spectral clustering based on $d$ . We fixed the average cluster size ${n}_{\text{ avg }}$ and the maximal cluster size ${n}_{\max }$ , and split overly large clusters randomly into two subclusters. In addition we considered the mean shifts between environments, given by
22
+
23
+ $$
24
+ {\mu }_{k}^{\left( i\right) } = {\mathbb{E}}^{\left( i\right) }\left( {X}_{k}\right) - \mathbb{E}\left( {X}_{k}\right) \tag{2}
25
+ $$
26
+
27
+ where $i$ denotes interventional distribution $i$ . For each gene $k$ we thresholded
28
+
29
+ $$
30
+ {s}_{k}^{\left( i\right) } = \left\{ \begin{array}{ll} 1 & \text{ if }\left| {\mu }_{k}^{\left( i\right) }\right| \text{ is larger than }{90}\% \text{ of the }\left| {\mu }_{k}^{\left( j\right) }\right| , \\ 0 & \text{ else. } \end{array}\right. \tag{3}
31
+ $$
32
+
33
+ We define the similarity matrix ${S}_{kl} = \mathop{\sum }\limits_{i}{s}_{k}^{\left( i\right) }{s}_{l}^{\left( i\right) }$ . Then we construct partitions by randomly selecting a cluster seed (a randomly selected gene ${k}_{1}$ ), setting ${C}^{1} = \left\{ {k}_{1}\right\}$ , and then greedily adding nodes ${k}_{n + 1}$ to the cluster ${C}^{n}$ such that ${k}_{n + 1} \in {\operatorname{argmax}}_{l}\mathop{\sum }\limits_{{{k}_{i} \in {C}^{n}}}{S}_{{k}_{i}l}$ , until a fixed cluster size ${n}_{\text{ avg }}$ is reached. This is repeated 3 times. Again, the rationale is that genes whose expression levels change in a similar pattern under the provided interventional data might be close in the causal graph.
34
+
35
+ For the total of 4 partitions we run the DCDI algorithm and then threshold the edge probabilities for each run by
36
+
37
+ $$
38
+ p \rightarrow \operatorname{RELU}\left( {p - {.5}}\right) , \tag{4}
39
+ $$
40
+
41
+ i.e., we keep the information about edges with probability at least 0.5 as predicted by the algorithm. In the end we add up the thresholded edge probabilities over the four partitions (this favours edges whose endpoints end up in the same cluster for several partitions, which is intended). The 2000 edges with the highest aggregated probabilities are returned by the algorithm. We chose ${n}_{\text{ avg }} = {30}$ as the average (or fixed) cluster size and ${n}_{\max } = {50}$ as the maximal size, after the script crashed for larger partition sizes.
42
+
43
+ All other parts of the inference function remained the same as in the provided code.
papers/GSK/GSK 2023/GSK 2023 CBC/nB9zUwS2gpI/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,69 @@
1
+ # A SUPERVISED LIGHTGBM-BASED APPROACH TO THE GSK.AI CAUSALBENCH CHALLENGE (ICLR 2023) TEAM GUANLAB REPORT SUBMISSIONS
2
+
3
+ Anonymous authors
4
+
5
+ Paper under double-blind review
6
+
7
+ ## Abstract
8
+
9
+ In this challenge, we transformed the task of detecting gene pairs with causal relationships into a supervised learning problem. We constructed a dataset for all gene pairs, with initial labels determined by gene expression correlations. A LightGBM model was trained and applied to the same data for prediction. The top 1001 pairs with the highest prediction scores were selected. In local experiments, this solution achieved a 0.3779 AUC score in the RPE1 data and a 0.3265 score in the K562 data.
10
+
11
+ ## 1 NOTATIONS
12
+
13
+ In addition to standard notations, we defined several custom notations listed below to describe the method more efficiently.
14
+
15
+ $\left\langle {{\mathbf{g}}_{i},{\mathbf{g}}_{j}}\right\rangle$ A directed gene pair from ${\mathbf{g}}_{i}$ to ${\mathbf{g}}_{j}$
16
+
17
+ ${M}_{{g}_{i},{g}_{j}}$ Select the rows for ${\mathbf{g}}_{i}$ and the columns for ${\mathbf{g}}_{j}$ from the expression matrix $\mathbf{M}$
18
+
19
+ ${\mu }_{M,0}$ The column-wise mean value of the expression matrix
20
+
21
+ ${\sigma }_{M,0}$ The column-wise standard deviation of the expression matrix
22
+
23
+ ## 2 METHODS
24
+
25
+ ### 2.1 CALCULATE THE CORRELATIONS
26
+
27
+ We calculated correlations for all possible gene pairs $\left\langle {{\mathbf{g}}_{i},{\mathbf{g}}_{j}}\right\rangle$ , where ${\mathbf{g}}_{i}$ and ${\mathbf{g}}_{j}$ belonged to the columns of the expression matrix ${M}_{k \times l}$ and $i \neq j$ . The input expression data were the concatenation of the interventional data $\left( {{\mathbf{M}}_{{\mathbf{g}}_{i},{\mathbf{g}}_{i}},{\mathbf{M}}_{{\mathbf{g}}_{i},{\mathbf{g}}_{j}}}\right)$ and samples from the observational data $\left( {{\mathbf{M}}_{\text{non-targeting },{\mathbf{g}}_{i}},{\mathbf{M}}_{\text{non-targeting },{\mathbf{g}}_{j}}}\right)$ . The observational samples had the same length as the interventional data. If ${\mathbf{g}}_{i}$ -related cells were not present in the expression matrix due to partial selection, the input data would be ${\mathbf{M}}_{\text{non-targeting },{\mathbf{g}}_{i}}$ and ${\mathbf{M}}_{\text{non-targeting },{\mathbf{g}}_{j}}$ only. The resulting correlation matrix was asymmetric and had the shape $(l, l)$ .
28
+
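+ A sketch of this computation for a single directed pair (the names are illustrative; `M` is assumed to be the cells-by-genes matrix, `inter_rows` indexes the cells intervened on ${\mathbf{g}}_{i}$ , and `obs_rows` indexes the non-targeting cells):
+
+ ```python
+ import numpy as np
+
+ def pair_correlation(M, inter_rows, obs_rows, gi, gj, rng):
+     """Directed-pair correlation from g_i-intervened cells plus matched controls."""
+     if len(inter_rows) > 0:
+         obs = rng.choice(obs_rows, size=len(inter_rows), replace=False)
+         xi = np.concatenate([M[inter_rows, gi], M[obs, gi]])
+         xj = np.concatenate([M[inter_rows, gj], M[obs, gj]])
+     else:  # g_i cells absent under partial selection: observational data only
+         xi, xj = M[obs_rows, gi], M[obs_rows, gj]
+     return np.corrcoef(xi, xj)[0, 1]
+ ```
+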
29
+ ### 2.2 CONSTRUCT THE DATASET
30
+
31
+ The initial labels of gene pairs were determined using a correlation threshold $T$ : pairs with correlation scores higher than $T$ (0.1 in our final submission) were labeled as positive samples. To generate the features, we first normalized the expression matrix using $\left( {\mathbf{M} - \overline{{\mu }_{\mathbf{M},0}}}\right) /\overline{{\sigma }_{\mathbf{M},0}}$ . For each gene pair $\left\langle {{\mathbf{g}}_{i},{\mathbf{g}}_{j}}\right\rangle$ , we extracted four features from the matrix: $\overline{{\mathbf{M}}_{\text{non-targeting },{g}_{i}}},\overline{{\mathbf{M}}_{\text{non-targeting },{g}_{j}}}$ (average observational expression of ${\mathbf{g}}_{i}$ and ${\mathbf{g}}_{j}$ ), and $\overline{{\mathbf{M}}_{{\mathbf{g}}_{i},{\mathbf{g}}_{i}}},\overline{{\mathbf{M}}_{{\mathbf{g}}_{i},{\mathbf{g}}_{j}}}$ (average expression under intervention on ${\mathbf{g}}_{i}$ ). If ${\mathbf{g}}_{i}$ -related cells were missing in the expression matrix, the last two features would be 0 and NaN. The output dataset has $l \times \left( {l - 1}\right)$ rows and 5 columns.
+
+ Table 1: LightGBM hyper-parameters
+
+ <table><tr><td>Parameter</td><td>Value</td></tr><tr><td>boosting_type</td><td>gbdt</td></tr><tr><td>objective</td><td>binary</td></tr><tr><td>metric</td><td>binary_logloss</td></tr><tr><td>num_leaves</td><td>5</td></tr><tr><td>max_depth</td><td>2</td></tr><tr><td>min_data_in_leaf</td><td>5</td></tr><tr><td>learning_rate</td><td>0.05</td></tr><tr><td>min_gain_to_split</td><td>0.01</td></tr><tr><td>num_iterations</td><td>1000</td></tr></table>
38
+
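+ A sketch of the dataset construction (illustrative names; `M_norm` is the normalized matrix, `corr` the asymmetric correlation matrix, and `inter_rows` maps each gene to the indices of its intervened cells):
+
+ ```python
+ import itertools
+ import numpy as np
+
+ def build_dataset(M_norm, obs_rows, inter_rows, corr, T=0.1):
+     """One labeled row per directed pair <g_i, g_j>: label plus four features."""
+     l = M_norm.shape[1]
+     rows = []
+     for gi, gj in itertools.permutations(range(l), 2):
+         label = int(corr[gi, gj] > T)
+         obs_i = M_norm[obs_rows, gi].mean()
+         obs_j = M_norm[obs_rows, gj].mean()
+         ri = inter_rows.get(gi, [])
+         int_i = M_norm[ri, gi].mean() if len(ri) else 0.0     # 0 when missing
+         int_j = M_norm[ri, gj].mean() if len(ri) else np.nan  # NaN when missing
+         rows.append([label, obs_i, obs_j, int_i, int_j])
+     return np.array(rows)  # l * (l - 1) rows
+ ```
+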
39
+ ### 2.3 TRAIN THE MODEL AND PREDICT
40
+
41
+ The LightGBM model was set up using the hyperparameters listed in Table 1 and trained on the entire dataset. Predictions were obtained by applying the model to the same data used for training. We selected the top 1001 gene pairs with the highest prediction scores as our final outputs.
42
+
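+ A minimal training-and-ranking sketch with these hyper-parameters (LightGBM handles the NaN feature values natively; `features` and `labels` are assumed to come from the dataset construction above):
+
+ ```python
+ import numpy as np
+ import lightgbm as lgb
+
+ params = {
+     "boosting_type": "gbdt", "objective": "binary", "metric": "binary_logloss",
+     "num_leaves": 5, "max_depth": 2, "min_data_in_leaf": 5,
+     "learning_rate": 0.05, "min_gain_to_split": 0.01,
+ }
+
+ def train_and_rank(features, labels, n_pairs=1001):
+     model = lgb.train(params, lgb.Dataset(features, label=labels),
+                       num_boost_round=1000)  # num_iterations from Table 1
+     scores = model.predict(features)         # score the training pairs themselves
+     return np.argsort(scores)[::-1][:n_pairs]
+ ```
+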
43
+ ## 3 EXPERIMENTS
44
+
45
+ To determine the details of the training parameters, including the method for initializing positive samples ( $K$ and $T$ ), the number of negative samples ( $R$ ), the number of output gene pairs ( $N$ ), normalization methods, and ensembling, we established two stages of experiments on partial intervention data with one partial seed and five partial seeds.
46
+
47
+ $K$ and $T$ were parameters for selecting positive samples. We labeled the top $K$ correlated pairs or those with scores higher than $T$ as positive samples. In some experiments, we randomly selected $K \times R$ negative samples and trained the model alongside the positive ones. We also attempted to train multiple models for the ensemble by selecting different negative samples. The ensemble prediction scores were the averages from these models.
48
+
49
+ Evaluation scores were AUCs. In the first stage, we observed that the top-performing methods could give conflicting results between K562 and RPE1, with close scores (Table 2). These methods were selected for the second-stage evaluation, where we determined the final submission (Table 3).
50
+
51
+ ## 4 DISCUSSION
52
+
53
+ In summary, we developed a supervised algorithm to solve the unsupervised gene causality prediction problem. Our experiments demonstrated the model's ability to learn the relationships that determined causalities from the expression data and correct false positive and false negative samples from initial labels. The model might benefit from the uncertainty of the initial labels, as including more moderately correlated pairs as positive samples could improve performance. We observed about 0.1 to 0.2 AUC score improvements compared to GRNBoost and DCDI baseline models, in which we also selected the top 1000 pairs as outputs.
54
+
55
+ We attempted to incorporate the correlation matrix into the baseline algorithms. Since GRNBoost had the highest Wasserstein scores when only considering observational data, we first selected 20,000 candidates with the highest feature importance scores from the model trained on observational data and chose 1000 based on correlation scores. However, this approach failed to surpass direct correlation usage. As the number of candidates in the first selection increased, performance approached the correlation results, suggesting that the GRNBoost model might not provide information beyond correlations.
56
+
57
+ Table 2: Performances of 1 partial seed data
58
+
59
+ <table><tr><td>K or T</td><td>$\mathbf{N}$</td><td>$\mathbf{R}$</td><td>Normalize</td><td>Ensemble</td><td>K562</td><td>RPE1</td></tr><tr><td colspan="5">Top 1000 absolute correlation (baseline)</td><td>0.2890</td><td>0.3397</td></tr><tr><td>500</td><td>1000</td><td>2</td><td>/</td><td>/</td><td>0.1861</td><td>0.3040</td></tr><tr><td>2000</td><td>1000</td><td>2</td><td>/</td><td>/</td><td>0.2393</td><td>0.3352</td></tr><tr><td>2000</td><td>1000</td><td>3</td><td>/</td><td>True</td><td>0.2561</td><td>0.3608</td></tr><tr><td>2000</td><td>2000</td><td>3</td><td>/</td><td>/</td><td>0.2278</td><td>0.2767</td></tr><tr><td>5000</td><td>1000</td><td>2</td><td>/</td><td>/</td><td>0.2524</td><td>0.3552</td></tr><tr><td>5000</td><td>1000</td><td>2</td><td>/</td><td>True</td><td>0.2614</td><td>0.3541</td></tr><tr><td>5000</td><td>1000</td><td>3</td><td>/</td><td>True</td><td>0.2635</td><td>0.3598</td></tr><tr><td>7000</td><td>1000</td><td>3</td><td>/</td><td>/</td><td>0.2684</td><td>0.3608</td></tr><tr><td>7000</td><td>1000</td><td>AllNeg</td><td>/</td><td>/</td><td>0.2826</td><td>0.3846</td></tr><tr><td>7000</td><td>1000</td><td>AllNeg</td><td>normalize</td><td>/</td><td>0.3023</td><td>0.3744</td></tr><tr><td>7000</td><td>1000</td><td>AllNeg</td><td>quantile</td><td>/</td><td>0.2843</td><td>0.3768</td></tr><tr><td>0.1</td><td>1000</td><td>AllNeg</td><td>normalize</td><td>/</td><td>0.3148</td><td>/</td></tr><tr><td>0.2</td><td>1000</td><td>AllNeg</td><td>normalize</td><td>/</td><td>0.3072</td><td>/</td></tr></table>
60
+
61
+ Table 3: Performances of 5 partial seeds data
62
+
63
+ <table><tr><td>K or T</td><td>$\mathbf{N}$</td><td>$\mathbf{R}$</td><td>Normalize</td><td>Ensemble</td><td>K562</td><td>RPE1</td></tr><tr><td colspan="5">Top 1000 absolute correlation (baseline)</td><td>0.2922</td><td>0.3255</td></tr><tr><td>5000</td><td>1000</td><td>AllNeg</td><td>/</td><td>/</td><td>0.2930</td><td>0.3672</td></tr><tr><td>5000</td><td>1000</td><td>AllNeg</td><td>normalize</td><td>/</td><td>0.3062</td><td>0.3655</td></tr><tr><td>5000</td><td>1000</td><td>AllNeg</td><td>quantile</td><td>/</td><td>0.2992</td><td>0.3632</td></tr><tr><td>7000</td><td>1000</td><td>AllNeg</td><td>/</td><td>/</td><td>0.2944</td><td>0.3659</td></tr><tr><td>0.1</td><td>1000</td><td>AllNeg</td><td>normalize</td><td>/</td><td>0.3265</td><td>0.3780</td></tr><tr><td>0.2</td><td>1000</td><td>AllNeg</td><td>normalize</td><td>/</td><td>0.3138</td><td>0.3614</td></tr></table>
64
+
65
+ For the DCDI algorithms, we tried replacing the initial adjacency matrix and the Gumbel adjacency matrix with knowledge from the correlation matrix. The improvement over the baseline was nearly 0.1, but the result was still worse than directly using the correlation matrix. Additionally, the algorithm seemed sensitive to the number of nodes: we were unable to increase the number of genes per partition, as the program reported overflow issues.
66
+
67
+ ## AUTHOR CONTRIBUTIONS
68
+
69
+ YG and KD design, implement the algorithm; write, and proofread the report.
papers/GSK/GSK 2023/GSK 2023 CBC/nB9zUwS2gpI/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,171 @@
1
+ § A SUPERVISED LIGHTGBM-BASED APPROACH TO THE GSK.AI CAUSALBENCH CHALLENGE (ICLR 2023) TEAM GUANLAB REPORT SUBMISSIONS
2
+
3
+ Anonymous authors
4
+
5
+ Paper under double-blind review
6
+
7
+ § ABSTRACT
8
+
9
+ In this challenge, we transformed the task of detecting gene pairs with causal relationships into a supervised learning problem. We constructed a dataset for all gene pairs, with initial labels determined by gene expression correlations. A LightGBM model was trained and applied to the same data for prediction. The top 1001 pairs with the highest prediction scores were selected. In local experiments, this solution achieved a 0.3779 AUC score in the RPE1 data and a 0.3265 score in the K562 data.
10
+
11
+ § 1 NOTATIONS
12
+
13
+ In addition to standard notations, we defined several custom notations listed below to describe the method more efficiently.
14
+
15
+ $\left\langle {{\mathbf{g}}_{i},{\mathbf{g}}_{j}}\right\rangle$ A directed gene pair from ${\mathbf{g}}_{i}$ to ${\mathbf{g}}_{j}$
16
+
17
+ ${M}_{{g}_{i},{g}_{j}}$ Select the rows for ${\mathbf{g}}_{i}$ and the columns for ${\mathbf{g}}_{j}$ from the expression matrix $\mathbf{M}$
18
+
19
+ ${\mu }_{M,0}$ The column-wise mean value of the expression matrix
20
+
21
+ ${\sigma }_{M,0}$ The column-wise standard deviation of the expression matrix
22
+
23
+ § 2 METHODS
24
+
25
+ § 2.1 CALCULATE THE CORRELATIONS
26
+
27
+ We calculated correlations for all possible gene pairs $\left\langle {{\mathbf{g}}_{i},{\mathbf{g}}_{j}}\right\rangle$ , where ${\mathbf{g}}_{i}$ and ${\mathbf{g}}_{j}$ belonged to the columns of the expression matrix ${M}_{k \times l}$ and $i \neq j$ . The input expression data were the concatenation of the interventional data $\left( {{\mathbf{M}}_{{\mathbf{g}}_{i},{\mathbf{g}}_{i}},{\mathbf{M}}_{{\mathbf{g}}_{i},{\mathbf{g}}_{j}}}\right)$ and samples from the observational data $\left( {{\mathbf{M}}_{\text{ non-targeting },{\mathbf{g}}_{i}},{\mathbf{M}}_{\text{ non-targeting },{\mathbf{g}}_{j}}}\right)$ . The observational samples had the same length as the interventional data. If ${\mathbf{g}}_{i}$ -related cells were not present in the expression matrix due to partial selection, the input data would be ${\mathbf{M}}_{\text{ non-targeting },{\mathbf{g}}_{i}}$ and ${\mathbf{M}}_{\text{ non-targeting },{\mathbf{g}}_{j}}$ only. The resulting correlation matrix was asymmetric and had the shape $(l, l)$ .
28
+
29
+ § 2.2 CONSTRUCT THE DATASET
30
+
31
+ The initial labels of gene pairs were determined using a correlation threshold $T$ : pairs with correlation scores higher than $T$ (0.1 in our final submission) were labeled as positive samples. To generate the features, we first normalized the expression matrix using $\left( {\mathbf{M} - \overline{{\mu }_{\mathbf{M},0}}}\right) /\overline{{\sigma }_{\mathbf{M},0}}$ . For each gene pair $\left\langle {{\mathbf{g}}_{i},{\mathbf{g}}_{j}}\right\rangle$ , we extracted four features from the matrix: $\overline{{\mathbf{M}}_{\text{ non-targeting },{g}_{i}}},\overline{{\mathbf{M}}_{\text{ non-targeting },{g}_{j}}}$ (average observational expression of ${\mathbf{g}}_{i}$ and ${\mathbf{g}}_{j}$ ), and $\overline{{\mathbf{M}}_{{\mathbf{g}}_{i},{\mathbf{g}}_{i}}},\overline{{\mathbf{M}}_{{\mathbf{g}}_{i},{\mathbf{g}}_{j}}}$ (average expression under intervention on ${\mathbf{g}}_{i}$ ). If ${\mathbf{g}}_{i}$ -related cells were missing in the expression matrix, the last two features would be 0 and NaN. The output dataset has $l \times \left( {l - 1}\right)$ rows and 5 columns.
+
+ Table 1: LightGBM hyper-parameters
+
+ \begin{tabular}{ll}
+ \hline
+ Parameter & Value \\
+ \hline
+ boosting\_type & gbdt \\
+ objective & binary \\
+ metric & binary\_logloss \\
+ num\_leaves & 5 \\
+ max\_depth & 2 \\
+ min\_data\_in\_leaf & 5 \\
+ learning\_rate & 0.05 \\
+ min\_gain\_to\_split & 0.01 \\
+ num\_iterations & 1000 \\
+ \hline
+ \end{tabular}
69
+
70
+ § 2.3 TRAIN THE MODEL AND PREDICT
71
+
72
+ The LightGBM model was set up using the hyperparameters listed in Table 1 and trained on the entire dataset. Predictions were from applying the model to the same data used for training. We selected the top 1001 gene pairs with the highest prediction scores as our final outputs.
73
+
74
+ § 3 EXPERIMENTS
75
+
76
+ To determine the details of the training parameters, including the method for initializing positive samples ( $K$ and $T$ ), the number of negative samples ( $R$ ), the number of output gene pairs ( $N$ ), normalization methods, and ensembling, we established two stages of experiments on partial intervention data with one partial seed and five partial seeds.
77
+
78
+ $K$ and $T$ were parameters for selecting positive samples. We labeled the top $K$ correlated pairs or those with scores higher than $T$ as positive samples. In some experiments, we randomly selected $K \times R$ negative samples and trained the model alongside the positive ones. We also attempted to train multiple models for the ensemble by selecting different negative samples. The ensemble prediction scores were the averages from these models.
79
+
80
+ Evaluation scores were AUCs. In the first stage, we observed that the top-performing methods could give conflicting results between K562 and RPE1, with close scores (Table 2). These methods were selected for the second-stage evaluation, where we determined the final submission (Table 3).
81
+
82
+ § 4 DISCUSSION
83
+
84
+ In summary, we developed a supervised algorithm to solve the unsupervised gene causality prediction problem. Our experiments demonstrated the model's ability to learn the relationships that determined causalities from the expression data and correct false positive and false negative samples from initial labels. The model might benefit from the uncertainty of the initial labels, as including more moderately correlated pairs as positive samples could improve performance. We observed about 0.1 to 0.2 AUC score improvements compared to GRNBoost and DCDI baseline models, in which we also selected the top 1000 pairs as outputs.
85
+
86
+ We attempted to incorporate the correlation matrix into the baseline algorithms. Since GRNBoost had the highest Wasserstein scores when only considering observational data, we first selected 20,000 candidates with the highest feature importance scores from the model trained on observational data and chose 1000 based on correlation scores. However, this approach failed to surpass direct correlation usage. As the number of candidates in the first selection increased, performance approached the correlation results, suggesting that the GRNBoost model might not provide information beyond correlations.
87
+
88
+ Table 2: Performances of 1 partial seed data
89
+
90
+ \begin{tabular}{lllllll}
+ \hline
+ K or T & $\mathbf{N}$ & $\mathbf{R}$ & Normalize & Ensemble & K562 & RPE1 \\
+ \hline
+ \multicolumn{5}{c}{Top 1000 absolute correlation (baseline)} & 0.2890 & 0.3397 \\
+ 500 & 1000 & 2 & / & / & 0.1861 & 0.3040 \\
+ 2000 & 1000 & 2 & / & / & 0.2393 & 0.3352 \\
+ 2000 & 1000 & 3 & / & True & 0.2561 & 0.3608 \\
+ 2000 & 2000 & 3 & / & / & 0.2278 & 0.2767 \\
+ 5000 & 1000 & 2 & / & / & 0.2524 & 0.3552 \\
+ 5000 & 1000 & 2 & / & True & 0.2614 & 0.3541 \\
+ 5000 & 1000 & 3 & / & True & 0.2635 & 0.3598 \\
+ 7000 & 1000 & 3 & / & / & 0.2684 & 0.3608 \\
+ 7000 & 1000 & AllNeg & / & / & 0.2826 & 0.3846 \\
+ 7000 & 1000 & AllNeg & normalize & / & 0.3023 & 0.3744 \\
+ 7000 & 1000 & AllNeg & quantile & / & 0.2843 & 0.3768 \\
+ 0.1 & 1000 & AllNeg & normalize & / & 0.3148 & / \\
+ 0.2 & 1000 & AllNeg & normalize & / & 0.3072 & / \\
+ \hline
+ \end{tabular}
137
+
138
+ Table 3: Performances of 5 partial seeds data
139
+
140
+ \begin{tabular}{lllllll}
+ \hline
+ K or T & $\mathbf{N}$ & $\mathbf{R}$ & Normalize & Ensemble & K562 & RPE1 \\
+ \hline
+ \multicolumn{5}{c}{Top 1000 absolute correlation (baseline)} & 0.2922 & 0.3255 \\
+ 5000 & 1000 & AllNeg & / & / & 0.2930 & 0.3672 \\
+ 5000 & 1000 & AllNeg & normalize & / & 0.3062 & 0.3655 \\
+ 5000 & 1000 & AllNeg & quantile & / & 0.2992 & 0.3632 \\
+ 7000 & 1000 & AllNeg & / & / & 0.2944 & 0.3659 \\
+ 0.1 & 1000 & AllNeg & normalize & / & 0.3265 & 0.3780 \\
+ 0.2 & 1000 & AllNeg & normalize & / & 0.3138 & 0.3614 \\
+ \hline
+ \end{tabular}
166
+
167
+ For the DCDI algorithms, we tried replacing the initial adjacency matrix and the Gumbel adjacency matrix with knowledge from the correlation matrix. The improvement over the baseline was nearly 0.1, but the result was still worse than directly using the correlation matrix. Additionally, the algorithm seemed sensitive to the number of nodes: we were unable to increase the number of genes per partition, as the program reported overflow issues.
168
+
169
+ § AUTHOR CONTRIBUTIONS
170
+
171
+ YG and KD design, implement the algorithm; write, and proofread the report.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference/KP0u0nSIwOW/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,276 @@
1
+ # The Interestingness of Fonts
2
+
3
+ Category: Research
4
+
5
+ ![01963df8-aa07-7da3-874a-5bacd4ff3dc2_0_220_324_1355_235_0.jpg](images/01963df8-aa07-7da3-874a-5bacd4ff3dc2_0_220_324_1355_235_0.jpg)
6
+
7
+ Figure 1: The top row shows the 5 most interesting fonts among our 100 fonts, and the bottom row shows the 5 least interesting fonts (with interestingness scores decreasing from left to right). This is for our X2 case (please see the text for more details). See Figure 7 for all 100 fonts.
8
+
9
+ ## Abstract
10
+
11
+ While the problem of interestingness has been studied in various domains, it has not been explored for fonts. We study the novel problem of font interestingness in this paper. We first collect data of font interestingness in two ways, and analyze the data to understand what makes a font interesting. We then learn functions to compute font interestingness scores in two ways. We show results of rankings of fonts from the most to least interesting, and demonstrate applications of interestingness-guided font visualization and interestingness-guided font search.
12
+
13
+ Index Terms: Computing methodologies-Computer graphics-Perception
14
+
15
+ ## 1 INTRODUCTION
16
+
17
+ While the problem of interestingness has been explored, in particular for images [16] and 3D shapes [20], it has not been explored for fonts. For fonts, the attributes of fonts $\left\lbrack {{26},{33}}\right\rbrack$ have been studied, but the problem of font interestingness has been under-explored. In this paper, we aim to develop an understanding of the interestingness of fonts. Although fonts have a 2D representation, they are different from images: images are usually colored representations of scenes or objects, while fonts are grayscale, usually sparser than images, and characterize the letters and numbers used for text. In addition, we explore which features make a font interesting, and these differ from the features that matter for images in general.
18
+
19
+ Fonts have features that are worth studying. For example, sans-serif fonts have even-width strokes and tend to be more plain, so we hypothesize that these would be less interesting. On the other hand, serif fonts have thick and thin strokes, and we hypothesize that these would make such fonts more interesting. Script fonts are fancier, more elegant, personal and/or graceful, so we hypothesize that these would be more interesting.
20
+
21
+ To study the problem of font interestingness, we first collect data of font interestingness in two ways. First, we show participants one font at a time, and ask them to rate its interestingness. Second, we show participants pairs of fonts and ask them to judge which font of each pair is more interesting than the other.
22
+
23
+ We then analyze the collected data to understand what makes a font interesting. We ask more participants about various subjective features of the fonts, and check whether font interestingness is related to these subjective and qualitative features. We also compute various objective descriptors of the fonts, and check whether interestingness can be predicted by these descriptors.
24
+
25
+ We then learn font interestingness scores in two ways. First, with the data where we asked participants to rate each font, we learn a function that takes one font as input, and compute as output the font interestingness score. Second, with the data where we asked participants to rate pairs of fonts, we specify a loss term and use gradient descent to find a function that also takes one font as input, and compute as output its interestingness score.
26
+
27
+ Finally, we show results of rankings of our fonts from the most to the least interesting in two ways, and analyze them to understand more about what makes a font interesting. We demonstrate the usefulness of our work with the applications of interestingness-guided font visualization and interestingness-guided font search.
28
+
29
+ In this paper, we make the following contributions:
30
+
31
+ - We are the first to study the problem of font interestingness to the best of our knowledge.
32
+
33
+ - We collect data of font interestingness in two different ways, and analyze the data to understand what makes a font interesting.
34
+
35
+ - We compute font interestingness scores and show that the concept of font interestingness can be learned.
36
+
37
+ - We demonstrate the potential uses of font interestingness through the applications of interestingness-guided font visualization and interestingness-guided font search.
38
+
39
+ ## 2 RELATED WORK
40
+
41
+ Our work is inspired by previous works in fonts and interestingness (separately).
42
+
43
+ ### 2.1 Font Attributes
44
+
45
+ O'Donovan et al. [26] developed interfaces for exploring large collections of fonts. They organized fonts using high-level descriptive attributes, such as attractive or not attractive, and showed fonts ordered by similarity relative to a query font. Our work is different in that we focus on studying one feature, compute interestingness scores for fonts, and explore what makes a font interesting. We believe that this particular feature is important for choosing fonts.
46
+
47
+ Wang et al. [33] took as input a set of predefined font attributes and their values to generate glyph images. Although they generated new fonts that their study participants found to be creative, they did not specify which fonts were creative, or analyze what makes them interesting or creative. Our work is different in that we compute a measure or score of how interesting a font is, and use this to understand more about what makes a font interesting.
48
+
49
+ Mackiewicz [24] examined the perceptions of fonts displayed on PowerPoint slides, where participants rated fonts on variables including "interesting". In this way, this previous work is closely related to our work. However, their work focused on designing presentations and is specifically for display on PowerPoint slides. In contrast, we focus on the problem of font interestingness in general, analyze data to really understand what it means for a font to be interesting, and investigate whether a computational measure of font interestingness can be learned.
50
+
51
+ Researchers have shown that users may associate fonts with personalities $\left\lbrack {{21},{25},{29},{30}}\right\rbrack$ . Our work is different in that we focus on studying one feature, and develop an understanding of interestingness for fonts, because we believe that this particular feature is important for choosing fonts. An interesting font makes the text fun and appealing, and makes it more likely to be read and enjoyed. Moreover, artists and designers could use interesting fonts to create attractive works that people are more likely to enjoy.
52
+
53
+ ### 2.2 Research Problems related to Fonts
54
+
55
+ There are previous works studying various research problems related to fonts. These include font recognition [2], how fonts affect the emotional qualities (eg. more funny) of text [18], using font design as a tool for poster design [35], relations between the font of a brand and consumer perceptions of the brand personality [15], font specificity [28], and relations between fonts and reading speed [32]. In this paper, we study the novel problem of font interestingness.
56
+
57
+ ### 2.3 Interestingness of various Media
58
+
59
+ Previous works have explored the interestingness of images [3, 6, 10, 14, 16, 31], videos [4, 7, 17, 34], 3D shapes [20], and text passages [8], as well as interestingness measures for data mining [11]. These works show the importance of the research problem of interestingness. However, there has been no work on font interestingness, and we fill this gap in this paper.
60
+
61
+ ### 2.4 Crowdsourcing
62
+
63
+ Previous works have used crowdsourcing to collect data from humans. Crowdsourcing has been used to collect style similarity data for clip art [9], fonts [26], and 3D models [22, 23]. Crowdsourcing has also been used to "extract depth layers or image normals from a photo" [12], and to "convert low-quality drawings into high-quality ones" [13]. In this paper, we use crowdsourcing to collect data of how humans perceive the interestingness of fonts.
64
+
65
+ ## 3 COLLECTING DATA OF FONT INTERESTINGNESS
66
+
67
+ To study the problem of font interestingness, since there are no "right or wrong" answers to how interesting a font is, and different people may have different opinions, we take a human perception approach and collect data from humans.
68
+
69
+ We collected 100 fonts from an online library (fontlibrary.org), and collected data of font interestingness in two ways. First, we showed participants one font at a time and asked them to rate the font's interestingness on a Likert scale of 1-5. We call this our X1 case, since there is one input font per data sample. Second, we showed participants pairs of fonts and asked them to judge which font they perceive to be more interesting than the other. We call this our X2 case, since there are two input fonts per data sample. For the X2 case, we were inspired by previous works that ask users to select among triplets or pairs of items (e.g. clip art [9], fonts [26], and 3D models [22, 23]). We instruct users that an interesting font is one that can attract or hold their attention in any way.
70
+
71
+ We use crowdsourcing as a method to collect data, and post the fonts on the Amazon Mechanical Turk platform. Each HIT (a set of questions on Mechanical Turk) starts with instructions for the participants. Since there are no correct answers to the questions, we did not filter out any responses that could be random (i.e. from users who answer randomly just to get paid). We encouraged users to be serious when answering the questions by specifying in the instructions that: "If you randomly choose your answers, your responses will not be taken, and you will not be paid." Also, each user could answer our questions only if their acceptance rate (i.e. as recorded by the requesters on Mechanical Turk) for their previously completed questions on Mechanical Turk was at least 90%.
72
+
73
+ [Figure 2 image: four font specimens, each showing the full alphabet and the digits 0-9; the four fonts are described in the caption below.]
74
+
75
+ Figure 2: Top row: An example font (left) that scored low in aesthetics and interestingness, and an example font (right) that scored high in aesthetics and interestingness. Bottom row: An example font (left) that scored low in "serif-ness" and interestingness, and an example font (right) that scored high in "serif-ness" and interestingness.
76
+
77
+ For our X1 case, there are 50 fonts per HIT. The order of the fonts is chosen randomly. The users took between 1 and 4 minutes per HIT, and we paid \$0.10 for each HIT. For each font, we collected data for 15 participants. For the X1 case, the font interestingness score is the average score given by the participants. For our X2 case, we generated 50 font pairs randomly for each HIT, by placing the fonts 1 to 100 randomly into 50 rows of 2. The users took between 2 and 5 minutes per HIT, and we paid \$0.15 for each HIT. For each HIT (and thereby each font in this case), we collected data for 15 participants. For the X2 case, we have to perform the learning step in Section 5 to obtain the function that gives the font interestingness score. At the end of each HIT, we asked the participants to provide (by typing in a text box) their thoughts on how they decided how interesting a font is.
78
+
79
+ ## 4 WHAT MAKES A FONT INTERESTING?
80
+
81
+ We analyze the collected data to try to understand what makes a font interesting. We first do this in a qualitative way. We collect additional data of how humans perceive the fonts according to some subjective features: creative, unusual, aesthetic, thin, serif, and italic. We also considered other features such as simple and fancy, but decided that these are too similar to the ones we already use. For each feature and each font, we asked participants to provide a score on a 1-5 Likert scale. For example, for "aesthetic", the participants would choose 5 if they strongly agree that the font is aesthetic. We also use the Amazon Mechanical Turk platform for this data, but note that the participants here are different from those in Section 3. There are also 50 fonts per HIT here, and the order of the fonts is chosen randomly. The users took between 1 and 5 minutes per HIT, and we paid \$0.10 for each HIT. For each font, we collected data for 15 participants. The overall score of each feature for each font is the average score given by the participants.
+
+ For each feature, we then correlated the scores for all fonts with the interestingness scores from our X1 case, to see whether font interestingness is related to these qualitative features. The Pearson correlation coefficients for the features are (in decreasing order): 0.8511 (creative), 0.7966 (unusual), 0.4794 (aesthetic), 0.3723 (serif), 0.2122 (italic), and 0.1425 (thin). The p-values for these are less than 0.05, hence the correlations are significant; the only exception is "thin" (with a p-value of 0.1573).
82
+
83
+ We discuss the results based on the correlation coefficients. Among the features we tested, "creative" has the highest correlation with interestingness. Hence the more creative a font is, the more interesting it is. This is intuitive and is not a surprise, since a creative font tends to be fancy or appealing in some way, thereby making it interesting. A font that is more unusual is more interesting. An unusual font tends to be strange, weird, or stand out in some way, which makes it interesting. A font that is more aesthetic is more interesting. An aesthetic or beautiful font is good to look at and observe, which could be interesting. A serif font is more interesting. The wider and sharper parts of a serif font give it a characteristic look, which is interesting. An italic font is more interesting. It makes sense that slanted letters tend to be interesting. Finally, a thinner font is more interesting, but the correlation for this is not significant. In any case, thin letters are more like handwritten characters, so it makes sense that they could be more interesting.
84
+
85
+ ![01963df8-aa07-7da3-874a-5bacd4ff3dc2_2_153_151_713_279_0.jpg](images/01963df8-aa07-7da3-874a-5bacd4ff3dc2_2_153_151_713_279_0.jpg)
86
+
87
+ Figure 3: Our neural network with 6 layers. $\mathbf{x}$ is an input font and $y$ is the font's interestingness score. The number of nodes is indicated for each layer. The network is fully connected. Note that this is for both our X1 and X2 cases. For the X2 case, we have two copies of this network for the batch gradient descent.
88
+
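+ As a sketch of such a network (a hypothetical PyTorch rendering; the layer widths below are placeholders, since the exact node counts are given only in the figure, and tanh activations are used as in Section 5):
+
+ ```python
+ import torch.nn as nn
+
+ class InterestingnessNet(nn.Module):
+     """Six fully connected layers mapping a font vector x to a scalar score y."""
+     def __init__(self, input_dim, hidden=(256, 128, 64, 32, 16)):
+         super().__init__()
+         layers = []
+         dims = (input_dim,) + hidden
+         for d_in, d_out in zip(dims[:-1], dims[1:]):
+             layers += [nn.Linear(d_in, d_out), nn.Tanh()]
+         layers.append(nn.Linear(hidden[-1], 1))  # sixth layer: the score y
+         self.net = nn.Sequential(*layers)
+
+     def forward(self, x):
+         return self.net(x)
+ ```
+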
89
+ To gain more insights into what makes a font interesting, we show some visual examples of fonts with varying aesthetics, serif, and interestingness scores (Figure 2). We found (from above) that more aesthetic fonts are more interesting. The figure shows one font with low scores for both aesthetics and interestingness. It is quite plain, and has the same letters for both capitals and non-capitals. There is also one font with high scores for both aesthetics and interestingness. It is a handwritten and cursive font. Furthermore, we found that serif fonts are more interesting. The figure shows one font with low scores for both serif and interestingness. It is thin and quite simple. There is also one font with high scores for both serif and interestingness. It is a bit cursive.
90
+
91
+ We then try to understand what makes a font interesting in a more quantitative way, by testing whether some quantitative 2D descriptors can be used to predict font interestingness. We learn a function that takes as input the 2D descriptors of a font and computes as output the font's interestingness score from our X1 case. The 2D descriptors are: HoG (Histograms of Oriented Gradients) [5], SURF, SIFT, and the Sobel operator. Each descriptor is a histogram, and we concatenate them into a single vector with a total of 1460 values. The function is a multi-layer neural network with fully-connected layers. We then perform 10-fold cross-validation, and the resulting ${R}^{2}$ value is 0.46. The results are as expected, since we did not think that a basic set of descriptors could fully predict interestingness, and they show that the concept of font interestingness is complex.
92
+
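+ A minimal sketch of this evaluation (illustrative; the hidden-layer sizes are assumptions, as the text only states that the network is multi-layer and fully connected):
+
+ ```python
+ from sklearn.neural_network import MLPRegressor
+ from sklearn.model_selection import cross_val_score
+
+ def descriptor_r2(X, y):
+     """10-fold cross-validated R^2 of an MLP predicting interestingness.
+
+     X: 100 x 1460 matrix of concatenated HoG/SURF/SIFT/Sobel histograms.
+     y: the 100 interestingness scores from the X1 case.
+     """
+     model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=5000,
+                          random_state=0)
+     return cross_val_score(model, X, y, cv=10, scoring="r2").mean()
+ ```
+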
93
+ ## 5 LEARNING FONT INTERESTINGNESS
94
+
95
+ We learn font interestingness scores in two ways. First, our X1 case has one input font per data sample. We learn a function that takes as input a font and predict as output its interestingness score (Figure 3).
96
+
97
+ Second, our X2 case has two input fonts per data sample. With our pairwise fonts data, we follow the formulation in [19] which can take pairwise data and learn a function that computes a score for one font (which is also the network in Figure 3). Different from the usual supervised learning framework, we do not have the target values $y$ that we wish to compute, as again our pairwise data are for pairs of fonts. Hence we take a learning-to-rank formulation [1, 27], and learn $\mathbf{W}$ and $\mathbf{b}$ to minimize this ranking loss function:
98
+
99
+ $$
100
+ \mathcal{L}\left( {\mathbf{W},\mathbf{b}}\right) = \frac{1}{2}\parallel \mathbf{W}{\parallel }_{2}^{2} + \frac{{C}_{\text{param }}}{\left| {\mathcal{I}}_{\text{train }}\right| }\mathop{\sum }\limits_{{\left( {{\mathbf{x}}_{A},{\mathbf{x}}_{B}}\right) \in {\mathcal{I}}_{\text{train }}}}{l}_{1}\left( {{y}_{A} - {y}_{B}}\right) \tag{1}
101
+ $$
102
+
103
+ where $\parallel \mathbf{W}{\parallel }_{2}^{2}$ is the ${L}^{2}$ regularizer to prevent over-fitting, ${C}_{\text{param }}$ is a hyper-parameter, ${\mathcal{I}}_{\text{train }}$ contains font pairs $\left( {{\mathbf{x}}_{A},{\mathbf{x}}_{B}}\right)$ where the user specified that font $A$ is more interesting than font $B$ , ${l}_{1}\left( t\right) = \max {\left( 0,1 - t\right) }^{2}$ , and ${y}_{A} = {h}_{\mathbf{W},\mathbf{b}}\left( {\mathbf{x}}_{A}\right)$ .
104
+
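+ As an illustrative sketch (not the authors' implementation), the loss of Eq. (1) can be computed as follows, where `score` stands for the network ${h}_{\mathbf{W},\mathbf{b}}$ applied to a font vector:
+
+ ```python
+ import numpy as np
+
+ def ranking_loss(weights, pairs, score, c_param):
+     """Eq. (1): L2 regularizer plus squared hinge on pairwise score differences."""
+     reg = 0.5 * sum(np.sum(w ** 2) for w in weights)
+     hinge = [max(0.0, 1.0 - (score(xa) - score(xb))) ** 2 for xa, xb in pairs]
+     return reg + c_param / len(pairs) * sum(hinge)
+ ```
+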
105
+ To minimize $\mathcal{L}\left( {\mathbf{W},\mathbf{b}}\right)$ , we perform an end-to-end neural network backpropagation with batch gradient descent, and we follow the formulation in [19]. The forward propagation step takes each pair $\left( {{\mathbf{x}}_{A},{\mathbf{x}}_{B}}\right) \in {\mathcal{I}}_{\text{train }}$ and propagates ${\mathbf{x}}_{A}$ and ${\mathbf{x}}_{B}$ through the network with the current $\left( {\mathbf{W},\mathbf{b}}\right)$ to get ${y}_{A}$ and ${y}_{B}$ respectively. Hence there are two copies of the network, one for each of the two cases $A$ and $B$ . We then perform a backward propagation step for each of the two copies of the network and compute these delta $\left( \delta \right)$ values:
106
+
107
+ $$
108
+ {\delta }_{i}^{\left( {n}_{l}\right) } = y\left( {1 - y}\right) \;\text{for output layer} \tag{2}
109
+ $$
110
+
111
+ $$
112
+ {\delta }_{i}^{\left( l\right) } = \left( {\mathop{\sum }\limits_{{k = 1}}^{{s}_{l + 1}}{\delta }_{k}^{\left( l + 1\right) }{w}_{ki}^{\left( l + 1\right) }}\right) \left( {1 - {\left( {a}_{i}^{\left( l\right) }\right) }^{2}}\right) \;\text{ for inner layers } \tag{3}
113
+ $$
114
+
115
+ where the $\delta$ and $y$ values are indexed as $\delta_{Ai}$ and $y_A$ in the case for $A$. The index $i$ in $\delta$ is the neuron in the corresponding layer, and there is only one node in our output layers. We use the tanh activation function, which leads to these $\delta$ formulas. Note that due to the learning-to-rank aspect, these $\delta$ differ from the usual $\delta$ in standard neural network backpropagation.
116
+
117
+ We now compute the partial derivatives for the gradient descent. For $\frac{\partial \mathcal{L}}{\partial {w}_{ij}^{\left( l\right) }}$ , we split this into a $\frac{\partial \mathcal{L}}{\partial \parallel \mathbf{W}{\parallel }_{2}}\frac{\partial \parallel \mathbf{W}{\parallel }_{2}}{\partial {w}_{ij}^{\left( l\right) }}$ term and $\frac{\partial \mathcal{L}}{\partial y}\frac{\partial y}{\partial {w}_{ij}^{\left( l\right) }}$ terms (a term for each ${y}_{A}$ and each ${y}_{B}$ computed from each $\left( {{\mathbf{x}}_{A},{\mathbf{x}}_{B}}\right)$ pair). The $\frac{\partial \mathcal{L}}{\partial y}\frac{\partial y}{\partial {w}_{ij}^{\left( l\right) }}$ term is expanded for the $A$ case for example to $\frac{\partial \mathcal{L}}{\partial {y}_{A}}\frac{\partial {y}_{A}}{\partial {a}_{i}}\frac{\partial {a}_{i}}{\partial {z}_{i}}\frac{\partial {z}_{i}}{\partial {w}_{ij}^{\left( l\right) }}$ where the last three partial derivatives are computed with the copy of the network for the $A$ case. The entire partial derivative is:
118
+
119
+ $$
+ \begin{aligned}
+ \frac{\partial \mathcal{L}}{\partial w_{ij}^{(l)}} = w_{ij}^{(l)} &+ \frac{2 C_{\text{param}}}{\left|\mathcal{I}_{\text{train}}\right|}\sum_{(A,B)} \max\left(0, 1 - y_A + y_B\right)\operatorname{chk}\left(y_A - y_B\right)\,\delta_{Ai}^{(l+1)}\,a_{Aj}^{(l)} \\
+ &- \frac{2 C_{\text{param}}}{\left|\mathcal{I}_{\text{train}}\right|}\sum_{(A,B)} \max\left(0, 1 - y_A + y_B\right)\operatorname{chk}\left(y_A - y_B\right)\,\delta_{Bi}^{(l+1)}\,a_{Bj}^{(l)}
+ \end{aligned} \tag{4}
+ $$
130
+
131
+ There is one term for each of the $A$ and $B$ cases. $(A, B)$ represents $(\mathbf{x}_A,\mathbf{x}_B) \in \mathcal{I}_{\text{train}}$, and all terms in the summation can be computed with the corresponding $(\mathbf{x}_A,\mathbf{x}_B)$ pair. The $\operatorname{chk}()$ function is:
132
+
133
+ $$
134
+ \operatorname{chk}\left( t\right) = 0\;\text{ if }t \geq 1 \tag{5}
135
+ $$
136
+
137
+ $$
138
+ = - 1\;\text{ if }t < 1 \tag{6}
139
+ $$
140
+
141
+ For each $(A, B)$ pair, we can check the value of $\operatorname{chk}(y_A - y_B)$ before doing the backpropagation. If it is zero, we do not have to perform the backpropagation for that pair, as the term in the summation is zero. The partial derivative for the biases is derived similarly.
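+
+ In an autograd setting, the same short-circuit amounts to masking out the inactive pairs before the backward pass (continuing the sketch above; $\operatorname{chk}(y_A - y_B) = 0$ exactly when the hinge term vanishes):
+
+ ```python
+ with torch.no_grad():  # cheap forward pass to find the pairs that contribute gradient
+     active = (model(xA) - model(xB)) < 1.0  # chk(yA - yB) == -1 only for these pairs
+ if active.any():
+     loss = ranking_loss(model, xA[active], xB[active])  # inactive pairs add zero gradient
+ ```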
142
+
143
+ The batch gradient descent starts by initializing $\mathbf{W}$ and $\mathbf{b}$ randomly. It then goes through the fonts for a fixed number of iterations. Each iteration takes a set of data samples, sums the partial derivatives computed from them, and updates $\mathbf{W}$ and $\mathbf{b}$ with a learning rate $\alpha$ as usual:
144
+
145
+ $$
146
+ {w}_{ij}^{\left( l\right) } = {w}_{ij}^{\left( l\right) } - \alpha \frac{\partial \mathcal{L}}{\partial {w}_{ij}^{\left( l\right) }} \tag{7}
147
+ $$
148
+
149
+ $$
150
+ {b}_{i}^{\left( l\right) } = {b}_{i}^{\left( l\right) } - \alpha \frac{\partial \mathcal{L}}{\partial {b}_{i}^{\left( l\right) }} \tag{8}
151
+ $$
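+
+ Putting the pieces together, a sketch of the training loop is shown below (the pair-sampling helper is a placeholder; the hyper-parameter values are the ones reported in Section 6):
+
+ ```python
+ import torch
+
+ def sample_pairs(batch_size=64):  # placeholder: random tensors stand in for real font pairs
+     return torch.randn(batch_size, 64 * 64), torch.randn(batch_size, 64 * 64)
+
+ model = FontScorer()
+ for p in model.parameters():  # initialize weights and biases from N(0, 0.1)
+     torch.nn.init.normal_(p, mean=0.0, std=0.1)
+ opt = torch.optim.SGD(model.parameters(), lr=0.0001)  # learning rate alpha
+
+ for it in range(1000):  # enough passes over the fonts to produce reasonable results
+     xA, xB = sample_pairs(batch_size=64)  # 50-100 data samples per batch in the paper
+     opt.zero_grad()
+     loss = ranking_loss(model, xA, xB, C_param=1000.0)
+     loss.backward()  # autograd computes the Eq. 4 gradients
+     opt.step()       # w <- w - alpha*dL/dw, b <- b - alpha*dL/db (Eqs. 7-8)
+ ```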
152
+
153
+ ## 6 RESULTS AND EVALUATION
154
+
155
+ We show results of the 5 most interesting fonts and the 5 least interesting fonts for our X1 and X2 cases (Figures 1 and 4). In Figure 1, the 5 most interesting fonts include some handwritten fonts, some aesthetic fonts, and an unusual font consisting of QR codes. The 5 least interesting fonts include some sans-serif fonts, some italic fonts, some simple fonts, and some fonts with thin characters. In Figure 4, the 5 most interesting fonts include some handwritten fonts, and some unusual fonts. The 5 least interesting fonts are mostly simple fonts. They include some non-italic fonts, some serif fonts, and some sans-serif fonts. Comparing the X1 and X2 cases, the 5 most interesting fonts for both cases are visually similar, and the 5 least interesting fonts for both cases are also visually similar. Therefore, although the X1 and X2 cases collect data differently, the visual results are similar. Moreover, we note that our qualitative analysis of what is an interesting font is subjective, and we encourage the reader to observe the fonts for themselves.
156
+
159
+ ![01963df8-aa07-7da3-874a-5bacd4ff3dc2_3_151_152_1497_258_0.jpg](images/01963df8-aa07-7da3-874a-5bacd4ff3dc2_3_151_152_1497_258_0.jpg)
160
+
161
+ Figure 4: The top row shows the 5 most interesting fonts among our 100 fonts, and the bottom row shows the 5 least interesting fonts (with interestingness scores decreasing from left to right). This is for our X1 case (please see the text for more details). See Figure 8 for all 100 fonts.
162
+
163
+ After the participants answered a set of questions, they provided their thoughts on how they decided how interesting a font is. For the X1 case, they said: "handwriting looks more beautiful and interesting", "unusualness, strangeness is interesting", "look for curved parts, thin parts", "plain and straight not interesting", "try to compare with previous fonts", "could have more than 5 options", and "sometimes some letters look better, and some are less interesting, so have to balance them". For the X2 case, the participants said: "look at details of curves and edges, or whether tips of strokes bend back versus are more straight", "if both interesting, it's difficult", "sometimes both plain, then it is the same", "more cute, more pretty is better", "thin or italics more interesting", "strange/unusual makes it more interesting too", "fancy is more interesting", "look for more cursive or handwritten letters", "capital letter is less interesting than non-capital letter", and "special characters are more interesting". We note that for the X2 case, in general, the words "both" and "more" are often used, which makes sense for comparing between fonts.
164
+
165
+ In the introduction, we made the hypotheses that sans-serif fonts are less interesting, serif fonts are more interesting, and script fonts are more interesting. These hypotheses are supported: as we found in Section 4, aesthetic and serif fonts are more interesting. Moreover, the participants' comments (above) agree with these hypotheses.
166
+
167
+ We provide some parameters used in our method. The hyper-parameter $C_{\text{param}}$ is set to 1000. We initialize each weight and bias in $\mathbf{W}$ and $\mathbf{b}$ by sampling from a normal distribution with mean 0 and standard deviation 0.1. We go through all the fonts 100 times or more for the network to produce reasonable results. For each iteration of the batch gradient descent, we choose between 50 and 100 data samples for $\mathcal{I}_{\text{train}}$. The learning rate $\alpha$ is set to 0.0001. The training step can be done offline. For example, 100 iterations of batch gradient descent for one font take about 3 seconds in MATLAB. This runtime scales linearly as the number of fonts increases.
168
+
169
+ We describe the accuracy of the learning method. For the X1 case, we perform a 10-fold cross-validation, and the resulting $R^2$ value is 0.71. For the X2 case, after the training step, we can use the neural network to compute an interestingness score for each font. We take all font pairs in the data and perform 10-fold cross-validation. For each pair $(\mathbf{x}_A, \mathbf{x}_B)$, we compute $y_A$ and $y_B$ with the trained network, and then predict the font with the higher score to be more interesting. The percentage of samples where the participant interestingness response is predicted correctly is 76.3%.
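+
+ The X2 check reduces to comparing the two predicted scores on each held-out pair (continuing the earlier sketch):
+
+ ```python
+ def pairwise_accuracy(model, xA, xB):
+     """Fraction of held-out pairs where the trained network ranks font A above font B,
+     matching the participant response (the paper reports 76.3% on the real data)."""
+     with torch.no_grad():
+         return (model(xA) > model(xB)).float().mean().item()
+ ```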
170
+
171
+ ![01963df8-aa07-7da3-874a-5bacd4ff3dc2_3_926_547_719_212_0.jpg](images/01963df8-aa07-7da3-874a-5bacd4ff3dc2_3_926_547_719_212_0.jpg)
172
+
173
+ Figure 5: Interestingness-Guided Font Visualization. Instead of showing all 100 fonts, we choose a subset of fonts as one way to visualize them.
174
+
175
+ ## 7 APPLICATIONS
176
+
177
+ We demonstrate the potential uses of the font interestingness concept with some interestingness-guided applications.
178
+
179
+ We show an application of interestingness-guided font visualization. The idea is to choose a subset of fonts to visualize the whole set of fonts. It would be useful to visualize all the fonts with a smaller subset, both to understand what types of fonts are in the set, and to choose a font to use from a smaller subset. One way to do so is to take the fonts ranked according to interestingness, and choose one for every $k$ fonts. For our X1 case, we tried this for $k = 5$ and 10, but too many fonts were chosen and the result did not look good. We then used $k = 20$ (Figure 5) to get a subset of five fonts. Among this subset, the first font is cursive, and then they become more and more simple or plain looking. From the second font onwards, they alternate between serif and sans-serif fonts. There is a variety of fonts: one handwritten, two italic, two non-italic, and one bold font. The only main difference from the set of 100 fonts is that this subset does not include an unusual font, but there is a good aspect to this, as an unusual font is not likely to be used in practice. We performed a test and posted the subset of fonts on Amazon Mechanical Turk, and asked 15 participants to give a Likert-scale rating of 1-5 for these statements: "It is useful to visualize the set of fonts this way" (in this case, we also showed the set of all fonts) and "You can choose a font to use from these". The average rating for the first statement is 4.2, and for the second statement is 4.5.
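+
+ The selection itself is a one-line slice over the ranking (a sketch with placeholder scores):
+
+ ```python
+ import numpy as np
+
+ scores = np.random.rand(100)  # placeholder interestingness scores for the 100 fonts
+ ranked = np.argsort(-scores)  # font indices, from most to least interesting
+ subset = ranked[::20]         # one font for every k = 20 fonts -> a subset of 5
+ ```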
180
+
181
+ We show another application: interestingness-guided font search. Figure 6 shows a query font, the top-4 results from searching with interestingness scores, and then the top-4 results from searching with the other 2D descriptors. The query font has high interestingness, and the first row has interesting fonts. The other rows have fonts that are not high in interestingness, except for one (the third font in the second row). This shows that if a user wants to search for fonts according to interestingness, applying our font interestingness scores would be useful.
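+
+ Such a search can be sketched as a nearest-neighbor lookup in interestingness space (continuing the placeholder scores above):
+
+ ```python
+ query = ranked[0]                      # e.g., a highly interesting query font
+ dist = np.abs(scores - scores[query])  # distance in interestingness space
+ top4 = np.argsort(dist)[1:5]           # closest fonts, skipping the query itself
+ ```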
182
+
183
+ ## 8 DISCUSSION, LIMITATIONS, AND FUTURE WORK
184
+
185
+ We have investigated the novel problem of font interestingness, and started to develop a computational understanding of this concept. We demonstrate in this paper that this is a worthwhile problem to study, and hope that our research will inspire more work.
186
+
187
+ ![01963df8-aa07-7da3-874a-5bacd4ff3dc2_4_156_146_715_310_0.jpg](images/01963df8-aa07-7da3-874a-5bacd4ff3dc2_4_156_146_715_310_0.jpg)
188
+
189
+ Figure 6: Interestingness-Guided Font Search. The top left font is the query. Each row shows the top-4 results from searching based on our interestingness scores, HoG, SIFT, SURF, and Sobel descriptors respectively.
190
+
191
+ One limitation is that we currently have 100 fonts (although preparing these fonts took considerable time). For future work, we may gather more fonts and collect more data.
192
+
193
+ We currently have only black-colored fonts with a white background. For future work, it is possible to have colored fonts, and/or colored backgrounds, or even decorations on the characters, such as in the recent Google doodle for the December holidays.
194
+
195
+ Currently, the learned function is a relatively simple neural network. However, the learning method itself is not our contribution; our goal was to show that the concept of font interestingness can be learned. We may explore more complex functions in future work.
196
+
197
+ ## REFERENCES
198
+
199
+ [1] O. Chapelle and S. Keerthi. Efficient algorithms for ranking with SVMs. Information Retrieval Journal, 13(3):201-215, 2010.
200
+
201
+ [2] G. Chen, J. Yang, H. Jin, J. Brandt, E. Shechtman, A. Agarwala, and T. X. Han. Large-Scale Visual Font Recognition. IEEE Conference on Computer Vision and Pattern Recognition, pp. 3598-3605, 2014.
202
+
203
+ [3] M. G. Constantin, M. Redi, G. Zen, and B. Ionescu. Computational understanding of visual interestingness beyond semantics: Literature survey and analysis of covariates. ACM Comput. Surv., 52(2):25, 2019.
204
+
205
+ [4] M. G. Constantin, L.-D. Stefan, B. Ionescu, N. Q. K. Duong, C.-H. Demarty, and M. Sjöberg. Visual Interestingness Prediction: A Benchmark Framework and Literature Review. International Journal of Computer Vision, 129(5):1526-1550, 2021.
206
+
207
+ [5] N. Dalal and B. Triggs. Histograms of Oriented Gradients for Human Detection. IEEE Computer Vision and Pattern Recognition (CVPR), pp. 886-893, 2005.
208
+
209
+ [6] S. Dhar, V. Ordonez, and T. L. Berg. High level describable attributes for predicting aesthetics and interestingness. IEEE CVPR, pp. 1657-1664, 2011.
210
+
211
+ [7] Y. Fu, T. M. Hospedales, T. Xiang, S. Gong, and Y. Yao. Interestingness Prediction by Robust Learning to Rank. European Conference on Computer Vision, pp. 488-503, 2014.
212
+
213
+ [8] D. Ganguly, J. Leveling, and G. Jones. Automatic prediction of aesthetics and interestingness of text passages. International Conference on Computational Linguistics, pp. 905-916, 2014.
214
+
215
+ [9] E. Garces, A. Agarwala, D. Gutierrez, and A. Hertzmann. A Similarity Measure for Illustration Style. ACM Transactions on Graphics, 33(4):93:1-93:9, July 2014.
216
+
217
+ [10] M. Gardezi, K. H. Fung, U. M. Baig, M. Ismail, O. Kados, Y. S. Bonneh, and B. R. Sheth. What Makes an Image Interesting and How Can We Explain It. Frontiers in Psychology, 2021.
218
+
219
+ [11] L. Geng and H. J. Hamilton. Interestingness measures for data mining: A survey. ACM Computing Surveys, 38(3), Sept. 2006.
220
+
221
+ [12] Y. Gingold, A. Shamir, and D. Cohen-Or. Micro Perceptual Human Computation for Visual Tasks. ACM Transactions on Graphics, 31(5):119:1-119:12, Sept. 2012.
222
+
223
+ [13] Y. Gingold, E. Vouga, E. Grinspun, and H. Hirsh. Diamonds from the Rough: Improving Drawing, Painting, and Singing via Crowdsourcing. AAAI Workshop on Human Computation, 2012.
224
+
225
+ [14] H. Grabner, F. Nater, M. Druey, and L. Van Gool. Visual interestingness in image sequences. ACM International Conference on Multimedia, pp. 1017-1026, 2013.
226
+
227
+ [15] B. Grohmann, J. L. Giese, and I. D. Parkman. Using Type Font Characteristics to Communicate Brand Personality of New Brands. Journal of Brand Management, 20:389-403, 2013.
228
+
229
+ [16] M. Gygli, H. Grabner, H. Riemenschneider, F. Nater, and L. V. Gool. The interestingness of images. IEEE International Conference on Computer Vision, pp. 1633-1640, 2013.
230
+
231
+ [17] Y. Jiang, Y. Wang, R. Feng, X. Xue, Y. Zheng, and H. Yang. Understanding and predicting interestingness of videos. AAAI, pp. 1113-1119, 2013.
232
+
233
+ [18] S. Juni and J. S. Gross. Emotional and persuasive perception of fonts. Perceptual and Motor Skills, 106(1):35-42, 2008.
234
+
235
+ [19] M. Lau, K. Dev, W. Shi, J. Dorsey, and H. Rushmeier. Tactile Mesh Saliency. ACM Transactions on Graphics, 35(4):52:1-52:11, 2016.
236
+
237
+ [20] M. Lau and L. Power. The Interestingness of 3D Shapes. ACM Symposium on Applied Perception, pp. 1-5, 2020.
238
+
239
+ [21] Y. Li and C. Y. Suen. Typeface Personality Traits and Their Design Characteristics. International Workshop on Document Analysis Systems, pp. 231-238, 2010.
240
+
241
+ [22] T. Liu, A. Hertzmann, W. Li, and T. Funkhouser. Style Compatibility for 3D Furniture Models. ACM Transactions on Graphics, 34(4):85:1-85:9, 2015.
242
+
243
+ [23] Z. Lun, E. Kalogerakis, and A. Sheffer. Elements of style: Learning perceptual shape style similarity. ACM Transactions on Graphics, 34(4):84:1-84:14, July 2015.
244
+
245
+ [24] J. Mackiewicz. Audience Perceptions of Fonts in Projected PowerPoint Text Slides. IEEE International Professional Communication Conference, 2006.
246
+
247
+ [25] J. Mackiewicz and R. Moeller. Why people perceive typefaces to have different personalities. International Professional Communication Conference, 2004.
248
+
249
+ [26] P. O'Donovan, J. Libeks, A. Agarwala, and A. Hertzmann. Exploratory Font Selection Using Crowdsourced Attributes. ACM Transactions on Graphics, 33(4):92:1-92:9, July 2014.
250
+
251
+ [27] D. Parikh and K. Grauman. Relative Attributes. International Conference on Computer Vision (ICCV), pp. 503-510, 2011.
252
+
253
+ [28] L. Power and M. Lau. Font Specificity. Eurographics, 2019.
254
+
255
+ [29] A. Shaikh. Psychology of Onscreen Type. PhD thesis, Wichita State University, 2007.
256
+
257
+ [30] D. Shaikh and B. S. Chaparro. Perception of fonts: Perceived personality traits and appropriate uses. Digital Fonts and Reading, pp. 226-247, 2016.
258
+
259
+ [31] B. Sheth, M. Gardezi, K. H. Fung, M. Ismail, and M. Baig. What makes an image interesting? Journal of Vision, 19(10), 2019.
260
+
261
+ [32] S. Wallace, Z. Bylinskii, J. Dobres, B. Kerr, S. Berlow, R. Treitman, N. Kumawat, K. Arpin, D. B. Miller, J. Huang, and B. D. Sawyer. Towards Individuated Reading Experiences: Different Fonts Increase Reading Speed for Different Individuals. ACM Transactions on Computer-Human Interaction, 29(4):1-56, 2022.
262
+
263
+ [33] Y. Wang, Y. Gao, and Z. Lian. Attribute2Font: creating fonts you want from attributes. ACM Transactions on Graphics, 39(4):69:1-69:15, 2020.
264
+
265
+ [34] G. Zen, P. de Juan, Y. Song, and A. Jaimes. Mouse Activity as an Indicator of Interestingness in Video. International Conference on Multimedia Retrieval, pp. 47-54, 2016.
266
+
267
+ [35] Z. Zhang and Z. Cai. Research on the Creative Expression of Font Design in Poster Design. Conference on Art and Design: Inheritance and Innovation, pp. 207-211, 2021.
268
+
269
+ ![01963df8-aa07-7da3-874a-5bacd4ff3dc2_5_284_113_1235_2021_0.jpg](images/01963df8-aa07-7da3-874a-5bacd4ff3dc2_5_284_113_1235_2021_0.jpg)
270
+
271
+ Figure 7: The 100 fonts are ranked from most to least interesting (from top left, and left to right in each row). This is for our X2 case. Please see the text for more details, and please zoom in to better see the fonts.
272
+
273
+ (Figure 8 image: a grid of the 100 fonts, each rendered as the glyphs "AaBbCc...Zz" and the digits "0123456789".)
274
+
275
+ Figure 8: The 100 fonts are ranked from most to least interesting (from top left, and left to right in each row). This is for our X1 case. Please see the text for more details, and please zoom in to better see the fonts.
276
+
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference/qp1fTRLKbIj/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,301 @@
1
+ # "Can you do it for me?": Understanding Use-by-Proxy in Interactive Systems
2
+
3
+ Anon*
4
+
5
+ Anon
6
+
7
+ ## Abstract
8
+
9
+ "I can't reach", "my hands are full", "I'm driving"—can you do it for me? If using a smartphone is challenging for a user because of either physical or cognitive encumbrances, they often ask another person to perform the desired task on their behalf. In this situation, the user with the motivation or goal to perform the task is not directly using the device but is instead working through an intermediary, a use-by-proxy, where the proxy-user has limited initiative. Through a qualitative study, we probe this use-by-proxy phenomenon. We explore triggers, frequencies, and breakdowns that confound use-by-proxy interaction. We identify the challenges both for the end-user and the proxy-user (e.g., that proxy-user input is a deficient form of interaction for both the main user and the proxy user) and discuss consequences and implications for the design of this uneven collaborative interaction.
10
+
11
+ Index Terms: Project and People Management (Life Cycle); [Human-centered computing]: Interaction techniques
12
+
13
+ ## 1 INTRODUCTION
14
+
15
+ As people become more attached to technology, we see an increase in users who struggle to operate their digital devices during a physically, socially, or cognitively taxing task. For example, struggles to interact include instances of smartphone use while physically encumbered [21, 22], while driving [33], or while having a conversation [38].
16
+
17
+ Human-Computer Interaction (HCI) designers have attempted to alleviate these challenges by designing alternative input methods that allow users to interact hands-free. Designers attempt to encourage hands-free interactions by refining the form factor of digital assistants (e.g., Siri, Google Assistant, Alexa, Cortana); however, hands-free options have yet to replace the temptation to use direct interaction techniques [18, 45], even under conditions where use of a digital device increases the likelihood of injury [38]!
18
+
19
+ A simple workaround often employed by over-encumbered users is to request help from a friend, allowing indirect interaction with the application through 'use-by-proxy'. In this use case we can identify two main user roles: (1) the primary user, who is motivated to interact with the application, and (2) the proxy-user, who executes the task. For example, a passenger in a vehicle assists the driver with navigation by entering an address specified by the driver into a Global Positioning System (GPS) application. We note that use-by-proxy can occur either on the main user's or the proxy-user's device.
20
+
21
+ Use-by-proxy is a collaborative interaction; however, the lack of parity in this collaboration creates an important niche user pair to study. In contrast to other forms of collaboration, the users in a proxy-use case are not motivated by the same goal; this is the main characteristic of use-by-proxy interaction. The primary user has a goal which necessitates the software use, and the proxy-user's goal is to assist. By its nature, this characterisation admits a range of possible interactions: from a proxy-user replying to a text message by entering text via dictation, to a proxy-user writing a longer document for another user. A proxy-user can support input (entering an address into a GPS system, for example) or output (providing directions from a GPS system to a driver).
22
+
23
+ ![01963df9-a696-7e97-bbd7-32f1ca675be5_0_935_469_673_432_0.jpg](images/01963df9-a696-7e97-bbd7-32f1ca675be5_0_935_469_673_432_0.jpg)
24
+
25
+ Figure 1: The design domain diagram above clarifies the scope of our topic, represented by the shaded green area.
26
+
27
+ In this figure, we clarify the scope of our paper. The x-axis represents an increase in the granularity of data. The y-axis indicates the direction of information as input or output. The diagram shows that this paper focuses on low-granularity, simpler tasks: e.g., sending a text message on behalf of a user is within scope, whereas synthesising a complex email conversation is not.
28
+
29
+ In Figure 1, we describe this domain of use-by-proxy interactions and highlight (in green) our particular focus within this domain. We collect data on use-by-proxy scenarios that span the spectrum of use-by-proxy interactions with the goal of understanding the richness of these interactions.
30
+
31
+ Our paper contributes to the identification and understanding of the proxy-user edge use case, which results in an uneven collaboration between two users. Unique to this situation, the user who is interacting with the interface is not providing the main motivation to use the application. The primary user is instead accessing the functions of the application through a proxy, whom we define as a secondary stakeholder.
32
+
33
+ To explore the proxy-user use case, we employ qualitative interviews, which probe participant experiences with use-by-proxy. Based on the results of our work, we highlight design implications and encourage designers to consider proxy-use in computer-supported collaboration design, especially for applications commonly used while multitasking or in divided-attention scenarios (e.g., driving, exercising, cooking).
34
+
35
+ ## 2 BACKGROUND AND RELATED WORK
36
+
37
+ Literature on collaboration is expansive and includes exploration of multiple factors including, but not limited to: time [19] or location synchrony [15], social learning [13, 17, 47], collaborator types [44, 48], communication [41, 42], and territoriality [34, 36, 37, 46]. Because of the vast collection of work in this area, we restrict our scope to interactions between users engaging in uneven collaboration, specifically use-by-proxy interaction.
38
+
39
+ ---
40
+
41
+ *e-mail: anon@email.com
42
+
43
+ ---
44
+
45
+ Literature on uneven interactions includes Parikh's work on mediated interaction [24]. Mediated interaction is defined as a pattern of interaction where two or more users access the same device [24]. The 2006 report describes four kinds of inter-mediation: (1) cooperative scenarios, where users get (nearly) equal access to the interface, (2) dominated scenarios, where one or more users dominate the access, (3) inter-mediated interaction, which is necessary when a user has no direct access to a device but depends on the outcome, where users can see the output directly, and finally, (4) indirect interaction, which deviates from its predecessor by removing the user's ability to observe the actions of their collaborator [24].
46
+
47
+ Restrictions in the physical space can prompt uneven collaboration. Shoulder-surfing content [4] is a well-studied aspect of public display use, which contributes to the staggered lineups of groups waiting for a display [3]. When shoulder-surfers begin to contribute to the interaction, we observe a use-by-proxy situation. Peltonen et al. [25] and Azad et al. [3] both investigated multi-user interactions with a touchscreen in a public space. The researchers concluded that the mediated interactions occurred because of the physical space limitations. Members of a large group gathered around the display behind a limited subset of users, who could physically access the device. These physically distant users contributed to the interaction through proxy by advising and commenting on the user's actions [3, 25]. Similarly, studies of territoriality reveal that space constraints can stem from the division of collaborative spaces into territories [34, 35]. These territories limit access to the usable screen real estate.
48
+
49
+ Physical restrictions, as described above, limit access; however, physical constraints are only one factor that may bar users from interacting directly with an application. Users may be limited by their technology and computer literacy levels [31]. Sambasivan et al. [31] investigated patterns emerging from different constraint constellations during inter-mediated interaction. They describe surrogate usage (input and interpretation of the output by a proxy-user), proximate enabling (a proxy-user operating a device owned by a technology-illiterate primary user), and proximate translation (system output translated by a proxy for the textually illiterate) [31]. Furthermore, language proficiency and literacy levels can also be a barrier to use. Interaction may be carried out in an inter-mediated mode because of illiteracy of the primary user, fear of technology, habits of dependency, costs, or access constraints such as age [31]. Use-by-proxy clearly has accessibility advantages.
50
+
51
+ In contrast, collaboration may be sought based on the value of the contributor. For example, expert knowledge in domestic IT infrastructure is a contributing reason for the frequency of proxy-user input scenarios. Kiesler et al. [16] report that the intellectual authority in families can be shifted when a skilled or motivated member (mostly, but not necessarily, a teenager [5]) becomes the "family guru" [16]. Poole et al. [26] were motivated to explore factors which influence the way such "helpers" provide aid. The paper found that although helpers often did not advertise their skills, they actively maintain their identity as experts and get frustrated when presented with an unsolvable problem or when a distrustful person requests help [26].
52
+
53
+ Proxy-user computing is a common input technique in operating rooms [10] and is referred to using different terms in the medical literature including: task delegation [11], assistant-controlled computer keyboard [49], assistant-in-the-middle [8], or yell-and-click [43]. Surgeons have limited access to computing input devices because of the need for complete sterilisation [20, 29, 43].
54
+
55
+ Information handover in medical spaces is critical [14]. Although surgeons may rely on the information output by a computer application, they quite literally have their hands full [23]. Therefore, verbal delegation of computing tasks to a proxy-user is common in these environments [20, 43]. Proxy-user collaboration is often considered a benchmark for surgeon-computer interaction [11, 27, 49]. As a result, the compendium of medical literature provides insight into proxy-use situations and exposes the delicate nature of communication between a primary user and a proxy-user.
56
+
57
+ In 2004, Grange et al. published a case study where a misunderstanding between the surgeon and the proxy-user resulted in an error [8]. In an attempt to recover from this error, hospital management allowed the intervention of three additional assistants. Despite the added resources, the final resolution of the problem required the surgeon to halt the surgery, remove themselves from the sterile environment, and access the computer directly. The actions of the surgeon resulted in a delay of eight minutes. An eight-minute delay could be fatal because of the critical nature of surgery [8].
58
+
59
+ The disadvantages introduced by proxy-user scenarios are clarified explicitly in the above example: proxy-use is prone to misinterpretation, is indirect, and depends on the assistant's experience level. Therefore, research into who is selected as an assistant becomes relevant. Selecting a proxy-user goes beyond surface-level attributes (e.g., race or gender). Instead, scholastic abilities, aptitude, extroversion, and high participation levels dictate the desirability of a potential proxy-user [12]. The reported selection criteria [12] are in line with classic research on collaboration in the children's playroom by Cockburn et al. [6]. These researchers reported that collaboration among children benefited from any kind of negotiation. Mutual awareness and breakdowns occurred even in successful collaborations, while domination or ignorance indicated less effective situations [6].
60
+
61
+ Selection of collaborators or requests for help in the workplace further reveals the reasoning behind collaborator selection. Adams et al. [1] reported on how different methods of accessing digital libraries are perceived in academic and health-care institutions. Digital information was made accessible (1) via existing computers in people's offices and libraries, (2) in shared spaces, and (3) by information intermediaries supporting the users (i.e., clinical staff). Users in academia using personal computers reported few points of contact with librarians and criticised the library system. In contrast, medical professionals working with computers in the hospital ward expressed that they felt a lack of personal competence, which was exacerbated when asking a younger colleague for support. Information intermediaries, which act as an interface between clinical staff and the digital library, add librarian domain knowledge; they were seen as beneficial for effective information usage [1].
62
+
63
+ After a review of the literature, it becomes apparent that use-by-proxy is an area of application design that warrants further exploration.
64
+
65
+ ## 3 METHODOLOGY
66
+
67
+ To explore use-by-proxy interaction, we use a qualitative approach to investigate the occurrence of proxy-user situations. The overall goal of our study is to better understand participants' expectations and experiences of use-by-proxy. We investigated tasks in which one user is the main motivating driver (i.e., proxy-user scenarios).
68
+
69
+ For our study, we define the proxy-user use case as: a task where the primary user, who is motivated to operate the system, asks another human user to interact with the system on their behalf. The recruited user, who assists by allowing the primary user to use the application by proxy, is what we identify as the proxy-user. For example, when driving a car, the driver may ask the passenger to get directions from a mapping application.
70
+
71
+ ### 3.1 Participants
72
+
73
+ Eighteen adult participants (18+ years old) were recruited via mailing lists and took part in individual interviews (described below). They were remunerated with 20€.
74
+
75
+ ### 3.2 Interview Structure
76
+
77
+ To understand and refine the definition of the proxy-user, our interviews began by asking participants about situations in which use-by-proxy may have occurred. To clarify use-by-proxy, we provided a number of scenarios to motivate discussion:
78
+
79
+ - Driving with a navigator,
80
+
81
+ - cooking together,
82
+
83
+ - putting together furniture with a friend,
84
+
85
+ - fixing a bike together,
86
+
87
+ - pair programming, and
88
+
89
+ - working collaboratively in their school or work career.
90
+
91
+ Additionally, we asked how the relationship to the other user (friends, family) or the environment (who owns the house or car) affected the situation.
92
+
93
+ We encouraged participants to structure their description of use-by-proxy as a walkthrough of their interactions with or as a proxy-user in various situations they identified. In our interviews, we attempted to elicit what is different in proxy-user collaborations. We asked the interviewees for insight into their thoughts, feelings, and decisions in these situations. Our goal was to identify factors that contribute to the division of responsibilities, including the unevenness of skill and the vulnerability of asking for help. We also discussed breakdowns in collaboration, such as providing too much or too little help. Finally, we looked at outside factors that influence the relationship between collaborators.
94
+
95
+ #### 3.2.1 Qualitative Interview Analysis
96
+
97
+ The qualitative data were analysed in accordance with the procedures outlined by Corbin, Strauss, et al. [7, 39, 40]. To analyse the data, quotes were separated from general discussion (e.g., quotes regarding opinions vs. introductory conversation). Using a bottom-up approach, the quotations were aggregated and sorted using an affinity diagramming technique. Next, the aggregated data points were analysed to pull out relevant ideas and information. Overlapping categories were then explored using top-down analysis based on the themes arising from the data. Afterwards, related clusters of themes were analysed to uncover detailed differences, identify overlapping concepts, and pull larger, higher-level concepts from the data. The resulting work comprises the themes presented in this paper.
98
+
99
+ ## 4 RESULTS
100
+
101
+ Using the qualitative methodology outlined above, we present our results: explaining what a proxy-user is, understanding why proxy-users are helpful, and finally, how to engage with proxy-users and navigate the interaction.
102
+
103
+ ### 4.1 Proxy-User: an Uneven Collaborator
104
+
105
+ As we note in our introduction, use-by-proxy is an uneven collaboration. This observation that a proxy-user is an uneven collaborator was supported by our data. For example, because the main motivation for interacting with the application is central to the primary user, the primary user is also the most invested in the outcome.
106
+
107
+ "You have less responsibility. You have maximum responsibility for your task, for a special task you're doing or for part of the result, but I think the leader has a responsibility for everything that's happening. Also, for things other people do, and maybe don't do very good. So he's having more responsibility with more risk that if something goes wrong it's his fault" P1
108
+
109
+ Participants felt that, despite the role of the proxy-user being more akin to an assistant, the work is still a contribution and should be treated with the same respect as expected from any collaborative arrangement. Participants expressed that expectations of fair collaboration, positive leadership, and teamwork still apply.
110
+
111
+ "I think that's difficult if you're just the assistant and have to follow orders or the ideas of someone else, but still, if you have the feeling that you're contributing something valuable or something important, I think it can be a positive experience all in all. On the other hand, I think it also can be frustrating if you're not valued; only do the back work or not necessary stuff." P1
112
+
113
+ Moreover, proxy-user interactions remain susceptible to the pitfalls of any collaboration because finding a good collaborator is still a challenge. A collaborator can be unreliable, make mistakes, or misunderstand instructions entirely.
114
+
115
+ ### 4.2 Help: The Great Trade-Off
116
+
117
+ Choosing to collaborate with a proxy-user can have both drawbacks and benefits, as a result of the ever-present differences between computers, which reliably execute a task with consistency, and humans, who reliably introduce variability.
118
+
119
+ "I think it's just small errors, small human errors, like sometimes people do mistakes, and most of the time it's working right, but sometimes you have these inaccuracies in the description... For example, 'Okay, you have to take the third right.' Like, I don't know if they mean only the main streets, or the small streets in between, if you have blocks, you know? Like, do they count this small entry, like this small street, as a turn left, or do they only mean like the big junctions, you know? Like that kind of thing." P4
120
+
121
+ One challenge with proxy use, and a risk to the utility of proxy-users, is that humans can be unpredictable when it comes to the delivery of information. Anything from the clarity, quantity, depth, or framing of the information can vary. Thus the trade-off between human vs. computer help is especially salient when comparing a human's interpretation to an expected computer-derived outcome. Since the primary user is expecting use-by-proxy of the application, the variability in human delivery can cause conflict. P4 explains:
122
+
123
+ "Like people usually don't guide me wrong, but in these scenarios, like I said, where there's margin for error, I'd rather trust the map. But I said, I don't trust the map, but I trust myself to interpret it right." P4
124
+
125
+ As this quote illustrates, the primary user is dependent on the proxy-user's interpretation of the information. The proxy-user is using an application and providing a synthesis of that information, a situation which can be irritating to a primary user who feels that they could have outperformed their proxy-user assistant. For proxy-user interactions to be successful, the primary user must trust that the proxy-user is capable of completing the task and outputting the correct information.
126
+
127
+ The variability in human communication presents positive benefits as well. Computers are limited in both the information that they accept and provide, and are also unsuited for particular types of information (e.g., emotional or expressive statements). Additionally, a proxy-user is able to rephrase, verify, and correct actions in accordance with the direction of the primary user.
128
+
129
+ "I can communicate when there are other problems, for example, the real situation is always different than the situation on Google Maps. And if I have a person next to me, I can say, I don't understand what you mean, and can you explain it again. I think it's better." P8 Moreover, the primary user does not need to worry about the format in which they input information to a human proxy-user.
130
+
131
+ "It's easier and quicker to tell them, 'Hey! Google that!' I mean, you can describe things and you don't have to think about how to Google it. You can just describe stuff, like a building, and tell them: 'hey find out what this is'. And you can concentrate on driving." P12
132
+
133
+ In summary, a proxy-user can provide rich information that is customised to the person they are talking to. Alternatively, a proxy-user can also simplify and filter out unnecessary information. The overall effect of these decisions can result in a tailored experience for the primary user in real time, but the challenge with giving and receiving help is in matching expectations. Essentially, because in some instances the primary user wants information to guide their judgement, and in other instances they desire synthesis and judgement to simplify information sharing, interacting through or as a proxy-user presents pitfalls that are difficult to navigate.
134
+
135
+ ### 4.3 Navigating the Proxy-User Relationship
136
+
137
+ One significant factor that influences proxy use is the relationship between the primary and proxy user. The better a proxy-user knows the primary user, the more additional cues, based on this pre-existing relationship, can be used to enhance proxy use. For example, a participant discussed how the close personal relationship with their mother results in better navigation information, because the mother, acting as a proxy-user, will provide more information based on the primary user's emotional cues.
138
+
139
+ "I think because my mom can tell how I feel. 'She looks nervous, I have to tell her what comes next'." P2
140
+
141
+ Participants frequently noted that a closer relationship may result in better communication, or that the relationship may act as rapport vouching for the trustworthiness of information.
142
+
143
+ "I think that lots of factors. The relationship to the person you are trying to help or helping you. I mean I'm ... Everything is different, you know, when I'm trying to help my parents in language or whatever than helping for example my girlfriend or some student or some friend. I mean that's all different. The kind or the type of relationship you have, and age maybe too. Yeah, we'll talk differently to my grandma than to my mother, for example." P6
144
+
145
+ Pre-existing relationships can also make the experience more positive or fun in and of itself, particularly through joint struggles.
146
+
147
+ "It depends who the person is, but if it's a friend then it's, I think, alone a positive experience to interact with one another, even if it's just finding the way. I think this positive connection or interaction doesn't happen if you just have your phone, even if the phone is telling you, or Siri is telling you where to go. I think this positive experience is lacking." P1
148
+
149
+ That being said, one challenge with pre-existing relationships is that proxy-user interaction can also be more volatile. Close connection provides the possibility for expressing dissatisfaction, whereas more distant relationships are less likely to experience this tension.
150
+
151
+ ## 5 DISCUSSION
152
+
153
+ Our investigation illustrates challenges associated with the uneven collaboration between the primary user and an assisting secondary user who enables use-by-proxy of an application.
154
+
155
+ In a proxy-use collaboration, two users look towards an application with two different underlying motivations and expectations. In motivation, the proxy-user is altruistically motivated to help and expects that the experience will be generally positive and socially rewarding. In contrast, the primary user who is seeking assistance enters the interaction with expectations that the proxy-user will perform at least on par with the application in the current situation. Extending this to expectations, despite the positive intentions motivating the proxy-user, the introduction of a person-in-the-middle does not always alleviate the burden placed on the primary user. Instead, the primary user may become frustrated due to differences in the communication of application outcomes. Given that many people are motivated by their own needs for competence and autonomy [30], shifting the control to the proxy-user can have a demotivating effect on the primary user. The proxy-user, who expects positive social collaboration, respectful guidance, and feedback, may instead receive negativity, resulting in conflict between both stakeholders.
156
+
157
+ Our qualitative approach reveals that due to the differences in motivation and expected outcomes, the proxy-user use case differs from the crafted UX designed for the application. Therefore, proxy-users are a challenge to designers, who typically focus on creating a usable interface for a single dedicated, directly motivated user.
158
+
159
+ Identifying the proxy-user use case demands further thought about the design of an interface, especially for applications designed to offer assistance during the cumbersome tasks that also motivate a request for use-by-proxy (e.g., driving or cooking). Our results highlight that the proxy-user may be skilled at the task, but has limited liability for the outcome and may need additional guidance.
160
+
161
+ Both users in the proxy use case face challenges completing distributed responsibilities. These challenges are further exacerbated by the need to manage the overlaid social challenges accompanying any collaboration. Moreover, given that the role of proxy-user exists to help others, one question we can pose is: beyond altruism, what encourages the user to maintain the collaboration? The question becomes especially relevant when the proxy-user is forced to maintain the role (i.e., over long-term tasks). Motivated by the challenges of proxy use, we pose a series of design questions that seek to present alternatives to better support this uneven collaboration.
162
+
163
+ ### 5.1 Question 1: Can we eliminate the proxy-user?
164
+
165
+ Motivated by the notion that this form of collaboration is undesirable, designers may wish to negate the need for a proxy. To accomplish this, designers must overcome the shortcomings of the current design. The question then becomes how best to do this.
166
+
167
+ Understanding the workarounds of a current system can help us determine what the system should do [9, 32]. In the proxy-user use case, demands made on the proxy-user indicate design directions by allowing designers to understand the gaps in the application's current design and implemented features. We argue that, if it is conceivable that an application could require use-by-proxy (and many may feasibly be used this way), user testing protocols should include a proxy-use scenario to understand the different and changing expectations of both primary and proxy-users as they complete demanding tasks.
168
+
169
+ In many situations, designers already propose solutions that partially alleviate the need for proxy use. For example, designers of applications geared to multitasking or divided-attention scenarios (e.g., driving, exercising, cooking) have explored multiple methods to increase hands-free capabilities, and many companies are exploring smart assistants, smart homes, and voice activation. Given both this progress and desired system features [45], the elimination of proxy use may be feasible in certain contexts. Realistically, the existence of the proxy use case indicates that advances in hands-free and smart-assistant technology still cannot replace the desire to ask another human for help. However, if designers actively explore these use cases, work to understand heterogeneous expectations of information and synthesis, and continue to explore alternative designs, it seems feasible that technological advances can begin to address the proxy use case by, in whole or in part, eliminating the need for proxy-users.
170
+
171
+ ### 5.2 Question 2: Can we re-balance the collaboration?
172
+
173
+ As an alternative approach, designers may attempt to balance the uneven collaboration of the proxy-user by shifting the experience profile from Assistance to Collaboration. The goal would be to transform an "over-the-shoulder boss" into a more equitable "pair-programming" paradigm.
174
+
175
+ We can support a re-balance of collaboration by supporting the fundamentals of proxy-user collaboration: communication, attitude, and skill. At a low level, this may include adding gaze-level support to understand how the primary user may attempt to glean information from the application as it is operated by the proxy-user. Alternatively, to help a proxy-user comprehend the instructions given, designers may consider creating a wizard or workflow that a proxy-user can follow to distil all the necessary information from the application to the primary user in concise, easy-to-follow language. These distillations can be geared toward different levels of abstraction: information, synthesis, suggestions, and alternatives. Moreover, a new simple visualisation mode could be added with the intention of supporting the proxy-user's explanation. For example, simplistic and blurred visuals may supplement the proxy-user's instructions and reduce the risk of distraction by presenting limited visual information.
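+
+ To make the idea of level-geared distillation concrete, here is a minimal Python sketch (our illustration, not a system from the study); `Level`, `WizardStep`, and `script_for` are hypothetical names for a proxy-facing wizard step that carries the same content at the abstraction levels named above.
+
+ ```python
+ from dataclasses import dataclass
+ from enum import Enum
+
+ class Level(Enum):
+     INFORMATION = "information"    # raw application output
+     SYNTHESIS = "synthesis"        # condensed summary
+     SUGGESTION = "suggestion"      # a recommended action
+     ALTERNATIVES = "alternatives"  # other viable options
+
+ @dataclass
+ class WizardStep:
+     """One proxy-facing wizard step holding the same content at
+     several levels of abstraction."""
+     content: dict
+
+     def script_for(self, level: Level) -> str:
+         # Fall back to the raw information if a level was not authored.
+         return self.content.get(level, self.content[Level.INFORMATION])
+
+ step = WizardStep(content={
+     Level.INFORMATION: "Route recalculated: three turns in the next 2 km.",
+     Level.SYNTHESIS: "We stay on this road for a while, then turn right twice.",
+     Level.SUGGESTION: "Keep left at the fork ahead.",
+ })
+ print(step.script_for(Level.SUGGESTION))  # -> "Keep left at the fork ahead."
+ ```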
176
+
177
+ At a higher level, we want our tools to contribute to the distributed or shared cognition [2, 28] necessary for a successful proxy-user interaction. By shifting responsibilities away from the primary user, we may obligate a division of cognition between the two users. Strategies can include designing an overview for the proxy-user instead of tailoring the interaction directly towards the primary user. For example, instead of providing an overview of information, allow the proxy-user to build a custom notebook themselves using the application's tool set. Additional functionality can support the proxy-user, who may flip ahead in a manual or along a route and attempt to create a mental summary. While successful recall of information is difficult and learning takes time, if proxy-users can pre-learn, they may become more informed collaborators.
178
+
179
+ ### 5.3 Question 3: Can we aim to guide the proxy-user?
180
+
181
+ An ongoing challenge for applications is that they are currently designed for a single user, the device owner. However, there are a number of applications where use-by-proxy seems obvious, including navigation systems. Instead of targeting the primary user, the application could target the proxy-user in a secondary mode by changing the presentation of information from a workflow aimed at the person completing the task to a workflow for a person who seeks to assist.
182
+
183
+ As in the previous example of re-balancing, a modification of the presentation of information can focus on cueing information to help the proxy-user prepare for and anticipate next steps. For example, the application could identify larger time gaps between steps to indicate a good time to communicate complex information to the primary user. Moreover, designers might employ picture-in-picture views to allow for peripheral monitoring by the proxy-user during these longer inactive periods. Notifications of upcoming events can help avoid the sudden call-to-action experienced by a proxy-user while multitasking.
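+
+ As a rough illustration of this gap-based cueing (our sketch; the paper proposes the idea, not an implementation), a navigation application could scan the estimated arrival times of upcoming steps and flag long gaps as moments for the proxy-user to relay complex information. `RouteStep`, `communication_windows`, and `min_gap_s` are hypothetical names.
+
+ ```python
+ from dataclasses import dataclass
+
+ @dataclass
+ class RouteStep:
+     instruction: str
+     eta_s: float  # seconds until this step is reached
+
+ def communication_windows(steps, min_gap_s: float = 60.0):
+     """Yield (step, gap) pairs where the lull after a step is long
+     enough for the proxy-user to brief the primary user."""
+     for prev, nxt in zip(steps, steps[1:]):
+         gap = nxt.eta_s - prev.eta_s
+         if gap >= min_gap_s:
+             yield prev, gap
+
+ route = [
+     RouteStep("Turn right onto Elm St.", 30),
+     RouteStep("Continue straight for 5 km.", 45),
+     RouteStep("Take exit 12.", 345),
+ ]
+ for step, gap in communication_windows(route):
+     print(f"After '{step.instruction}': {gap:.0f}s free to brief the driver.")
+ ```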
184
+
185
+ ## 6 CONCLUSION
186
+
187
+ It is immediately obvious to anyone who has navigated, transcribed a text message, followed a recipe, assembled furniture, or controlled a slide show that, in these situations, another person will sometimes provide information - often from an application - while the user performs a physically or cognitively challenging task. We label these non-primary users 'proxy-users'. The HCI and design research literature seems almost silent on an analysis of proxy use and the proxy-user experience. What happens in proxy use? Are there pitfalls? Are there opportunities for improvement?
188
+
189
+ Our paper contributes to this design space by highlighting areas for further investigation into this unique form of uneven interaction. Can it be eliminated? Re-balanced to more equal collaboration? Enhanced to improve proxy use efficacy? This paper seeks to formulate these basic questions to encourage more targeted discussion of this unequal, yet frequent, style of collaborative application use.
190
+
191
+ ## ACKNOWLEDGMENTS
192
+
193
+ The authors wish to thank A, B, C. This work was supported in part by a grant from XYZ.
194
+
195
+ ## REFERENCES
196
+
197
+ [1] A. Adams, A. Blandford, and P. Lunt. Social empowerment and exclusion: a case study on digital libraries. ACM Transactions on Computer-Human Interaction (TOCHI), 12(2):174-200, 2005.
198
+
199
+ [2] E. Arias, H. Eden, G. Fischer, A. Gorman, and E. Scharff. Transcending the individual human mind-creating shared understanding through collaborative design. ACM Transactions on Computer-Human Interaction (TOCHI), 7(1):84-113, 2000.
200
+
201
+ [3] A. Azad, J. Ruiz, D. Vogel, M. Hancock, and E. Lank. Territoriality and behaviour on and around large vertical publicly-shared displays. In Proceedings of the Designing Interactive Systems Conference, DIS '12, pp. 468-477. ACM, New York, NY, USA, 2012. doi: 10.1145/2317956.2318025
202
+
203
+ [4] F. Brudy, D. Ledo, S. Greenberg, and A. Butz. Is anyone looking? mitigating shoulder surfing on public displays through awareness and protection. In Proceedings of The International Symposium on Pervasive Displays, PerDis '14, pp. 1:1-1:6. ACM, New York, NY, USA, 2014. doi: 10.1145/2611009.2611028
204
+
205
+ [5] M. Chetty, J.-Y. Sung, and R. E. Grinter. How smart homes learn: The evolution of the networked home and household. In International Conference on Ubiquitous Computing, pp. 127-144, 2007.
206
+
207
+ [6] A. Cockburn and S. Greenberg. Children's collaboration styles in a newtonian microworld. In Conference Companion on Human Factors in Computing Systems, pp. 181-182, 1996.
208
+
209
+ [7] J. Corbin, A. Strauss, and A. L. Strauss. Basics of qualitative research. Sage, 2014.
210
+
211
+ [8] S. Grange, T. Fong, and C. Baur. M/oris: a medical/operating room interaction system. In Proceedings of the 6th international conference on Multimodal interfaces, pp. 159-166, 2004.
212
+
213
+ [9] D. Harrison, P. Marshall, N. Bianchi-Berthouze, and J. Bird. Activity tracking: Barriers, workarounds and customisation. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, UbiComp '15, pp. 617-621. ACM, New York, NY, USA, 2015. doi: 10.1145/2750858.2805832
214
+
215
+ [10] J. Hettig, P. Saalfeld, M. Luz, M. Becker, M. Skalej, and C. Hansen. Comparison of gesture and conventional interaction techniques for interventional neuroradiology. International journal of computer assisted radiology and surgery, 12(9):1643-1653, 2017.
216
+
217
+ [11] J. Hettig, P. Saalfeld, M. Luz, M. Becker, M. Skalej, and C. Hansen. Comparison of gesture and conventional interaction techniques for interventional neuroradiology. International Journal of Computer Assisted Radiology and Surgery, pp. 1643-1653, 2017.
218
+
219
+ [12] C. Ishak, C. Neustaedter, D. Hawkins, J. Procyk, and M. Massimi. Human proxies for remote university classroom attendance. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 931-943, 2016.
220
+
221
+ [13] G. Jacucci, A. Morrison, G. T. Richard, J. Kleimola, P. Peltonen, L. Parisi, and T. Laitinen. Worlds of information: designing for engagement at a public multi-touch display. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2267-2276, 2010.
222
+
223
+ [14] R. Kamaleswaran, R. R. Wehbe, J. E. Pugh, L. Lennart, C. McGregor, and A. James. Collaborative multi-touch clinical handover system for the neonatal intensive care unit. Electronic Journal of Health Informatics, 9(1):5, 2015.
226
+
227
+ [15] D. L. Kappen, J. Gregory, D. Stepchenko, R. R. Wehbe, and L. E. Nacke. Exploring social interaction in co-located multiplayer games. In CHI '13 Extended Abstracts on Human Factors in Computing Systems, CHI EA '13, pp. 1119-1124. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2468356.2468556
230
+
231
+ [16] S. Kiesler, B. Zdaniuk, V. Lundmark, and R. Kraut. Troubles with the internet: The dynamics of help at home. Human-computer interaction, 15(4):323-351, 2000.
232
+
233
+ [17] R. Kildare, R. N. Williams, and J. Hartnett. An online tool for learning collaboration and learning while collaborating. In Proceedings of the 8th Australasian Conference on Computing Education - Volume 52, ACE '06, p. 101-108. Australian Computer Society, Inc., AUS, 2006.
234
+
235
+ [18] J. Kiseleva, K. Williams, A. Hassan Awadallah, A. C. Crook, I. Zitouni, and T. Anastasakos. Predicting user satisfaction with intelligent assistants. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '16, p. 45-54. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2911451.2911521
236
+
237
+ [19] P. Marshall, R. Morris, Y. Rogers, S. Kreitmayer, and M. Davies. Rethinking 'multi-user': an in-the-wild study of how groups approach a walk-up-and-use tabletop interface. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 3033-3042, 2011.
238
+
239
+ [20] H. M. Mentis, K. O'Hara, A. Sellen, and R. Trivedi. Interaction proxemics and image use in neurosurgery. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 927-936, 2012.
240
+
241
+ [21] A. Ng, S. A. Brewster, and J. H. Williamson. Investigating the effects of encumbrance on one- and two-handed interactions with mobile devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1981-1990. ACM, 2014.
242
+
243
+ [22] A. Ng, J. H. Williamson, and S. A. Brewster. Comparing evaluation methods for encumbrance and walking on interaction with touchscreen mobile devices. In Proceedings of the 16th international conference on Human-computer interaction with mobile devices & services, pp. 23-32. ACM, 2014.
244
+
245
+ [23] K. O'Hara, G. Gonzalez, A. Sellen, G. Penney, A. Varnavas, H. Mentis, A. Criminisi, R. Corish, M. Rouncefield, N. Dastur, et al. Touchless interaction in surgery. Communications of the ACM, 57(1):70-77, 2014.
246
+
247
+ [24] J. S. Parikh and K. Ghosh. Understanding and designing for intermediated information tasks in India. IEEE Pervasive Computing, 5(2):32-39, 2006.
248
+
249
+ [25] P. Peltonen, E. Kurvinen, A. Salovaara, G. Jacucci, T. Ilmonen, J. Evans, A. Oulasvirta, and P. Saarikko. It's mine, don't touch!: interactions at a large multi-touch display in a city centre. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 1285-1294, 2008.
250
+
251
+ [26] E. S. Poole, M. Chetty, T. Morgan, R. E. Grinter, and W. K. Edwards. Computer help at home: methods and motivations for informal technical support. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 739-748, 2009.
252
+
253
+ [27] A. V. Reinschluessel, J. Teuber, M. Herrlich, J. Bissel, M. van Eikeren, J. Ganser, F. Koeller, F. Kollasch, T. Mildner, L. Raimondo, et al. Virtual reality for user-centered design and evaluation of touch-free interaction techniques for navigating medical images in the operating room. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, pp. 2001-2009, 2017.
254
+
255
+ [28] L. B. Resnick, J. M. Levine, and S. D. Behrend. Socially shared cognition. American Psychological Association Washington, DC, 1991.
256
+
257
+ [29] W. A. Rutala, M. S. White, M. F. Gergen, and D. J. Weber. Bacterial contamination of keyboards: efficacy and functional impact of disinfectants. Infection Control & Hospital Epidemiology, 27(04):372-377, 2006.
258
+
259
+ [30] R. M. Ryan and E. L. Deci. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American psychologist, 55(1):68, 2000.
260
+
261
+ [31] N. Sambasivan, E. Cutrell, K. Toyama, and B. Nardi. Intermediated technology use in developing communities. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2583-2592, 2010.
264
+
265
+ [32] C. Satchell and P. Dourish. Beyond the user: Use and non-use in HCI. In Proceedings of the 21st Annual Conference of the Australian Computer-Human Interaction Special Interest Group: Design: Open 24/7, OZCHI '09, pp. 9-16. ACM, New York, NY, USA, 2009. doi: 10.1145/1738826.1738829
268
+
269
+ [33] P. Schroeder, M. Wilbur, R. Pena, S. Abt, et al. National survey on distracted driving attitudes and behaviors-2015. Technical report, United States. National Highway Traffic Safety Administration, 2018.
270
+
271
+ [34] J. Scott, D. Dearman, K. Yatani, and K. N. Truong. Sensing foot gestures from the pocket. In Proceedings of the 23rd annual ACM symposium on User interface software and technology, pp. 199-208, 2010.
272
+
273
+ [35] S. D. Scott, M. S. T. Carpendale, and K. M. Inkpen. Territoriality in collaborative tabletop workspaces. In Proceedings of the 2004 ACM conference on Computer supported cooperative work, pp. 294-303, 2004.
274
+
275
+ [36] S. D. Scott, K. D. Grant, and R. L. Mandryk. System guidelines for co-located, collaborative work on a tabletop display. In ECSCW 2003, pp. 159-178, 2003.
276
+
277
+ [37] S. D. Scott, K. D. Grant, and R. L. Mandryk. System guidelines for co-located, collaborative work on a tabletop display. In ECSCW 2003, pp. 159-178, 2003.
278
+
279
+ [38] D. Stavrinos, K. W. Byington, and D. C. Schwebel. Distracted walking: cell phones increase injury risk for college pedestrians. Journal of safety research, 42(2):101-107, 2011.
280
+
281
+ [39] A. Strauss and J. Corbin. Grounded theory methodology. Handbook of qualitative research, 17:273-85, 1994.
282
+
283
+ [40] A. Strauss and J. M. Corbin. Basics of qualitative research: Grounded theory procedures and techniques. Sage Publications, Inc, 1990.
284
+
285
+ [41] D. Vaddi, Z. Toups, I. Dolgov, R. Wehbe, and L. Nacke. Investigating the impact of cooperative communication mechanics on player performance in portal 2. In Proceedings of the 42nd Graphics Interface Conference, GI '16, p. 41-48. Canadian Human-Computer Communications Society, Waterloo, CAN, 2016.
286
+
287
+ [42] D. Vaddi, Z. Toups, I. Dolgov, R. Wehbe, and L. Nacke. Investigating the impact of cooperative communication mechanics on player performance in portal 2. In Proceedings of the 42nd Graphics Interface Conference, GI '16, p. 41-48. Canadian Human-Computer Communications Society, Waterloo, CAN, 2016.
288
+
289
+ [43] H. Visarius, J. Gong, C. Scheer, S. Haralamb, and L. P. Nolte. Man-machine interfaces in computer assisted surgery. Computer Aided Surgery: Official Journal of the International Society for Computer Aided Surgery (ISCAS), 2(2):102-107, 1997.
290
+
291
+ [44] M. Vodosek. Relational models in cross-cultural collaboration. In Proceedings of the 3rd International Conference on Intercultural Collaboration, ICIC '10, pp. 279-282. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10.1145/1841853.1841907
292
+
293
+ [45] A. Vtyurina and A. Fourney. Exploring the role of conversational cues in guided task support with virtual assistants. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, p. 208. ACM, 2018.
294
+
295
+ [46] R. R. Wehbe, T. Dickson, A. Kuzminykh, L. E. Nacke, and E. Lank. Personal space in play: Physical and digital boundaries in large-display cooperative and competitive games. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, p. 1-14. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3313831.3376319
296
+
297
+ [47] R. R. Wehbe, D. L. Kappen, D. Rojas, M. Klauser, B. Kapralos, and L. E. Nacke. EEG-based assessment of video and in-game learning. In CHI '13 Extended Abstracts on Human Factors in Computing Systems, CHI EA '13, pp. 667-672. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2468356.2468474
298
+
299
+ [48] R. R. Wehbe, E. Lank, and L. E. Nacke. Left them 4 dead: Perception of humans versus non-player character teammates in cooperative game-play. In Proceedings of the 2017 Conference on Designing Interactive Systems, DIS '17, p. 403-415. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3064663.3064712
300
+
301
+ [49] A. Zaman, L. Reisig, A. V. Reinschluessel, H. Bektas, D. Weyhe, M. Herrlich, T. Döring, and R. Malaka. An interactive-shoe for surgeons: Hand-free interaction with medical 2D data. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, CHI EA '18, pp. LBW633:1-LBW633:6. ACM, New York, NY, USA, 2018. doi: 10.1145/3170427.3188606
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference/qp1fTRLKbIj/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,189 @@
1
+ § "CAN YOU DO IT FOR ME?": UNDERSTANDING USE-BY-PROXY IN INTERACTIVE SYSTEMS
2
+
3
+ Anon*
4
+
5
+ Anon
6
+
7
+ § ABSTRACT
8
+
9
+ "I can't reach", "my hands are full", "I'm driving"—can you do it for me? If using a smartphone is challenging for a user because of either physical or cognitive encumbrances, they often ask another person to perform the desired task on their behalf. In this situation, the user with the motivation or goal to perform the task is not directly using the device but is instead working through an intermediary, a use-by-proxy, where the proxy-user has limited initiative. Through a qualitative study, we probe this use-by-proxy phenomenon. We explore triggers, frequencies, and breakdowns that confound use-by-proxy interaction. We identify the challenges both for the end-user and the proxy-user (e.g., that proxy-user input is a deficient form of interaction for both the main user and the proxy user) and discuss consequences and implications for the design of this uneven collaborative interaction.
10
+
11
+ Index Terms: Project and People Management-Life Cycle; Human-centered computing-Interaction techniques
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ As people become more attached to technology, we see an increase in users who struggle to operate their digital devices during a physically, socially, or cognitively taxing task. Examples of such struggles include smartphone use while physically encumbered [21, 22], while driving [33], or while having a conversation [38].
16
+
17
+ Human-Computer Interaction (HCI) designers have attempted to alleviate these challenges by designing alternative input methods that allow users to interact hands-free. Designers attempt to encourage hands-free interactions by refining the form factor of digital assistants (e.g., Siri, Google Assistant, Alexa, Cortana); however, hands-free options have yet to replace the temptation to use direct interaction techniques [18, 45], even under conditions where use of a digital device increases the likelihood of injury [38]!
18
+
19
+ A simple workaround often employed by encumbered users is to request help from a friend, allowing indirect interaction with the application through 'use-by-proxy'. In this use case we can identify two main user roles: (1) the primary user, who is motivated to interact with the application, and (2) the proxy-user, who executes the task. For example, a passenger who assists the driver with navigation by using a Global Positioning System (GPS) application enters an address specified by the driver. We note that use-by-proxy can occur on either the main user's or the proxy-user's device.
20
+
21
+ Use-by-proxy is a collaborative interaction; however, the lack of parity in this collaboration creates an important niche user pair to study. In contrast to other forms of collaboration, in the proxy-user use case the users are not motivated by the same goal; this is the main characteristic of use-by-proxy interaction. The primary user has a goal which necessitates the software use, and the proxy-user's goal is to assist. By its nature, this characterisation results in a range of possible interactions: from a proxy-user replying to a text message by entering text via dictation, to a proxy-user writing a longer document for another user. A proxy-user can support input (entering an address into a GPS system, for example) or output (providing directions from a GPS system to a driver).
22
+
23
+ [Figure 1 graphic]
24
+
25
+ Figure 1: The design domain diagram above clarifies the scope of our topic represented by the shaded green area.
26
+
27
+ In this figure, we clarify the scope of our paper. The x-axis represents an increase in granularity of data. The y-axis indicates the direction of information as input or output. From the diagram, readers are informed that the scope of the research paper focuses on low granularity, simpler tasks e.g. sending a text message on behalf of a user is within scope, whereas synthesising a complex email conversation is not.
28
+
29
+ In Figure 1, we describe this domain of use-by-proxy interactions and highlight (in green) our particular focus within this domain. We collect data on use-by-proxy scenarios that span the spectrum of use-by-proxy interactions with the goal of understanding the richness of these interactions.
30
+
31
+ Our paper contributes to the identification and understanding of the proxy-user edge use case, which results in an uneven collaboration between two users. Unique to this situation, the user who is interacting with the interface is not providing the main motivation to use the application. The primary user is instead accessing the functions of the application through a proxy, who we define as a secondary stakeholder.
32
+
33
+ To explore the proxy-user use case, we employ qualitative interviews, which probe participant experiences with use-by-proxy. Based on the results of our work, we highlight design implications and encourage designers to consider proxy use in computer-supported collaboration design, especially for applications commonly used while multitasking or in divided-attention scenarios (e.g., driving, exercising, cooking).
34
+
35
+ § 2 BACKGROUND AND RELATED WORK
36
+
37
+ Literature on collaboration is expansive and explores multiple factors including, but not limited to: time [19] or location synchrony [15], social learning [13, 17, 47], collaborator types [44, 48], communication [41, 42], and territoriality [34, 36, 37, 46]. Because of the vast collection of work in this area, we limit our scope to interactions between users engaging in uneven collaboration, specifically use-by-proxy interaction.
38
+
39
+ *e-mail: anon@email.com
40
+
41
+ Literature on uneven interactions includes Parikh and Ghosh's work [24] on mediated interaction. Mediated interaction is defined as a pattern of interaction where two or more users access the same device [24]. Their 2006 report describes four kinds of inter-mediation: (1) cooperative scenarios, where users get (nearly) equal access to the interface; (2) dominated scenarios, where one or more users dominate the access; (3) inter-mediated interaction (necessary when a user has no direct access to a device but depends on the outcome), where users can see the output directly; and finally, (4) indirect interaction, which deviates from its predecessor by removing the user's ability to observe the actions of their collaborator [24].
42
+
43
+ Restrictions in the physical space can prompt uneven collaboration. Shoulder-surfing content [4] is a well-studied aspect of public display use, which contributes to the staggered lineups of groups waiting for a display [3]. When shoulder-surfers begin to contribute to the interaction, we observe a use-by-proxy situation. Peltonen et al. [25] and Azad et al. [3] both investigated multi-user interactions with a touchscreen in a public space. The researchers concluded that the mediated interactions occurred because of physical space limitations. Members of a large group gathered around the display behind a limited subset of users, who could physically access the device. These physically distant users contributed to the interaction by proxy, advising and commenting on the user's actions [3, 25]. Similarly, studies of territoriality reveal that space constraints can stem from the division of collaborative spaces into territories [34, 35]. These territories limit access to the usable screen real estate.
44
+
45
+ Physical restrictions, as described above, limit access; however, physical constraints are only one factor that may bar users from interacting directly with an application. Users may be limited by their technology and computer literacy levels [31]. Sambasivan et al. [31] investigated patterns emerging from different constraint constellations during intermediated interaction. They describe surrogate usage, where a proxy-user handles input and interpretation of the output; proximate enabling, which involves a proxy-user operating a device owned by a technology-illiterate primary user; and proximate translation, system-output translation by proxy for textually illiterate users [31]. Furthermore, language proficiency and literacy levels can also be a barrier to use. Interaction may be carried out in an inter-mediated mode because of illiteracy of the primary user, fear of technology, habits of dependency, costs, or access constraints such as age [31]. Use-by-proxy clearly has accessibility advantages.
46
+
47
+ In contrast, collaboration may be sought based on the value of the contributor. For example, expert knowledge in domestic IT infrastructure is a contributing reason for the frequency of proxy-user input scenarios. Kiesler et al. [16] report that the intellectual authority in families can shift when a skilled or motivated member (mostly, but not necessarily, a teenager [5]) becomes the "family guru" [16]. Poole et al. [26] were motivated to explore the factors that influence the way such "helpers" provide aid. They found that although helpers often did not advertise their skills, they actively maintained their identity as experts and got frustrated when presented with an unsolvable problem or when a distrustful person requested help [26].
48
+
49
+ Proxy-user computing is a common input technique in operating rooms [10] and is referred to using different terms in the medical literature, including: task delegation [11], assistant-controlled computer keyboard [49], assistant-in-the-middle [8], or yell-and-click [43]. Surgeons have limited access to computing input devices because of the need for complete sterilisation [20, 29, 43].
50
+
51
+ Information handover in medical spaces is critical [14]. Although surgeons may rely on the information output by a computer application, they—quite literally—have their hands full [23]. Therefore, verbal delegation of computing tasks to a proxy-user is common in these environments [20, 43]. Proxy-user collaboration is often considered a benchmark for surgeon-computer interaction [11, 27, 49]. As a result, the compendium of medical literature provides insight into proxy-use situations and exposes the delicate nature of communication between a primary user and a proxy-user.
52
+
53
+ In 2004, Grange et al. [8] published a case study in which a misunderstanding between the surgeon and the proxy-user resulted in error. In an attempt to recover from this error, hospital management allowed the intervention of three additional assistants. Despite the added resources, the final resolution of the problem required the surgeon to halt the surgery, remove themselves from the sterile environment, and access the computer directly. The surgeon's actions resulted in a delay of eight minutes, a delay that could be fatal given the critical nature of surgery [8].
54
+
55
+ The disadvantages introduced by proxy-user scenarios are made explicit in the above example: proxy use is prone to misinterpretation, is indirect, and depends on the assistant's experience level. Therefore, research into who is selected as an assistant becomes relevant. Selecting a proxy-user goes beyond surface-level attributes (e.g., race or gender). Instead, scholastic abilities, aptitude, extroversion, and high participation levels dictate the desirability of a potential proxy-user [12]. The reported selection criteria [12] are in line with classic research on collaboration in the children's playroom by Cockburn and Greenberg [6]. These researchers reported that collaboration among children benefited from any kind of negotiation. Mutual awareness and breakdowns occurred even in successful collaborations, while domination or ignorance indicated less effective situations [6].
56
+
57
+ Selection of collaborators or requests for help in the workplace further reveals the reasoning behind collaborator selection. Adams et al. [1] reported on how different methods of accessing digital libraries are perceived in academic and health-care institutions. Digital information was made accessible (1) via existing computers in people's offices and libraries, (2) in shared spaces, and (3) by information intermediaries supporting the users (i.e., clinical staff). Users in academia using personal computers reported few points of contact with librarians and criticised the library system. In contrast, medical professionals working with computers in the hospital ward expressed that they felt a lack of personal competence, which was exacerbated when asking a younger colleague for support. Information intermediaries, which act as an interface between clinical staff and the digital library, add librarian domain knowledge; they were seen as beneficial for effective information usage [1].
58
+
59
+ After a review of the literature, it becomes apparent that use-by-proxy is an area of application design that warrants further exploration.
60
+
61
+ § 3 METHODOLOGY
62
+
63
+ To explore use-by-proxy interaction, we use a qualitative approach to investigate the occurrence of proxy-user situations. The overall goal of our study is to better understand participants' expectations and experiences of use-by-proxy. We investigated tasks in which one user is the main motivating driver (i.e., proxy-user scenarios).
64
+
65
+ For our study, we define the proxy-user use case as: a task where the primary user, who is motivated to operate the system, asks another human user to interact with the system on their behalf. The recruited user, who assists by allowing the primary user to use the application by proxy, is what we identify as the proxy-user. For example, when driving a car, the driver may ask the passenger to get directions from a mapping application.
66
+
67
+ § 3.1 PARTICIPANTS
68
+
69
+ Eighteen adult participants (18+ years old) were recruited via mailing lists and took part in individual interviews (described below). They were remunerated with 20 €.
70
+
71
+ § 3.2 INTERVIEW STRUCTURE
72
+
73
+ To understand and identify the definition of the proxy-user, our interviews began by asking participants about situations in which use-by-proxy may have occurred. To clarify use-by-proxy, we provided a number of scenarios to motivate discussion:
74
+
75
+ * Driving with a navigator,
76
+
77
+ * cooking together,
78
+
79
+ * putting together furniture with a friend,
80
+
81
+ * fixing a bike together,
82
+
83
+ * pair programming, and
84
+
85
+ * working collaboratively in their school or work career.
86
+
87
+ Additionally, we also asked how the relationship to the other user (friends, family) or the environment (who owns the house or car) affected the situation.
88
+
89
+ We encouraged participants to structure their description of use-by-proxy as a walkthrough of their interactions with or as a proxy-user in the various situations they identified. In our interviews, we attempted to elicit what is different in proxy-user collaborations. We asked the interviewees for insight into their thoughts, feelings, and decisions in these situations. Our goal was to identify factors that contribute to the division of responsibilities, including the unevenness of skill and the vulnerability of asking for help. We also discussed breakdowns in collaboration, such as providing too much or too little help. Finally, we looked at outside factors that influence the relationship between collaborators.
90
+
91
+ § 3.2.1 QUALITATIVE INTERVIEW ANALYSIS
92
+
93
+ The qualitative data was analysed in accordance with the procedures outlined by Corbin, Strauss, et al. [7, 39, 40]. To analyse the data, quotes were separated from general discussion (e.g., quotes regarding opinions vs. introductory conversation). Using a bottom-up approach, the quotations were aggregated and sorted using an affinity diagramming technique. Next, aggregated data points were analysed to pull relevant ideas and information. Overlapping categories were then explored using top-down analysis based on the themes arising from the data. Afterwards, related clusters of themes were analysed to uncover detailed differences, identify overlapping concepts, and pull larger, higher-level concepts from the data. The resulting work comprises the themes presented in the paper.
94
+
95
+ § 4 RESULTS
96
+
97
+ Using the qualitative methodology outlined above, we present our results in three parts: what a proxy-user is, why proxy-users are helpful, and how users engage with proxy-users and navigate the interaction.
98
+
99
+ § 4.1 PROXY-USER: AN UNEVEN COLLABORATOR
100
+
101
+ As we note in our introduction, use-by-proxy is an uneven collaboration. This observation was supported by our data. For example, because the main motivation for interacting with the application rests with the primary user, the primary user is also the most invested in the outcome.
102
+
103
+ "You have less responsibility. You have maximum responsibility for your task, for a special task you're doing or for part of the result, but I think the leader has a responsibility for everything that's happening. Also, for things other people do, and maybe don't do very good. So he's having more responsibility with more risk that if something goes wrong it's his fault" P1
104
+
105
+ Participants felt that, despite the role of the proxy-user being more akin to an assistant, the work is still a contribution and should be treated with the same respect as expected from any collaborative arrangement. Participants expressed that expectations of fair collaboration, positive leadership, and teamwork still apply.
106
+
107
+ "I think that's difficult if you're just the assistant and have to follow orders or the ideas of someone else, but still, if you have the feeling that you're contributing something valuable or something important, I think it can be a positive experience all in all. On the other hand, I think it also can be frustrating if you're not valued; only do the back work or not necessary stuff." P1
108
+
109
+ Moreover, proxy-user interactions remain susceptible to the pitfalls of any collaboration because finding a good collaborator is still a challenge. A collaborator can be unreliable, make mistakes, or misunderstand instructions entirely.
110
+
111
+ § 4.2 HELP: THE GREAT TRADE-OFF
112
+
113
+ Choosing to collaborate with a proxy-user has both positive and negative consequences, a result of the ever-present differences between computers, which reliably execute a task with consistency, and humans, who reliably introduce variability.
114
+
115
+ "I think it's just small errors, small human errors, like sometimes people do mistakes, and most of the time it's working right, but sometimes you have these inaccuracies in the description... For example, 'Okay, you have to take the third right.' Like, I don't know if they mean only the main streets, or the small streets in between, if you have blocks, you know? Like, do they count this small entry, like this small street, as a turn left, or do they only mean like the big junctions, you know? Like that kind of thing." P4
116
+
117
+ One challenge with proxy use, and a risk to the utility of proxy-users, is that humans can be unpredictable when it comes to the delivery of information. Anything from the clarity, quantity, depth, or framing of the information can vary. Thus, the trade-off between human and computer help is especially salient when comparing a human's interpretation to an expected computer-derived outcome. Since the primary user is expecting use-by-proxy of the application, the variability in human delivery can cause conflict. P4 explains:
118
+
119
+ "Like people usually don't guide me wrong, but in these scenarios, like I said, where there's margin for error, I'd rather trust the map. But I said, I don't trust the map, but I trust myself to interpret it right." P4
120
+
121
+ As this quote illustrates, the primary user is dependent on the proxy-user's interpretation of the information. The proxy-user is using an application and providing a synthesis of its information, a situation which can be irritating to a primary user who feels that they could have outperformed their proxy-user assistant. For proxy-user interactions to be successful, the primary user must trust that the proxy-user is capable of completing the task and outputting the correct information.
122
+
123
+ The variability in human communication presents benefits as well. Computers are limited in both the information that they accept and provide, and are also unsuited to particular types of information (e.g., emotional or expressive statements). Additionally, a proxy-user is able to rephrase, verify, and correct actions in accordance with the direction of the primary user.
124
+
125
+ "I can communicate when there are other problems, for example, the real situation is always different than the situation on Google Maps. And if I have a person next to me, I can say, I don't understand what you mean, and can you explain it again. I think it's better." P8 Moreover, the primary user does not need to worry about the format in which they input information to a human proxy-user.
126
+
127
+ "It's easier and quicker to tell them, 'Hey! Google that!' I mean, you can describe things and you don't have to think about how to Google it. You can just describe stuff, like a building, and tell them: 'hey find out what this is'. And you can concentrate on driving." P12
128
+
129
+ In summary, a proxy-user can provide rich information that is customised to the person they are talking to. Alternatively, a proxy-user can simplify and filter out unnecessary information. The overall effect of these decisions can be a tailored experience for the primary user in real time, but the challenge with giving and receiving help lies in matching expectations. Essentially, because in some instances the primary user wants information to guide their judgement, and in other instances they desire synthesis and judgement to simplify information sharing, interacting through or as a proxy-user presents pitfalls that are difficult to navigate.
130
+
131
+ § 4.3 NAVIGATING THE PROXY-USER RELATIONSHIP
132
+
133
+ One significant factor that influences proxy use is the pre-existing relationship between the primary and proxy-user. The better a proxy-user knows the primary user, the more additional cues, based on this pre-existing relationship, can be used to enhance proxy use. For example, a participant discussed how the close personal relationship with their mother results in better navigation information because the mother, acting as a proxy-user, will provide more information based on the primary user's emotional cues.
134
+
135
+ "I think because my mom can tell how I feel. 'She looks nervous, I have to tell her what comes next'." P2
136
+
137
+ Participants frequently noted that a closer relationship may result in better communication, or that the relationship may serve as a basis for trusting the information provided.
138
+
139
+ "I think that lots of factors. The relationship to the person you are trying to help or helping you. I mean I'm ... Everything is different, you know, when I'm trying to help my parents in language or whatever than helping for example my girlfriend or some student or some friend. I mean that's all different. The kind or the type of relationship you have, and age maybe too. Yeah, we'll talk differently to my grandma than to my mother, for example." P6
140
+
141
+ Pre-existing relationships can also make the experience more positive or fun in and of itself, particularly through joint struggles.
142
+
143
+ "It depends who the person is, but if it's a friend then it's, I think, alone a positive experience to interact with one another, even if it's just finding the way. I think this positive connection or interaction doesn't happen if you just have your phone, even if the phone is telling you, or Siri is telling you where to go. I think this positive experience is lacking." P1
144
+
145
+ That being said, one challenge with pre-existing relationships is that proxy-user interaction can also be more volatile. Close connection provides the possibility for expressing dissatisfaction, whereas more distant relationships are less likely to experience this tension.
146
+
147
+ § 5 DISCUSSION
148
+
149
+ Our investigation illustrates challenges associated with the uneven collaboration between the primary user and an assisting secondary user who enables use-by-proxy of an application.
150
+
151
+ In a proxy-use collaboration, two users approach an application with two different underlying motivations and expectations. The proxy-user is altruistically motivated to help and expects that the experience will be generally positive and socially rewarding. In contrast, the primary user who is seeking assistance enters the interaction expecting the proxy-user to perform at least on par with the application in the current situation. Despite the positive intentions motivating the proxy-user, the introduction of a person-in-the-middle does not always alleviate the burden placed on the primary user. Instead, the primary user may become frustrated by differences in how application outcomes are communicated. Given that many people are motivated by their own needs for competence and autonomy [30], shifting control to the proxy-user can have a demotivating effect on the primary user. The proxy-user, who expects positive social collaboration, respectful guidance, and feedback, may instead be met with negativity, resulting in conflict between both stakeholders.
152
+
153
+ Our qualitative approach reveals that, due to these differences in motivation and expected outcomes, the proxy-user use case differs from the carefully crafted user experience the application was designed around. Proxy-users therefore pose a challenge to designers, who typically focus on creating a usable interface for a single dedicated, directly motivated user.
154
+
155
+ Identifying the proxy-user use case demands further thought about the design of an interface, especially for applications designed to offer assistance during the cumbersome tasks that also motivate a request for use-by-proxy (e.g., driving or cooking). Our results highlight that the proxy-user may be skilled at the task, but has limited accountability for the outcome and may need additional guidance.
156
+
157
+ Both users in the proxy use case face challenges completing distributed responsibilities. These challenges are further exacerbated by the need to manage the overlaid social challenges accompanying any collaboration. Moreover, given that the role of proxy-user exists to help others, one question we can pose is: beyond altruism, what encourages the proxy-user to maintain the collaboration? The question becomes especially relevant when the proxy-user is forced to maintain the role (e.g., over long-term tasks). Motivated by the challenges of proxy use, we pose a series of design questions that seek to present alternatives to better support this uneven collaboration.
158
+
159
+ § 5.1 QUESTION 1: CAN WE ELIMINATE THE PROXY-USER?
160
+
161
+ Motivated by the notion that this form of collaboration is undesirable, designers may wish to eliminate the need for a proxy. To accomplish this, designers must overcome the shortcomings of the current design. The question then becomes how best to do so.
162
+
163
+ Understanding the workarounds of a current system can help us determine what the system should do [9, 32]. In the proxy-user use case, demands made on the proxy-user indicate design directions by allowing designers to understand the gaps in the application's current design and implemented features. We argue that, if it is conceivable that an application could require use-by-proxy (and many may feasibly be used this way), user testing protocols should include a proxy-use scenario to understand the different and changing expectations of both primary and proxy-users as they complete demanding tasks.
164
+
165
+ In many situations, designers already propose solutions that partially alleviate the need for proxy use. For example, designers of applications geared to multitasking or divided-attention scenarios (e.g., driving, exercising, cooking) have explored multiple methods to increase hands-free capabilities, and many companies are exploring smart assistants, smart homes, and voice activation. Given both this progress and desired system features [45], the elimination of proxy use may be feasible in certain contexts. Realistically, the existence of the proxy use case indicates that advances in hands-free and smart-assistant technology still cannot replace the desire to ask another human for help. However, if designers actively explore these use cases, work to understand heterogeneous expectations of information and synthesis, and continue to explore alternative designs, it seems feasible that technological advances can begin to address the proxy use case by, in whole or in part, eliminating the need for proxy-users.
166
+
167
+ § 5.2 QUESTION 2: CAN WE RE-BALANCE THE COLLABORATION?
168
+
169
+ As an alternative approach, designers may attempt to balance the uneven collaboration of the proxy-user by shifting the experience profile from Assistance to Collaboration. The goal would be to transform an "over-the-shoulder boss" into a more equitable "pair-programming" paradigm.
170
+
171
+ We can support a re-balance of collaboration by supporting the fundamentals of proxy-user collaboration: communication, attitude, and skill. At a low level, this may include adding gaze-level support to understand how the primary user may attempt to glean information from the application as it is operated by the proxy-user. Alternatively, to help a proxy-user comprehend the instructions given, designers may consider creating a wizard or workflow that a proxy-user can follow to distil all the necessary information from the application to the primary user in concise, easy-to-follow language. These distillations can be geared toward different levels of abstraction: information, synthesis, suggestions, and alternatives. Moreover, a new simple visualisation mode could be added with the intention of supporting the proxy-user's explanation. For example, simplistic and blurred visuals may supplement the proxy-user's instructions and reduce the risk of distraction by presenting limited visual information.
172
+
173
+ At a higher level, we want our tools to contribute to the distributed or shared cognition [2, 28] necessary for a successful proxy-user interaction. By shifting responsibilities away from the primary user, we may obligate a division of cognition between the two users. Strategies can include designing an overview for the proxy-user instead of tailoring the interaction directly towards the primary user. For example, instead of providing an overview of information, allow the proxy-user to build a custom notebook themselves using the application's tool set. Additional functionality can support the proxy-user, who may flip ahead in a manual or along a route and attempt to create a mental summary. While successful recall of information is difficult and learning takes time, if proxy-users can pre-learn, they may become more informed collaborators.
174
+
175
+ § 5.3 QUESTION 3: CAN WE AIM TO GUIDE THE PROXY-USER?
176
+
177
+ An ongoing challenge for applications is that they are currently designed for a single user, the device owner. However, there are a number of applications where use-by-proxy seems obvious, including navigation systems. Instead of targeting the primary user, the application could target the proxy-user in a secondary mode by changing the presentation of information from a workflow aimed at the person completing the task to a workflow for a person who seeks to assist.
178
+
179
+ As in the previous example of re-balancing, a modification of the presentation of information can focus on cueing information to help the proxy-user prepare for and anticipate next steps. For example, the application could identify larger time gaps between steps to indicate a good time to communicate complex information to the primary user. Moreover, designers might employ picture-in-picture views to allow for peripheral monitoring by the proxy-user during these longer inactive periods. Notifications of upcoming events can help avoid the sudden call-to-action experienced by a proxy-user while multitasking.
180
+
181
+ § 6 CONCLUSION
182
+
183
+ It is immediately obvious to anyone who has navigated, transcribed a text message, followed a recipe, assembled furniture, or controlled a slide show that, in these situations, another person will sometimes provide information - often from an application - while the user performs a physically or cognitively challenging task. We label these non-primary users 'proxy-users'. The HCI and design research literature seems almost silent on an analysis of proxy use and the proxy-user experience. What happens in proxy use? Are there pitfalls? Are there opportunities for improvement?
184
+
185
+ Our paper contributes to this design space by highlighting areas for further investigation into this unique form of uneven interaction. Can it be eliminated? Re-balanced to more equal collaboration? Enhanced to improve proxy use efficacy? This paper seeks to formulate these basic questions to encourage more targeted discussion of this unequal, yet frequent, style of collaborative application use.
186
+
187
+ § ACKNOWLEDGMENTS
188
+
189
+ The authors wish to thank A, B, C. This work was supported in part by a grant from XYZ.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/1bxh-dKdrn4/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,633 @@
1
+ # "I Keep Sweet Cats In Real Life, But What I Need In The Virtual World Is A Neurotic Dragon": Virtual Pet Designs With Personality Patterns
2
+
3
+ Hongni Ye* (Duke Kunshan University)
+
+ Ruoxin You† (University College London)
+
+ Kaiyuan Lou‡ (Duke Kunshan University)
+
+ Yili Wen (Duke Kunshan University)
+
+ Xin Yi (Tsinghua University)
+
+ Xin Tong (Duke Kunshan University)
20
+
21
+ ## Abstract
22
+
23
+ Virtual pets serve as companions and support meaningful in-game narratives in the metaverse. Players have unique personalities and personality preferences for their pets. However, the design of virtual pets often relies on designers' individual experiences without considering the virtual pets' personalities. We designed voxel-style virtual pets' visual appearances following design guidelines derived from the Five Factor Model (FFM). We investigated people's perceptions of virtual pets' personalities and appearances through two user studies. Our findings suggest that voxel-style virtual pets represented agreeableness better than realistic pet pictures. Additionally, users prefer virtual pets (voxel-style pets generated with machine learning techniques) whose personalities are similar to their own. The study's results provide valuable insights for game designers and researchers for future pet-game design and for understanding how people perceive virtual pets based on their appearance and behavior.
24
+
25
+ Index Terms: H.5.2 [User Interfaces]: User-centered design-Style guides; H.5.1 [Multimedia Information Systems]: Artificial, augmented, and virtual realities-Evaluation/methodology
26
+
27
+ ## 1 INTRODUCTION
28
+
29
+ As technology advances, the demand for artificial companions [14] with diverse personalities in different social roles increases [13]. Virtual pets, in particular, have been gaining immense popularity in the digital games field. They serve as a source of entertainment [11, 48, 67] and present the potential to promote mental health [9, 58, 90] and enhance children's skill development [8, 12, 24, 45]. Virtual pets offer a unique opportunity for individuals who cannot keep real pets for various reasons, such as allergies or lack of resources. Moreover, research has suggested that interacting with virtual pets can lead to positive emotional outcomes, such as reduced anxiety [33].
30
+
31
+ There are various styles for designing virtual pet characters; the most popular are the realistic style and the cartoon style. The realistic style, in high fidelity, provides a lifelike appearance [58], while the cartoon style, including voxel and two-dimensional sketch styles, creates a more cartoonish look. For example, Tamagotchi [4] and Pokemon Go [2] use sketch and cartoon 3D styles, respectively, and 3D voxel-style virtual pets are present in games like Minecraft [1] and Sandbox [3]. To ensure efficient character modeling and to avoid the negative impacts on people's perceptions of virtual characters caused by low aesthetic quality and intermediate rendering realism [73], we designed our virtual pet characters in voxel style. With the explosive growth of machine learning and the refinement of generative networks, neural networks like generative adversarial networks (GANs) or diffusion models are widely used in industry to replace the manual production of pictures and models [25, 65]. However, generative models are more common for 2D pictures or 3D facial construction than for virtual pets, making pet models scarce and expensive.
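+
+ For readers unfamiliar with this pipeline, the following minimal PyTorch sketch (our illustration; the paper does not specify an architecture, and `VoxelGenerator` with all of its hyperparameters is an assumption) shows the general shape of such a 3D generative model: a generator mapping a latent vector to a voxel occupancy grid, which can then be thresholded into a blocky model.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class VoxelGenerator(nn.Module):
+     """Map a latent vector to a 32x32x32 voxel occupancy grid in [0, 1]."""
+     def __init__(self, z_dim: int = 128):
+         super().__init__()
+         self.net = nn.Sequential(
+             nn.ConvTranspose3d(z_dim, 256, 4),                     # 1 -> 4
+             nn.ReLU(),
+             nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1),  # 4 -> 8
+             nn.ReLU(),
+             nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),   # 8 -> 16
+             nn.ReLU(),
+             nn.ConvTranspose3d(64, 1, 4, stride=2, padding=1),     # 16 -> 32
+             nn.Sigmoid(),
+         )
+
+     def forward(self, z: torch.Tensor) -> torch.Tensor:
+         return self.net(z.view(z.size(0), -1, 1, 1, 1))
+
+ g = VoxelGenerator()
+ occupancy = g(torch.randn(1, 128))   # shape (1, 1, 32, 32, 32)
+ voxels = occupancy > 0.5             # threshold into filled/empty voxels
+ print(voxels.shape, int(voxels.sum()))
+ ```
+
+ In a GAN setting this generator would be trained against a 3D discriminator; a diffusion model would instead learn to denoise such grids step by step.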
32
+
33
+ Virtual pet games lack personality diversity and fail to provide a similar experience to raising a real pet, as their personalities are unrelated to their appearance. Previous studies have explored personality differences in real pets, such as cat and dog breeds $\left\lbrack {{15},{50},{71},{87}}\right\rbrack$ . Although previous studies have shown that personality differences exist in non-human animals [27], it lacks research on cross-species personality traits in pets. This gap in the literature limits our understanding of the factors that contribute to the development of personalities in different species and how we can design virtual characters that can mimic and respond to these traits.
34
+
35
+ To address the gaps in current research, our study focuses on creating virtual pets that display variations in personality and exploring how players perceive their behaviors and appearance. We aim to answer three primary research questions: (1) How do users perceive the personalities of virtual pets in different styles and representations? (2) How do virtual pets' personalities relate to their appearance, and how can we design virtual pets involving their personalities? (3) What are individuals' perceptions of the appearance and personality of virtual pets generated through machine learning techniques?
36
+
37
+ Through two studies, we found that style and presentation significantly affected users' perceptions of virtual pet personalities. Additionally, we found that visual cues, such as skin color and body shape, can significantly influence how virtual pets' personalities are perceived. In general, our contributions will be threefold: (1) In the game design domain, we applied pets' personality variations to virtual pet design by following the FFM. And we evaluated our designed virtual pet characters with pre-defined personality traits with users' study. (2) We combined the traditional method and the machine learning technique to generate the appearance of virtual pet. We embedded Neural Cellular Automaton (NCA) [55] into our generation process to increase the diversity of generated models, which provides a novel method for game character design. (3) Our study examined the potential of using voxel pets' appearances as virtual companions to enhance the mental well-being of young individuals by reducing anxiety levels through interactive engagement with virtual pets.
38
+
39
## 2 RELATED WORK

Below, we describe design examples of virtual pets and machine learning techniques for generating their models.

### 2.1 Virtual Pet Design and Generation

Pets serve an important function as human companions, and a strong, healthy human-animal relationship benefits both parties [82]. As suggested by previous researchers, appearance is an important consideration in pet owners' decision-making [86]. One tenet of folk psychology is that people tend to select pet dogs that have a similar appearance to themselves [68]. In the artificial-animal design field, a study on human-alloanimal relations highlighted that cartoon animals can make people want to be close to the depicted animal because their appearances are designed to be approachable, cuddly, friendly, and fun [18]. Thus, appearance design deserves attention when designing virtual pets.

---

*e-mail: hongni.ye@mail.polimi.it

†e-mail: echoyou67@outlook.com

‡e-mail: midstream.lou@gmail.com

§e-mail: yili.wen@dukekunshan.edu.cn

¶e-mail: yixin@tsinghua.edu.cn

‖e-mail: xt43@duke.edu

---

In the Virtual Reality (VR) pet game domain, pets' appearance design is intimately connected with user preferences, which in turn promotes a better game experience and immersion. As introduced by Chaoran Lin et al., there are three user types based on users' motivations and expectations: (1) pet-keepers, (2) animal teammates, and (3) cool hunters [48]. This user-type model inspired us to conduct case studies of virtual pets' appearances according to their target user types. We proposed three virtual pet types: (1) natural, (2) intelligent, and (3) fantasy. We conducted case studies based on these types and summarized the results in cards (see the case study results in the appendix). Through the case study, we discovered that most cases concern natural pets; taking the accessibility of knowledge about pets' personalities into consideration, we decided to focus on natural pets as our target pet type in this study.

Machine learning techniques have gained widespread adoption for image and model generation in recent years. Neural networks, including VAEs [42], DCGANs [66], and diffusion models [36, 77], have demonstrated impressive image generation capabilities. These models have also been extended to 3D model generation by increasing the dimensionality of the data [65, 91]. Generating 3D meshes is another popular approach to 3D generation, which can also be accomplished with neural networks such as VAEs [23]. Mesh-based models offer an alternative means of 3D representation that can be more efficient and suitable for certain applications. However, all of these generation approaches require a large amount of high-quality training data, which restricts their usage in virtual pet generation. We therefore decided to use a Neural Cellular Automaton (NCA) in generation [55]. By combining the NCA with a traditional generation process that recombines different parts of models, we can obtain plenty of high-quality models with little training data.

### 2.2 Exploring Personality Traits in Animals

As illustrated by Yerkes [93], it is commonplace to regard individual animals as possessing distinct personalities, and other researchers have shown that personality differences do exist and can be measured in animals other than humans [27]. A great number of works have studied pets' personalities, including cats [44], dogs [20], ferrets [78], dolphins [57], and reptiles [85]. Through the literature review, we discovered that cat and dog breeds have been investigated the most, so we further examined work on personality traits among cat and dog breeds. For cat breeds, Milla Salonen et al. grouped cat breeds into four clusters by analyzing their personality traits along three components named aggression, extraversion, and shyness [71]. Other researchers ranked dog breeds on ten behavioral characteristics across three factors (aggression, reactivity, trainability), considering breeds from the three most closely related groupings: the wolf-like, guarding, and herding groups [29]. According to the Fédération Cynologique Internationale (FCI) [37], there are ten dog groups based on various discriminators such as appearance or role. The existing variations of personality traits among different cat and dog breeds motivated us to apply personality analysis to virtual pet design.

The Five-Factor Model (FFM) is one of the most commonly used instruments for measuring human personality [53]. The FFM comprises the dimensions Neuroticism (N), Extraversion (E), Openness to Experience (O), Conscientiousness (C), and Agreeableness (A). Some researchers have also applied the FFM to personality tests in other species. For example, Samuel D. Gosling et al. built a preliminary framework using the human Five-Factor Model plus Dominance and Activity for measuring the personalities of 12 nonhuman species; their results indicated that various primates, nonprimate mammals, and even guppies and octopuses all show individual differences that can be organized along dimensions akin to E, N, and A [26]. Another work presented the "Feline Five," adapted from the FFM with a five-factor analysis: Neuroticism, Extraversion, Dominance, Impulsiveness, and Agreeableness. The "Feline Five" has been shown to provide a more comprehensive overall personality structure for domestic cats and therefore has great potential for measuring our designed virtual pets' personalities. We adapted these instruments in our pet design method, which is introduced in Section 4.

## 3 VIRTUAL PET DESIGN IN VOXEL STYLE

To investigate the influence of pets' appearance traits on human perception of personality, we developed virtual pet characters inspired by real-life pets. Our methodology involved categorizing primary cat and dog breeds into six distinct clusters based on their appearance traits and creating a virtual pet character to represent each cluster. Using this approach, we aimed to make virtual pets with unique and recognizable physical characteristics while incorporating diverse appearance traits.

### 3.1 Design Objective

To address the research questions, our design objective was to create virtual pet characters with a broad range of appearance traits and associated perceived personality traits. We intended to develop a mapping guideline that linked the pets' appearance traits with their corresponding personality traits. As such, we hypothesized an appearance-personality mapping to guide our design process. We recognized that individual perceptions might vary; hence, we aimed to ensure a degree of consistency in how people perceive our virtual pets.

### 3.2 Design Baseline: Clusters of Cats and Dogs

After conducting a case study on three types of virtual pets (see Fig. 2), we decided to begin our initial design with dogs and cats, among the most popular and common domestic pets [5]. To achieve our design objective of creating virtual pets with a broad range of appearance traits and perceived personality traits, we compiled a list of common domestic cat and dog breeds. We categorized them into clusters based on the distinctive characteristics of different body parts, referring to the classification systems of the FCI [37] and Cat Breeds [62]. We then mapped personality traits to different breeds to identify consistencies within each cluster. Personality traits were transferred from previous studies on cats' and dogs' behavior traits [30, 72]. We merged groups with similar appearance and behavior traits while excluding those with highly varied behavior traits, dividing the different breeds of cats and dogs into six clusters. Fig. 1 displays the appearance traits, typical breeds, hypothesized personalities, and our corresponding virtual pet characters for each cluster.

### 3.3 Design Baseline: Visual Expression

In addition to character archetypes, visual expression plays a crucial role in shaping people's perception of virtual pets [81]. To ensure that our designed pets are displayed in a multi-dimensional form and have a wide range of appearance traits, we created them in 3D using voxel style. This approach increases our efficiency in character modeling and eliminates the potential negative impact of low aesthetic quality and intermediate rendering realism on people's perceptions of virtual animal characters [73]. We were also inspired by the potential of 3D voxel-style modeling in virtual pet design and aimed to extend 3D generation techniques into this field, building on the success of voxel-based models in Minecraft and sandbox games, where such techniques are well suited to producing visually appealing virtual pet characters. During our follow-up interviews, we found that people preferred virtual pets in voxel style over realistic models, further confirming the potential of the voxel style.

![01963e01-1b0b-7f05-8747-0f4718af520b_2_179_174_1443_653_0.jpg](images/01963e01-1b0b-7f05-8747-0f4718af520b_2_179_174_1443_653_0.jpg)

Figure 1: Design of cat and dog clusters. The pet side-view pictures used for reference were obtained from [31] and [32].

![01963e01-1b0b-7f05-8747-0f4718af520b_2_174_954_1324_563_0.jpg](images/01963e01-1b0b-7f05-8747-0f4718af520b_2_174_954_1324_563_0.jpg)

Figure 2: Case study clusters: (1) natural pets (green cards): virtual pets with appearances and characteristics similar to real-life pets, such as dogs and cats; (2) intelligent pets (red cards): good teammates for players, who can perform tasks in the virtual environment; (3) fantasy pets (purple cards): pets with a sci-fi look that can assist players in exploring the environment.

Visual elements, including shapes, volumes, and colors, are essential components of character design and significantly impact the emotional experiences a character creates [76]. To avoid influencing the diversity of perceived personality traits between realistic and virtual pets, we controlled the visual elements by following realistic dogs' and cats' body structure, proportion, and color palette in our character designs. This ensures that any differences in perceived personality traits between realistic and virtual pets can be attributed to factors other than visual expression.

### 3.4 Character Design

We created six characters based on the clusters, selecting one breed within each cluster as the model sample. We used MagicaVoxel¹ to create voxel-style characters and render static pictures. We limited the model sizes to 31 × 31 × 31 voxels to facilitate pet generation.

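For reference, a model of this size can be stored as a dense 3D grid of palette indices, which is the representation assumed by the generation sketches in Section 5. The sketch below is illustrative only; the variable names and palette are ours, not part of MagicaVoxel's file format.

```python
import numpy as np

GRID = 31  # bounding box of every pet model: 31 x 31 x 31 voxels

# A pet model: 0 marks an empty voxel, positive values index a palette.
model = np.zeros((GRID, GRID, GRID), dtype=np.uint8)

# Illustrative palette (RGB); index 0 is reserved for "empty".
palette = np.array([
    [0, 0, 0],        # empty / unused
    [240, 200, 120],  # warm yellow coat
    [250, 250, 250],  # white belly patch
], dtype=np.uint8)

# Carve a crude body out of the grid and paint a lighter belly patch.
model[10:21, 11:20, 8:23] = 1
model[13:18, 11:14, 10:21] = 2

print(f"{np.count_nonzero(model)} of {GRID**3} voxels are filled")
```
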
In addition, previous research showed the potential impact of animation on people's perception of virtual pets [75]. Therefore, we designed another version of the virtual pets with natural movement to study the effect of additional expressiveness on people's perceptions. We did not include facial animation because of the potential negative reaction caused by the animal uncanny valley [73]. We designed a walking animation for the cat clusters and a running animation for the dog clusters based on the nature of these two pet species. We used VoxEdit² to build and Blender³ to render the animation clips.

---

¹ https://ephtracy.github.io/

---

## 4 STUDY 1

Study 1 included surveys and interviews to understand users' perspectives on virtual pet characters. The survey was designed with three goals: 1) compare participants' perceived personalities of different styles of pets within the same cluster; 2) evaluate how our designed pets attract people; 3) understand their perceptions of keeping real pets and virtual pets. The follow-up interview aimed to further understand the reasons behind participants' perceptions based on the results of the online survey. The study obtained ethical approval from the Institutional Review Board.

We designed the survey using a mixed 3 × 6 design (pet character styles × pet clusters). Our experimental conditions for the pet character styles were: real static pets, static virtual pets, and animated virtual pets. We controlled the pets representing the different conditions so that they belonged to the same clusters, which we defined in Section 3.2. Participants were randomly assigned one pet cluster and rated the personalities and overall feelings of all three pet character styles. We then invited 9 participants for follow-up interviews to determine the factors behind people's perceptions. The interview questions covered the factors behind participants' answers and included two card-sorting sessions for further exploring participants' perceptions based on their answers.

### 4.1 Participants

Participants voluntarily self-selected to complete the survey and consented before taking it. We recruited 33 participants (12 males, 20 females, 1 non-binary) via social media and word of mouth. Participants were aged 18-24 (N = 24) and 25-34 (N = 9). 24 participants had experience keeping real pets; 7 had dogs and 7 had cats. For virtual pets, 13 participants reported they had played virtual pet games before, including Animal Crossing (N = 3), Tamagotchi (N = 2), Tencent QQ Pet (N = 6), and others (N = 4).

### 4.2 Measurement

The online survey consists of four parts. The first three parts measured the perceived pet personality under three conditions: one real pet picture, one static virtual pet picture, and one virtual pet animation clip were randomly distributed into one of the three parts. All real pet pictures were downloaded online, with the same white background and showing the pet's whole body. We downloaded ten pictures for each cluster and randomly displayed one in the survey. The researchers created the static pictures and animation clips of the virtual pets.

We measured perceived personality with an adapted 7-point scale. We designed the scale based on the Feline Five [49] and pooled items from previous personality assessments of cats [47] and dogs [70]. The scale included four measurement dimensions that are general and commonly used to measure pets' personalities. Each dimension had four pairs of contrasting description items. All 32 items appeared in random order in each part of the survey. Participants rated each item from 1 to 7 points according to the extent to which they agreed that the description matched the material provided. After the 32-item chart, two questions followed to determine whether the participants knew the breed of the pet in the picture and their overall feeling about the pet. The last part collected participants' demographic information, experience keeping pets, and attitudes toward pets.

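To make the scoring concrete, the sketch below shows one way such a scale can be aggregated. It is a minimal sketch under our own assumptions (each dimension score is the sum of its eight item ratings, with one item of each contrasting pair reverse-coded); the item ordering and reverse-coding layout are placeholders, not the actual questionnaire.

```python
import numpy as np

DIMENSIONS = ["agreeableness", "neuroticism", "extraversion", "impulsiveness"]
SCALE_MAX = 7  # 7-point ratings

def score_dimensions(ratings, item_dims, reverse_items):
    """ratings: (32,) answers from 1-7 in presentation order.
    item_dims: (32,) dimension index of each item.
    reverse_items: (32,) bool; True = reverse-coded item."""
    ratings = np.asarray(ratings, dtype=float)
    item_dims = np.asarray(item_dims)
    adjusted = np.where(reverse_items, SCALE_MAX + 1 - ratings, ratings)
    return {dim: adjusted[item_dims == d].sum()
            for d, dim in enumerate(DIMENSIONS)}

# Toy example: 4 dimensions x 4 pairs, one reverse item per pair.
rng = np.random.default_rng(0)
item_dims = np.repeat(np.arange(4), 8)
reverse = np.tile([False, True], 16)
answers = rng.integers(1, SCALE_MAX + 1, size=32)
print(score_dimensions(answers, item_dims, reverse))
```
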
The semi-structured interview had four themes: pet personality, pet appearance, pet interaction, and feelings about our designed pet characters. We collected participants' survey answers and visualized them in a table shown during the interview to refresh their memory. In addition, we created two card-sorting sessions based on the data from the open-ended questions. One aimed to determine participants' perceived cuteness, which had repeatedly been proposed in the surveys as an expected feature of virtual pets: we created cards with our designed voxel characters and similar ones, and asked the participants to select the cards they thought were cute. The other card-sorting session focused on the expected interaction methods with virtual pets, which had been asked about in one of the open-ended survey questions: we coded participants' answers, selected nine of them to make cards, and asked the participants to pick and rank the cards according to their expected interactions.

### 4.3 Procedure

Participants were randomly distributed into one of six control conditions and completed a Qualtrics form. In the last question, we asked if they were willing to participate in our follow-up interview. After analyzing the survey data, we emailed ten participants whose answers were consistent or contrary to our data results; nine of them consented to the interview. Our interviews were conducted online via Feishu meetings. The participants first had five minutes to read and sign the consent form. Then, we conducted a 40-minute semi-structured interview; participants received 100 RMB as compensation for their time and contribution.

### 4.4 Results

The quantitative and qualitative results showed that the style and appearance of virtual pets significantly impact participants' perceptions of their personalities. We also found that participants connected perceived personalities with the appearances of virtual pets. We conclude with design suggestions involving expected pet types, personality presentation factors, and interaction with pets, which can benefit future virtual pet design.

#### 4.4.1 Perceived Personality in Different Styles

The results of the repeated-measures ANOVA show that the style of pets (realistic, virtual) and the presentation (static, animation) significantly affected people's perceptions of their personality traits, especially for Neuroticism and Agreeableness. Fig. 4 shows the distribution of scores on the four personality traits. People's perception of agreeableness was primarily influenced by pet style (F(2, 29) = 8.10, p < 0.01): participants found voxel pets much more agreeable (mean = 40.58, SD = 6.95) than realistic pets (mean = 33.7, SD = 8.42), and voxel animations received an agreeableness score close to that of static voxel pets (mean = 37.7, SD = 7.82). For Neuroticism, realistic pets received the highest score (mean = 31.6, SD = 5.74), and both static voxel pets and voxel animations were rated less neurotic (static voxel pets: mean = 26.8, SD = 5.42; animation: mean = 27.87, SD = 5.77). Participants' perceptions of pets' extraversion were not significantly influenced by pet style (F(2, 29) = 1.28, p = 0.28); all three styles received extraversion scores relatively close to each other (realistic pets: mean = 36.67, SD = 7.61; static voxel pets: mean = 33.73, SD = 7.47; animation: mean = 34.5, SD = 5.45). Impulsiveness remained stable when the style changed (F(2, 29) = 0.59, p = 0.55), and the low standard deviations indicate that most participants gave similar impulsiveness scores across all three styles.

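For readers who wish to reproduce this kind of comparison, the analysis maps onto a one-way repeated-measures ANOVA on long-format data. The sketch below uses the pingouin package; the column names and toy scores are illustrative, not our study data.

```python
import pandas as pd
import pingouin as pg

# Long format: one row per participant x pet style (toy values).
df = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "style": ["realistic", "voxel_static", "voxel_anim"] * 3,
    "agreeableness": [33, 41, 38, 30, 42, 36, 35, 39, 40],
})

# Within-subject factor: style; subject identifier: participant.
aov = pg.rm_anova(data=df, dv="agreeableness",
                  within="style", subject="participant")
print(aov[["Source", "F", "p-unc"]])
```
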
Through the interviews, we discovered three main reasons for these results. First, design details played a significant role in expressing agreeableness (N = 8). For example, a participant who gave the voxel pet high agreeableness scores explained, "I might think voxel is a little bit chubby and silly, but I think it's a little more friendly" (P15). Second, participants' evaluations were influenced by their previous experience and by individual differences in time spent with pets (N = 3). A participant who rated the realistic pets highest in neuroticism said, "If it was a real pet, it reminded me that it could do things that were threatening to me. I was bitten by a dog when I was a child, whereas these virtual things couldn't really threaten me" (P25). Third, some participants rated personality by inferring the pet's breed (N = 3). P33 told us, "The real pet photo is of a Ragdoll cat, albeit well-behaved. But voxel's static and animated features are not reminiscent of a cat running amok." Thus, we could understand why voxel works best for showing agreeableness, while real pet pictures make the pets look more neurotic.

---

² https://www.voxedit.io/

³ https://www.blender.org/

---

![01963e01-1b0b-7f05-8747-0f4718af520b_4_169_161_1435_492_0.jpg](images/01963e01-1b0b-7f05-8747-0f4718af520b_4_169_161_1435_492_0.jpg)

Figure 3: Procedure of Study 1 and Study 2.

#### 4.4.2 Association between Personality and Appearance

Our goal was to create pets with various personality types based on appearance and perceived personality traits, resulting in six pet clusters, as depicted in Fig. 1. Our survey results indicated that participants' perceptions of pets' personalities aligned with our design intentions. For instance, participants rated the first cat cluster (fat yellow cat) and the second dog cluster (medium-sized yellow-white dog) as having the highest agreeableness scores, consistent with our pet personality design goals.

Moreover, participants' extroversion rankings for the cat and dog clusters aligned with our initial design intentions. Specifically, participants' extroversion rankings for the cat clusters were, in order, cat cluster 2, cat cluster 3, and cat cluster 1, while their rankings for the dog clusters were, in order, dog cluster 3, dog cluster 2, and dog cluster 1. These results suggest that our virtual pet design successfully conveyed various personality traits through appearance and that participants could perceive and rate these traits accurately.

#### 4.4.3 People Prefer Pets with Cute Appearances but Neurotic Personalities

Cuteness was a word repeatedly mentioned by participants during the interviews. Participants liked cute pets because cute looks make people feel safe and because pets that look cute seem easier to keep. For instance, one participant who liked cute pets explained, "Cute looking pets are better behaved and easy to keep, while naughty pets may be more nerve-racking" (P16). Cuteness has some common, distinct appearance patterns. One is body shape: participants associated small size, a short and fat build, short legs, and round ears with cuteness. "The pet's small size makes it less aggressive," said P4, and "The ears of this pet are round, which makes me think she is very friendly," said P16. Another key factor that makes pets look cute is coat color: as mentioned by interviewees, warm, bright, heterochromatic, and clean colors signal cuteness in their minds. Other factors that contribute to a pet's cuteness include special shapes, such as a lightning-shaped tail, and whiskers and dimples are definite plus points for being cute. All interviewees treated cuteness as a sign of agreeableness; however, some participants preferred a contrast between personality and appearance (N = 5). Specifically, they liked virtual pets that have cute appearances but are inclined to be neurotic and cold in personality. One interviewee told us, "I like those crazy pets. They're more neurotic. Because animals don't do it like that, you might have difficulty understanding it, and there's a great sense of mystery" (P16).

In summary, we intended to construct a relationship between virtual pets' appearances and personalities, and the user study supported the appearance-personality associations embodied in our design work. We found that all interviewees thought of cuteness as a sign of agreeableness, and we further discovered that people prefer virtual pets with cute looks but neurotic personality traits.

#### 4.4.4 Suggestions for Virtual Pet Type and Interaction Design in Pet Games

Through the open-ended survey questions, we found that 3 participants expected to keep fantasy pets, such as dragons, dinosaurs, and sci-fi pets that cannot be found in real life, while 7 participants expected to keep cats and dogs. We asked why they chose their expected pet type and discovered that some people who had experience petting a certain pet type (N = 1) decided to keep the same type as a virtual pet, whereas other participants did the opposite. We see two main reasons for this. One is that pet keepers recall their memories with pets, which can be both positive and negative, and this shapes their decision about whether to keep the same pet type. The other mainly concerns specific pet personality traits; that is, people can be attached to certain personality traits in pets, such as agreeableness, and consequently regard pets that possess these traits as their first choice.

We also discovered interesting findings about pet animation through the interviews. On the one hand, the animation of the pet combined with sound can convey its personality more directly. As P4 put it: "I think animation is very important for the expression of personality, especially the voice, when it is happy and when it is angry, and when it is threatening, the voice is completely different." On the other hand, the movements of specific body parts, for instance the ears, tail, and legs, are important references for the perception of personality (N = 3). One interviewee told us, "I think sometimes the tail of a dog is more informative, that is, you can tell if he is happy or unhappy by his tail, so you can tell what kind of mood he is in, maybe he is extroverted" (P16).

In addition to pet type, we investigated the expected interactions with virtual pets in the virtual world. Through the survey's open-ended questions and the interviews, we identified the three most popular interactions: talking (9 votes), touching (6 votes), and feeding (6 votes). The main reason participants wanted talking as the primary interaction was that they want to communicate in the same language to understand the pet better. For example: "I want to be able to talk to him and he understands and it can react. I'm more interested in the behavior of his feedback than the content of the conversation" (P25). Other popular interaction types among the participants were treasure hunting and adventure.

![01963e01-1b0b-7f05-8747-0f4718af520b_5_154_595_740_583_0.jpg](images/01963e01-1b0b-7f05-8747-0f4718af520b_5_154_595_740_583_0.jpg)

Figure 4: Distribution of scores on four personality traits from 30 people.

## 5 PET GENERATION

We developed a 3D model generator that automatically creates virtual pets by hybridizing existing models and creating new ones based on the input. The following section explains the generator's process, which includes dividing and recombining models, dyeing them for harmonious colors, and texturing them for unique patterns.

### 5.1 Generation

As our research progressed, the demand for a 3D model generator increased. First, the number of manually created pet models was too small, and more models were needed to establish the generality of the research outcomes. Second, a large number of models was needed to pinpoint which feature, color, or combination most strongly shapes people's impressions of a pet. Moreover, this is also an exploration of generation techniques, since today's outstanding generative models are mostly 2D-based while the demand for 3D models is increasing rapidly. We therefore implemented a generator that creates virtual pets automatically: it takes several 3D pet models as input and outputs new models based on them.

The generator hybridizes existing models and creates new models based on them. Input models are parent models, and generated models are children. Every child model inherits appearance from all of its parents (in the generation, a child model can have more than two parent models). Random mutations are applied in the hybridization process to ensure that every newly generated child model is unique even if children share the same parents, guaranteeing variety and a larger number of generated models.

The generation process, shown in Fig. 5, consists of three main steps: divide and recombination, dyeing, and texturing.

#### 5.1.1 Divide and Recombination

The divide and recombination step gives newly generated models their basic shape.

In this research, the generator takes three cat models as input and divides each into five parts: the head, ears, body, tail, and limbs. The generator loops over all possible combinations to obtain as many new model shapes as possible. When the parts are combined, they are aligned automatically, since the models' sizes and transformations may differ.

After obtaining all possible combinations, the generator rates the results and picks the reasonable ones to send to the next step. For example, a combination of the fattest body with the thinnest tail is rejected because it would not make sense in real life. A simplified sketch of this step follows.

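The sketch below assumes each parent has already been segmented into named parts stored as voxel grids positioned in a shared 31 × 31 × 31 space; the plausibility thresholds are illustrative stand-ins for the generator's actual rating rules.

```python
import itertools
import numpy as np

PARTS = ["head", "ears", "body", "tail", "limbs"]
GRID = 31

def recombine(parents):
    """Yield one child model per combination of parent parts.
    parents: list of dicts mapping part name -> (31,31,31) uint8 grid."""
    for choice in itertools.product(range(len(parents)), repeat=len(PARTS)):
        child = np.zeros((GRID, GRID, GRID), dtype=np.uint8)
        for part, parent_idx in zip(PARTS, choice):
            part_voxels = parents[parent_idx][part]
            # Paste the non-empty voxels of this part into the child.
            child = np.where(part_voxels > 0, part_voxels, child)
        yield choice, child

def plausible(child):
    """Crude stand-in for the rating step: reject children whose
    filled volume is implausibly small or large."""
    filled = np.count_nonzero(child)
    return 300 < filled < 12000

# Three parents x five parts -> 3**5 = 243 candidate children.
```
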
#### 5.1.2 Dyeing

After dividing and recombination, many models with reasonable shapes have been generated. However, we cannot use these models directly because they may have unharmonious colors. The inheritance of color usually follows certain rules: a child is more likely to have a mixture of its parents' colors or to show a transitional color between them. The models generated after recombination, however, inherit all the colors and patterns of their parents' skins; a cat can end up with different colors on its head, body, and limbs with no transition, which makes those models look unreal. Therefore, all models are repainted after recombination, receiving a color closer to one of their parents or in the middle of their parents' colors.

During dyeing, the generator applies random mutation. After the overall color is generated and the palette chosen, the color may mutate several degrees darker or brighter. Occasionally, a larger mutation gives a model a completely new palette. This prevents the models from losing color variety over repeated dyeing.

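A minimal sketch of the dyeing step follows, assuming base coat colors are blended in RGB between two parents and then mutated in brightness, with a rare larger "palette jump"; the probabilities and ranges are illustrative, not the generator's tuned values.

```python
import random
import numpy as np

def dye(parent_a_rgb, parent_b_rgb, rng=random):
    """Return a child base coat color as an (r, g, b) tuple."""
    a = np.array(parent_a_rgb, dtype=float)
    b = np.array(parent_b_rgb, dtype=float)

    # The child color sits somewhere between its parents' colors.
    t = rng.random()
    color = (1 - t) * a + t * b

    # Small mutation: shift brightness a little darker or lighter.
    color *= 1 + rng.uniform(-0.1, 0.1)

    # Rare large mutation: jump to a fresh palette to keep variety.
    if rng.random() < 0.05:
        color = np.array([rng.uniform(60, 255) for _ in range(3)])

    return tuple(int(c) for c in np.clip(color, 0, 255))

print(dye((240, 200, 120), (90, 90, 100)))
```
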
#### 5.1.3 Texturing

Texturing is a random process that gives models unique patterns or textures on their skin.

During texturing, each time a new model is produced by the previous two steps, an additional model called a "mask" is generated. The mask is a randomly generated voxel model of the same size as the generated pet model; every newly generated pet model gets its own mask. The pet and mask models are then placed in the same coordinate frame, and the generator iterates over every voxel of the pet model: wherever the mask and the pet overlap, the color of that voxel is changed.

Different masks give different patterns to a pet model. For example, a mask of many little floating balls makes a pet spotty, and a mask of many vertical planes draws stripes on the pet. Since the mask is randomly generated, every pet can have its own unique pattern, as in the sketch below.

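In the sketch, the pet grid and a same-size boolean mask share one coordinate frame, and every filled pet voxel that the mask covers is recolored. The spot mask stands in for the NCA-generated masks described next; its parameters are illustrative.

```python
import numpy as np

GRID = 31

def apply_mask(pet, mask, pattern_color):
    """Recolor pet voxels wherever the mask overlaps them.
    pet: (31,31,31) uint8 palette-index grid (0 = empty).
    mask: (31,31,31) bool grid of the same size."""
    textured = pet.copy()
    overlap = (pet > 0) & mask
    textured[overlap] = pattern_color
    return textured

def spot_mask(n_spots=12, radius=3, rng=None):
    """Many little floating balls -> a spotty coat."""
    rng = rng or np.random.default_rng()
    centers = rng.integers(0, GRID, size=(n_spots, 3))
    x, y, z = np.indices((GRID, GRID, GRID))
    mask = np.zeros((GRID, GRID, GRID), dtype=bool)
    for cx, cy, cz in centers:
        mask |= (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= radius ** 2
    return mask
```
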
A Neural Cellular Automaton (NCA) is used to generate the masks. The 3D NCA model used in the generator takes voxel 3D models as input and also outputs voxel 3D models. The network was trained to grow simple objects, such as spheres or plates, from a single dot used as a seed. When generating a mask, a random seed (many randomly distributed dots in 3D space) is fed to the pre-trained network; the dots then grow toward the target objects as the network runs. The growing process stops after a random number of steps, and the output model becomes the final mask.

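The sketch below outlines one 3D NCA update step in the spirit of the growing-NCA recipe [55]: a convolutional perception stage, a small per-cell update network, stochastic cell updates, and alive masking. It is a minimal PyTorch sketch with untrained, illustrative layer sizes, not our trained network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NCA3D(nn.Module):
    def __init__(self, channels=16, hidden=64):
        super().__init__()
        # Perception: each cell gathers its 3x3x3 neighborhood.
        self.perceive = nn.Conv3d(channels, channels * 3, 3,
                                  padding=1, bias=False)
        # Per-cell update rule: two 1x1x1 convolutions.
        self.update = nn.Sequential(
            nn.Conv3d(channels * 3, hidden, 1), nn.ReLU(),
            nn.Conv3d(hidden, channels, 1))

    def forward(self, state, fire_rate=0.5):
        # state: (batch, channels, 31, 31, 31); channel 0 = density.
        delta = self.update(self.perceive(state))
        # Stochastic update: each cell fires independently per step.
        fire = (torch.rand_like(state[:, :1]) < fire_rate).float()
        state = state + delta * fire
        # Alive masking: cells far from any living cell are zeroed.
        alive = F.max_pool3d(state[:, :1], 3, stride=1, padding=1) > 0.1
        return state * alive.float()

# Grow a mask from a seed of randomly scattered dots (Sec. 5.1.3).
nca = NCA3D()
state = torch.zeros(1, 16, 31, 31, 31)
state[:, :1] = (torch.rand(1, 1, 31, 31, 31) < 0.01).float()
for _ in range(torch.randint(20, 60, ()).item()):  # random step count
    state = nca(state)
mask = state[0, 0] > 0.1  # final boolean mask
```
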
## 6 STUDY 2

We conducted Study 2 through surveys and semi-structured interviews to gather users' feedback on our generated characters. This study aimed to explore how participants perceive virtual pets' personalities from their appearances, using quantitative and qualitative methods.

![01963e01-1b0b-7f05-8747-0f4718af520b_6_180_163_1440_850_0.jpg](images/01963e01-1b0b-7f05-8747-0f4718af520b_6_180_163_1440_850_0.jpg)

Figure 5: Generation Procedure.

### 6.1 Participants

We employed convenience sampling by posting our survey link through WeChat subscriptions. 57 participants (19 males, 33 females, 3 non-binary, and 2 who preferred not to say), none of whom had taken part in Study 1, voluntarily completed the survey. The participants were aged 18-24 years (N = 39) and 25-34 years (N = 18). 47 participants had previous experience owning real pets, while 41 had experience owning virtual pets. Following the survey, 12 participants (6 males, 5 females, and 1 who preferred not to say) voluntarily participated in the interview session. The interviewees were aged 18-24 years (N = 7) and 25-34 years (N = 5). Of these interviewees, 9 had experience owning real pets, while 10 had experience owning virtual pets.

### 6.2 Measurement

The research employed an online survey consisting of four distinct parts: the IPIP-20 [17], the Motives for Online Gaming Questionnaire (MOGQ) [16], a likability assessment of virtual pets, and demographic data collection. The IPIP-20 is a brief instrument for assessing the FFM personality traits using the International Personality Item Pool (IPIP) resources. This study used the questionnaire to identify participants' personalities, as our adapted pet personality questionnaire in Study 1 shared four personality traits with the Big Five. The primary objective was to correlate participants' personality traits with their perceived personality of virtual pets to investigate the link between the two constructs. Both the IPIP-20 and the MOGQ have demonstrated reliability and validity in measuring people's personalities and gaming behavior [16, 17]. Given that the convenience sampling technique may have reached a broad Chinese population, the study included validated Chinese translations of the IPIP-20 [39] and the MOGQ [89]. In the likability assessment section, participants were presented with ten images, comprising three manually designed virtual pets (M1, M2, M3) and seven computer-generated ones (G1 to G7), as shown in Fig. 6, and were asked to rate their likability on a scale of 1 to 5. The demographic section of the survey collected data on participants' gender, age, education level, and pet-raising experience.

The follow-up interview consisted of question parts and one card-sorting session within 45 minutes. The interview guide included a series of open-ended questions about the virtual pets' personalities, such as "What kind of personality do you think this virtual pet has?" and "What do you like and dislike about this virtual pet's appearance?" (see the appendix for the interview outline). For the card-sorting session, we created a personality mapping template in Figma, which consisted of four areas representing the four personality traits, with pictures of the ten virtual pets as the cards to be sorted.

### 6.3 Procedure

All participants completed a Qualtrics form that contained the four test parts. After analyzing the survey data, we contacted twenty participants with diverse personalities and pet-keeping experiences via email; twelve of them agreed to take part in the interview. We conducted the interviews through Voov Meeting. We informed participants of the study's purpose and procedures and obtained their written consent before the study began. Participants received 100 RMB as compensation.

### 6.4 Quantitative Results

The statistical results showed that appearance features played a significant role in participants' likability ratings of virtual pets, particularly favoring cobby cats with light-colored coats and color decorations. Additionally, we identified significant correlations between certain personality and player-type traits and the likability ratings of virtual pets.

![01963e01-1b0b-7f05-8747-0f4718af520b_7_236_170_1318_449_0.jpg](images/01963e01-1b0b-7f05-8747-0f4718af520b_7_236_170_1318_449_0.jpg)

Figure 6: Generated Virtual Pets and Manually Designed Virtual Pets.

Table 1: Mean and SD of the Likability of Each Virtual Pet

| Virtual Pet ID | Mean | SD   |
|----------------|------|------|
| M1             | 3.72 | 1.18 |
| M2             | 2.82 | 1.30 |
| M3             | 2.96 | 1.35 |
| G1             | 2.68 | 1.36 |
| G2             | 2.70 | 1.27 |
| G3             | 2.46 | 1.38 |
| G4             | 2.67 | 1.26 |
| G5             | 3.70 | 1.30 |
| G6             | 3.35 | 1.79 |
| G7             | 2.53 | 1.27 |

#### 6.4.1 Preference toward Cobby Cats with Light-colored Coats

We conducted a repeated-measures ANOVA to compare the likability ratings of the ten virtual pets. Table 1 presents the descriptive statistics for these ratings, and Fig. 6 illustrates each virtual pet. The ANOVA revealed a significant main effect of virtual pet (F(9, 81) = 12.02, p < .001). To identify which virtual pets differed significantly from each other, we conducted post hoc tests using the Holm correction to adjust for multiple comparisons. The results showed that the likability ratings for G5 (a cobby cat with a white coat and orange coloring on its ears, chest, and tail) and M1 (a cobby cat with an orange coat and brown stripes on the back) were significantly higher than those for all other virtual pets except G6. In turn, the likability rating for G6 was considerably higher than those for G3 and G7, which received the lowest mean ratings. These findings suggest that appearance features significantly influence participants' likability ratings of virtual pets. Specifically, cobby cats with light-colored coats, such as G5, G6, and M1, were rated as more likable than other virtual pets. Additionally, color decorations on the fur, such as the orange coloring on the ears, tail, and chest of G5 and G6 and the brown stripes on the back of M1, also appeared to positively impact participants' likability ratings.

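This analysis can be reproduced with an omnibus repeated-measures ANOVA followed by Holm-corrected pairwise tests. The sketch below uses pingouin; the file and column names are illustrative, not our released data.

```python
import pandas as pd
import pingouin as pg

# Long format: one row per participant x virtual pet (illustrative file).
long = pd.read_csv("likability_long.csv")  # columns: participant, pet, rating

# Omnibus repeated-measures ANOVA over the ten pets.
aov = pg.rm_anova(data=long, dv="rating", within="pet", subject="participant")
print(aov)

# Post hoc pairwise comparisons with Holm-adjusted p-values.
posthoc = pg.pairwise_tests(data=long, dv="rating", within="pet",
                            subject="participant", padjust="holm")
print(posthoc[["A", "B", "p-corr"]])
```
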
#### 6.4.2 Effects of Personality and Player Type on Virtual Pet Likability Ratings

We initially planned to examine the relationship between participants' personalities and player types and their likability ratings of virtual pets using Pearson correlation. Before analysis, we assessed the normality of the 21 variables (10 likability ratings, 5 personality traits, 6 player-type traits) using the Shapiro-Wilk test; the results indicated that more than half of the variables (N = 16) were significantly non-normal (p < 0.05). Given this non-normality, including that of all the likability ratings, we used the non-parametric Spearman's rank correlation instead. The analysis showed that six pairs of variables were significantly correlated: Extraversion and G2 (p = 0.006), Social and G1 (p = 0.002), Social and G6 (p = 0.05), Social and G7 (p = 0.03), Skill Development and G2 (p = 0.008), and Skill Development and G7 (p = 0.005). The correlation coefficient between Extraversion and G2 was negative (Coef = -0.36), while the correlation coefficients for the other five pairs were positive, ranging from 0.26 to 0.41 (see Table 2).

Table 2: Correlation Coefficients Between Participants' Features and Likability of Virtual Pets

| Scale   | Feature           | DV | Coef  | p-value |
|---------|-------------------|----|-------|---------|
| IPIP-20 | Extraversion      | G2 | -0.36 | 0.006*  |
| MOGQ    | Social            | G1 | 0.41  | 0.002*  |
| MOGQ    | Social            | G6 | 0.26  | 0.05*   |
| MOGQ    | Social            | G7 | 0.29  | 0.03*   |
| MOGQ    | Skill Development | G2 | 0.35  | 0.008*  |
| MOGQ    | Skill Development | G7 | 0.37  | 0.005*  |

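The normality screen and rank correlations above map directly onto scipy; the sketch below assumes a wide dataframe with one column per variable, and the file and column names are illustrative.

```python
import pandas as pd
from scipy.stats import shapiro, spearmanr

# Illustrative wide file: 10 likability, 5 IPIP-20, 6 MOGQ columns.
df = pd.read_csv("study2_wide.csv")

# Shapiro-Wilk screen: which variables depart from normality?
non_normal = [col for col in df.columns
              if shapiro(df[col].dropna())[1] < 0.05]
print(f"{len(non_normal)} of {df.shape[1]} variables are non-normal")

# Spearman's rank correlation between a trait and a pet's likability.
rho, p = spearmanr(df["extraversion"], df["likability_G2"])
print(f"Extraversion vs. G2: rho = {rho:.2f}, p = {p:.3f}")
```
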
### 6.5 Qualitative Results of the Interviews

The qualitative results show a strong correlation between virtual pets' physical features and the personalities users attribute to them. Appearance plays a significant role in conveying personality: participants identified specific personality traits based on design elements such as body shape, skin color, facial features, and expressions. Additionally, users' own personalities can influence their preferred pet personalities, explaining why some prefer pets with corresponding personalities. Most participants found the voxel-style pets more relaxing, and easier to enrich with intricate details, than the realistic-style pets. Overall, appearance significantly shapes users' perceptions of virtual pets, and their preferences relate to the emotions evoked by the pet styles.

#### 6.5.1 Correlation between Virtual Pets' Design Elements and Personalities Attributed by Users

Through our study, we discovered that the appearance of our virtual pets was instrumental in conveying their personalities to participants. Most participants (N = 7) identified specific personality traits based on certain design elements, such as the shape of the pets' bodies and their skin color. For instance, participants associated warm, light-colored skin and a fat, round body with agreeableness. One participant remarked that the pets' warm and light appearance made them feel tame and sweet, "like dessert." Seven participants (N = 7) also noted that the pets' facial features and expressions influenced their perceived personalities; for instance, one participant pointed out that the kitten's dark middle face gave it a neurotic look, which they associated with gloominess and neuroticism. Fig. 7 presents a diagram of all the appearance features mentioned in the interviews and the personality traits they were related to. In conclusion, our findings indicate that appearance significantly shapes users' perceptions of virtual pets.

#### 6.5.2 Participants' Personalities Were Related to Their Pet Choices

Our interviews reveal a relationship between participants' personalities and the personalities of their preferred pets. Ten participants had personalities similar to those of their preferred pets (N = 10). Eight of them acknowledged that they preferred pets with personalities similar to their own (N = 8), while the rest made this choice unconsciously (N = 2). According to the analysis, when participants liked their own personalities, they preferred pets with similar personalities (N = 8); conversely, participants who disliked certain traits in themselves did not like pets with those traits (N = 2). For instance, P5 mentioned, "Maybe it's because I'm impulsive, so I don't like (the pets that are impulsive)." Thus, participants' personalities influence their choice of pets.

#### 6.5.3 Comparison of Participants' Preferences for Voxel Style and Realistic Style

Most participants preferred the voxel style to the realistic style (N = 8), while three preferred the realistic style (N = 3) and two accepted both styles (N = 2). Based on the analysis, participants' style preferences related to the emotions evoked by each style. Participants who preferred the voxel style felt that it made them more relaxed (N = 8), while the realistic style made them feel scared and overwhelmed (N = 4). For instance, P11 explained, "I think that this cat's eyes and its overall appearance give me an uncanny valley feeling." Although seven of the pets were machine-generated (G1 to G7), this did not affect participants' style preferences: participants could not distinguish machine-generated pets from manually designed ones (N = 4), and they believed that machine-generated pets had characteristics broadly similar to real pets (N = 4).

## 7 DISCUSSION

We conducted this study to investigate players' perceptions of virtual pets' personalities and to discover the relationship between personality and appearance. In the following sections, we discuss our findings.

### 7.1 Perceiving Virtual Pet Personalities through Style and Representation

In our first study, we investigated how the style (voxel or realistic) and representation (static or animated) of virtual pets influence users' perceptions of their personalities. Our findings revealed that both factors significantly affected users' perception of virtual pet personalities. Specifically, participants perceived voxel-style virtual pets as friendlier, cuter, and more playful than realistic-style ones. This finding is noteworthy because it contrasts with previous research that found similar personality ratings for realistic and cartoon avatars among virtual human characters [69]. We suggest that users' preference for the voxel style may stem from its association with an abstract, cartoonish aesthetic, which enhances the presentation of pets' personalities. Our interview results further support this interpretation: participants expressed greater attachment to voxel-style virtual pets, citing their agreeableness and cuteness and the greater space for imagination that the voxel style offers. In contrast, some participants found that the realistic style made virtual pets feel robotic and uncomfortable, reducing their emotional connection to them, a phenomenon known as the uncanny valley [56]. These findings underscore the significance of considering virtual pet style and representation in design, as they can impact users' perceptions of digital characters' personalities.

### 7.2 The Link between Virtual Pets' Personalities and Appearances

We aimed to explore the relationship between virtual pets' appearance and their perceived personality traits. Building on previous research by Hanna Ekström [7], which suggests that visual cues such as shape and proportion can significantly influence how viewers perceive a character's personality traits, we designed six pet clusters with different visual cues to present various personality traits, as shown in Fig. 1. Study 1 found that participants' perceptions of virtual pets' personalities aligned with our design intentions. Specifically, participants rated cat cluster 1 with a high agreeableness score and cat cluster 2 as more extroverted, consistent with previous research suggesting that round and soft shapes are associated with friendliness and warmth, whereas angular and sharp shapes convey aggression and danger [7].

To further explore this relationship, we conducted Study 2, in which we used machine learning techniques to generate more voxel cat pictures and evaluated their personality presentation, as shown in Fig. 7. Our findings suggest that skin color is the most salient visual cue for describing voxel pets' personality traits. Participants perceived cats with warm-toned skin colors as more friendly and sweet, while markings on a cat's face were associated with neuroticism. Additionally, participants linked a fat, round body shape with agreeableness and extroversion; in contrast, a towering body was associated with impulsiveness, and a small body with neuroticism. Furthermore, we found that other parts of the virtual pets, such as the head, legs, and tail, could also provide visual cues for conveying personality traits.

Overall, our findings suggest that visual cues can significantly influence how virtual pets' personalities are perceived and that different parts of a virtual pet's appearance can provide valuable information for conveying personality traits. These results could be useful for designing virtual pets that accurately convey specific personality traits and enhance user engagement in future virtual pet games.

### 7.3 Design Implications for Virtual Pet Character Design

Our analysis of Studies 1 and 2 leads to several design implications for virtual pet character design. These include considering players' preferences for virtual pets, designing with players' personalities in mind, and developing interaction features in virtual pet games.

#### 7.3.1 Preference on Virtual Pets

The results of our study suggest that players generally prefer virtual pets in the form of cats and dogs, with some expressing interest in fantasy pets like dragons. Interestingly, our research also showed that participants preferred virtual pets with a neurotic personality and a cute appearance, with common traits including warm-toned skin colors, large eyes, and fat body shapes. These findings align with previous research suggesting that people prefer dog features associated with the infant schema [35].

Additionally, our study found that the style and presentation of virtual pets had a significant impact on players' perceptions of their personalities. The majority of participants showed a keen preference for the voxel style, finding its abstract and cute appearance calming and potentially effective in reducing anxiety and depression compared with a realistic style. This result is consistent with previous research suggesting that MR-based interaction with virtual animals can reduce mental stress and induce positive emotions [58]. However, our interview results indicated that the realistic visual design of the virtual animal used in that previous study could potentially cause players to experience the uncanny valley effect, in which a realistic but not quite natural appearance causes unease or discomfort.

![01963e01-1b0b-7f05-8747-0f4718af520b_9_176_163_1434_452_0.jpg](images/01963e01-1b0b-7f05-8747-0f4718af520b_9_176_163_1434_452_0.jpg)

Figure 7: The relationship between the appearances of virtual pets and their perceived personalities. The left panel shows how different body parts of virtual pets relate to personality traits (A, E, I, N), with the frequency of each trait indicated by the corresponding color. The right panel displays a virtual pet image with labeled body parts for reference.

We also identified the most popular interaction schemes with virtual pets, such as talking, touching, and feeding, which can serve as a reference for future virtual pet game design. Additionally, our research showed that players prefer virtual pets to take on the role of companions rather than mentors or enemies, which differs from the suggestions made by previous researchers for non-player character roles in narrative settings.

These findings emphasize the importance of designers considering players' preferences for virtual pets' type, style, and interaction role when designing virtual pets. Designers should aim to create virtual pets with cute and endearing features while also incorporating traits that add depth to their personalities.

+ #### 7.3.2 Incorporating Player's Personality and Virtual Pets' Per- sonality in Designing Virtual Pets
312
+
313
+ While prior studies have examined pets' personalities, finding that owners were more satisfied with cats that were high in agreeableness and low in neuroticism [19], our research delved into the link between pet and owner personalities, focusing on virtual pets. Specifically, our study found that participants preferred virtual pets with personalities similar to their own, as indicated by the quantitative results of study two. Interestingly, our analysis also revealed that individual personality differences might influence how participants perceive and rate virtual pets. For instance, those with extraversion traits showed a preference for a virtual pet with slim legs and thin bodies (i.e., G2) that was more extroverted.
314
+
315
+ Our qualitative findings further supported this, showing that individuals with agreeableness traits in their personalities favored friendly and less aggressive virtual pets. Additionally, our results mirrored those of previous researchers, who noted a positive link between owner dominance and cat dominance, extraversion, and neuroticism [19]. However, unlike their work on natural pets, our study examined this relationship among virtual pets. Overall, our findings suggest that pet personality is an essential factor to consider in designing virtual pets and that personality traits of both the pet and owner may influence user preferences and satisfaction.
316
+
317
+ Our investigation into players' preferences was partly inspired by prior work identifying three user types based on preferences and gameplay styles in VR pet games [48]. In addition, we examined how individual differences in players' in-game behavior influenced their perception and ratings of virtual pets. We found that participants who played the game for social or skill development purposes rated virtual pets with extraversion traits higher, as identified through interviews. These pets were characterized by cold-tone skin colors and small heads and ears (G1). Our findings suggest that considering players' in-game motives is a promising approach for designing virtual pet characters that align with users' preferences and engagement styles.
318
+
319
+ ### 7.4 Controllable and Cost-Effective Model Generation using Recombination and Repainting to Improve User Response Data Quality
320
+
321
+ We analyzed the effectiveness of the machine-generated pet pictures that we used to collect user responses in our experiment. Our generator uses a recombination and repainting approach to produce high-quality results, and the method is relatively inexpensive compared to 3D generative neural networks. Because the available training data are insufficient, we use generative neural networks only minimally, for random texturing, which substantially reduces poor generation results. Relying solely on traditional generation methods for shape and color, however, can lead to less creative and more predictable results, whereas incorporating new colors and textures makes it difficult for people to identify the origin of the model's parts. In our study, we conducted a semi-structured interview in which participants were unaware that a machine had generated the cat pictures, and all participants noted how well the cat's body and other parts blended together.
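+
+ To make the recombination-and-repainting idea concrete, the sketch below illustrates the two steps on 31x31x31 occupancy grids. It is a minimal illustration under our own assumptions (the part library, part alignment, and palette handling are placeholders), not the authors' released implementation:
+
+ ```python
+ import random
+ import numpy as np
+
+ GRID = (31, 31, 31)  # voxel resolution used for the pet models
+
+ # Placeholder part library: in the real pipeline each body part would have
+ # several variants cut from hand-made voxel models, pre-aligned to one grid.
+ PART_LIBRARY = {
+     part: [np.random.randint(0, 2, GRID, dtype=np.uint8) for _ in range(3)]
+     for part in ("head", "body", "legs", "tail")
+ }
+
+ def recombine(library):
+     """Pick one variant of each body part and merge the occupancy grids."""
+     pet = np.zeros(GRID, dtype=np.uint8)
+     for variants in library.values():
+         pet |= random.choice(variants)
+     return pet
+
+ def repaint(pet, palette_size=8):
+     """Repaint occupied voxels with indices into a harmonious palette,
+     adding per-voxel noise as a stand-in for random texturing."""
+     base = random.randrange(palette_size)
+     noise = np.random.randint(0, 2, GRID)
+     colored = np.zeros(GRID, dtype=np.uint8)
+     colored[pet > 0] = 1 + (base + noise[pet > 0]) % palette_size
+     return colored  # 0 = empty; 1..palette_size = palette color indices
+
+ new_pet = repaint(recombine(PART_LIBRARY))
+ ```
+
+ Cutting parts from a handful of hand-made models while randomizing only colors and textures keeps the generator controllable and cheap, which is the trade-off described above.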
322
+
323
+ ### 7.5 Limitation and Future Work
324
+
325
+ This study has several limitations that we need to address in future work. Firstly, the limited training sample we used to generate cat pictures with machine learning techniques may have limited the diversity of the voxel pets' appearances and personality traits. To overcome this limitation, we plan to create and label a more diverse set of pet body parts with different visual cues to better capture a broader range of personality traits.
326
+
327
+ Secondly, our design only focused on the static voxel style. It lacked animations and sound, which may have hindered precise perception of the pets' personalities. To improve the accuracy of personality perception and enhance the user experience, we plan to add animated clips by rigging the voxel pet models and to include sounds related to pet behaviors when designing virtual pets.
328
+
329
+ Furthermore, we plan to use our current voxel pet characters as artificial companions and integrate their personality traits to design an interactive virtual pet game. The game aims to reduce anxiety and stress levels as an intervention tool. It provides users with a fun and engaging way to interact with virtual pets, potentially improving their mental health and well-being.
330
+
331
+ In conclusion, although our study provides valuable insights into the link between virtual pets' appearances and their perceived personality traits, several limitations must be addressed in future work. By enriching our sample for generating voxel pets, incorporating animations and sound, and developing an interactive virtual pet game, we hope to provide users with a more authentic and engaging virtual pet experience.
332
+
333
+ ## 8 CONCLUSION
334
+
335
+ In conclusion, our study aimed to address the gaps in current research on virtual pets and their potential for promoting mental health and enhancing skill development in individuals who cannot keep real pets. We focused on creating virtual pets with personality traits and exploring how players perceive their personalities. Our research found that appearance variations affect users' perceptions of virtual pet personalities, and that players prefer virtual pets like cats and dogs with neurotic personalities and cute appearances. We also developed a novel method for game character design that combines traditional methods with machine learning techniques. Our study provides several design implications for virtual pet character design and highlights the potential of using voxel pets as virtual companions to enhance the mental well-being of young individuals by reducing anxiety levels through interactive engagement. Overall, our study contributes to the understanding of the factors that shape personalities in different species and of how we can design artificial companions that mimic and respond to these traits.
336
+
337
+ ## REFERENCES
338
+
339
+ [1] Minecraft: the official website of Minecraft. https://www.minecraft.net/en-us. Accessed: 2022-09-30.
340
+
341
+ [2] Pokemon Go: the official website of Pokemon Go. https://www.pokemon.com/us. Accessed: 2022-09-30.
342
+
343
+ [3] The Sandbox: the official website of The Sandbox. https://www.sandbox.game/en/. Accessed: 2022-09-30.
344
+
345
+ [4] Tamagotchi: the official website of Tamagotchi. https://tamagotchi.com/. Accessed: 2022-09-30.
346
+
347
+ [5] Top 10 most popular pets in the world, 2021.
348
+
349
+ [6] E. Adams. Fundamentals of game design. 2006.
350
+
351
+ [7] H. af Ekström. How can a character's personality be conveyed visually, through shape. 2013.
352
+
353
+ [8] E. L. Altschuler. Play with online virtual pets as a method to improve mirror neuron and real world functioning in autistic children. Medical Hypotheses, 70(4):748-749, 2008.
354
+
355
+ [9] R. Aminuddin, A. J. C. Sharkey, and L. Levita. Interaction with the paro robot may reduce psychophysiological stress responses. 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 593-594, 2016.
356
+
357
+ [10] R. A. Bartle. Designing virtual worlds. 2003.
358
+
359
+ [11] D. S. Bylieva, N. I. Almazova, V. V. Lobatyuk, and A. V. Rubtsova. Virtual pet: Trends of development. 2019.
360
+
361
+ [12] Z.-H. Chen, C.-Y. Chou, Y.-C. Deng, and T. W. Chan. Animal companions as motivators for teammates helping each other learn. In CSCL, 2005.
362
+
363
+ [13] C. Clavel, C. Faur, J.-C. Martin, S. Pesty, and D. Duhaut. Artificial companions with personality and social role. 2013 IEEE Symposium on Computational Intelligence for Creativity and Affective Computing (CICAC), pp. 87-95, 2013.
366
+
367
+ [14] M. Courgeon, C. Hoareau, and D. Duhaut. Interaction with artificial companions: Presentation of an exploratory study. In International Conference on Software Reuse, 2016.
368
+
369
+ [15] M. M. Delgado, J. D. Munera, and G. M. Reevy. Human perceptions of coat color as an indicator of domestic cat personality. Anthrozoös, 25:427-440, 2012.
370
+
371
+ [16] Z. Demetrovics, R. Urbán, K. Nagygyörgy, J. Farkas, D. Zilahy, B. Mervó, A. Reindl, C. Ágoston, A. Kertész, and E. Harmath. Why do you play? the development of the motives for online gaming questionnaire (mogq). Behavior research methods, 43:814-825, 2011.
372
+
373
+ [17] M. B. Donnellan, F. L. Oswald, B. M. Baird, and R. E. Lucas. The mini-ipip scales: tiny-yet-effective measures of the big five factors of personality. Psychological assessment, 18(2):192, 2006.
374
+
375
+ [18] J. M. Dydynski and N. Mäekivi. Impacts of cartoon animals on human-alloanimal relations. Anthrozoös, 34:753-766, 2021.
376
+
377
+ [19] R. Evans, M. Lyons, G. Brewer, and S. Tucci. The purrfect match: The influence of personality on owner satisfaction with their domestic cat (felis silvestris catus). Personality and Individual Differences, 2019.
378
+
379
+ [20] J. L. Fratkin, D. L. Sinn, E. A. Patall, and S. D. Gosling. Personality consistency in dogs: A meta-analysis. PLoS ONE, 8, 2013.
380
+
381
+ [21] S. Gächter, C. Starmer, and F. Tufano. Measuring the closeness of relationships: a comprehensive evaluation of the 'Inclusion of the Other in the Self' scale. PLoS ONE, 10(6):e0129478, 2015.
382
+
383
+ [22] C. P. Gallagher, R. Niewiadomski, M. Bruijnes, G. Huisman, and M. Mancini. Eating with an artificial commensal companion. Companion Publication of the 2020 International Conference on Multimodal Interaction, 2020.
384
+
385
+ [23] K. Genova, F. Cole, A. Sud, A. Sarna, and T. Funkhouser. Variational autoencoders for deforming 3D mesh models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4458-4467, 2018.
386
+
387
+ [24] D. Golan-Shemesh, T. Lotan, Y. Zadorozhnaya, A. Zamansky, T. Brilant, K. Ablamunits, and D. van der Linden. Exploring digitalization of animal-assisted reading. Eighth International Conference on Animal-Computer Interaction, 2021.
388
+
389
+ [25] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139-144, 2020.
390
+
391
+ [26] S. D. Gosling and O. P. John. Personality dimensions in nonhuman animals. Current Directions in Psychological Science, 8:69-75, 1999.
392
+
393
+ [27] S. D. Gosling, V. S. Y. Kwan, and O. P. John. A dog's got personality: a cross-species comparative approach to personality judgments in dogs and humans. Journal of Personality and Social Psychology, 85(6):1161-1169, 2003.
394
+
395
+ [28] G. Grinstein, D. Keim, and M. Ward. Information visualization, visual data mining, and its application to drug design. IEEE Visualization Course #1 Notes, October 2002.
396
+
397
+ [29] B. L. Hart and L. A. H. (Geyer). Breed and gender differences in dog behavior. 2016.
398
+
399
+ [30] B. L. Hart and L. A. Hart. Breed and gender differences in dog behavior. The Domestic Dog: Its Evolution, Behavior and Interactions with People,, pp. 119-132, 2016.
400
+
401
+ [31] S. Hartwell. Conformation charts. http://messybeast.com/conformation-charts.htm. Accessed: 2023-03-28.
404
+
405
+ [32] S. Hartwell. What is conformation and does it matter? part 1, 2018.
406
+
407
+ [33] R. Hayashi. Influence of users' personality traits on impression of robots' reactions to approach from them. Transactions of the Society of Instrument and Control Engineers, 2020.
408
+
409
+ [34] R. Hayashi and S. Kato. Psychological effects of physical embodiment in artificial pet therapy. Artificial Life and Robotics, 22:58-63, 2017.
410
+
411
+ [35] J. Hecht and A. Horowitz. Seeing dogs: Human preferences for dog physical attributes. Anthrozoös, 28:153-163, 2015.
412
+
413
+ [36] J. Ho, X. Chen, A. Srinivas, Y. Duan, and P. Abbeel. Improved denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, pp. 680-689, 2019.
414
+
415
+ [37] Fédération Cynologique Internationale. FCI breeds nomenclature. https://www.fci.be/en/Nomenclature/. Accessed: 2022-05-30.
416
+
417
+ [38] P. Isenberg, F. Heimerl, S. Koch, T. Isenberg, P. Xu, C. Stolper, M. Sedlmair, J. Chen, T. Möller, and J. Stasko. vispubdata.org: A metadata collection about IEEE Visualization (VIS) publications. IEEE Transactions on Visualization and Computer Graphics, 23, 2017. doi: 10.1109/TVCG.2016.2615308
420
+
421
+ [39] S. Jhu. The 50-item ipip representation of the goldberg's markers for the big-five factor structure: Development of the traditional chinese version. National Dong Hwa University: Hualien, Taiwan, 2016.
422
+
423
+ [40] K. Johnsen, S. J. G. Ahn, J. Moore, S. Brown, T. P. Robertson, A. Marable, and A. Basu. Mixed reality virtual pets to reduce childhood obesity. IEEE Transactions on Visualization and Computer Graphics, 20:523-530, 2014.
424
+
425
+ [41] G. Kindlmann. Semi-automatic generation of transfer functions for direct volume rendering. Master's thesis, Cornell University, USA, 1999.
426
+
427
+ [42] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
428
+
429
+ [43] Kitware, Inc. The Visualization Toolkit User's Guide, January 2003.
430
+
431
+ [44] C. M. Lee, J. J. Ryan, and D. S. Kreiner. Personality in domestic cats. Psychological Reports, 100:27-29, 2007.
432
+
433
+ [45] H. R. Lee, W. R. Panont, B. Plattenburg, J. de la Croix, D. Patharacha-lam, and G. D. Abowd. Asthmon: empowering asthmatic children's self-management with a virtual pet. CHI '10 Extended Abstracts on Human Factors in Computing Systems, 2010.
434
+
435
+ [46] M. Levoy. Display of Surfaces from Volume Data. PhD thesis, University of North Carolina at Chapel Hill, USA, 1989.
436
+
437
+ [47] J. M. Ley, P. McGreevy, and P. C. Bennett. Inter-rater and test-retest reliability of the monash canine personality questionnaire-revised (mcpq-r). Applied Animal Behaviour Science, 119(1-2):85-90, 2009.
438
+
439
+ [48] C. Lin, T. Faas, L. S. Dombrowski, and E. L. Brady. Beyond cute: exploring user types and design opportunities of virtual reality pet games. Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology, 2017.
440
+
441
+ [49] C. A. Litchfield, G. Quinton, H. Tindle, B. Chiera, K. H. Kikillus, and P. Roetman. The 'feline five': An exploration of personality in pet cats (felis catus). PLoS One, 12(8):e0183455, 2017.
442
+
443
+ [50] S. E. Lofgren, P. Wiener, S. C. Blott, E. Sánchez-Molano, J. A. Wool-liams, D. N. Clements, and M. J. Haskell. Management and personality in labrador retriever dogs. Applied Animal Behaviour Science, 156:44- 53, 2014.
444
+
445
+ [51] W. E. Lorensen and H. E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. SIGGRAPH Computer Graphics, 21(4):163-169, Aug. 1987. doi: 10.1145/37402.37422
446
+
447
+ [52] N. Max. Optical models for direct volume rendering. IEEE Transactions on Visualization and Computer Graphics, 1(2):99-108, June 1995. doi: 10.1109/2945.468400
448
+
449
+ [53] R. McCrae and O. P. John. An introduction to the five-factor model and its applications. Journal of personality, 60 2:175-215, 1992.
450
+
451
+ [54] D. A. Min, Y. Kim, S.-A. Jang, K. Y. Kim, S. Jung, and J.-H. Lee. Pretty pelvis: A virtual pet application that breaks sedentary time by promoting gestural interaction. Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, 2015.
452
+
453
+ [55] A. Mordvintsev, E. Randazzo, E. Niklasson, and M. Levin. Growing neural cellular automata. Distill, 2020. https://distill.pub/2020/growing-ca. doi: 10.23915/distill.00023
454
+
455
+ [56] M. Mori. The uncanny valley. The Monster Theory Reader, 2020.
456
+
457
+ [57] F. B. Morton, L. M. Robinson, S. Brando, and A. Weiss. Personality structure in bottlenose dolphins (tursiops truncatus). Journal of comparative psychology, 2021.
458
+
459
+ [58] H. Na, S. Park, and S.-Y. Dong. Mixed reality-based interaction between human and virtual cat for mental stress management. Sensors (Basel, Switzerland), 22, 2022.
460
+
461
+ [59] K. Nakajima and M. Niitsuma. Effects of space and scenery on virtual pet-assisted activity. Proceedings of the 8th International Conference on Human-Agent Interaction, 2020.
462
+
463
+ [60] M. Nalin, I. Baroni, A. Sanna, and C. Pozzi. Robotic companion for diabetic children: emotional and educational support to diabetic children, through an interactive robot. In International Conference on Interaction Design and Children, 2012.
466
+
467
+ [61] G. M. Nielson and B. Hamann. The asymptotic decider: Removing the ambiguity in marching cubes. In Proc. Visualization, pp. 83-91. IEEE Computer Society, Los Alamitos, 1991. doi: 10.1109/VISUAL.1991.175782
470
+
471
+ [62] The Governing Council of the Cat Fancy. Cat breeds. https://www.gccfcats.org/getting-a-cat/choosing/cat-breeds/. Accessed: 2023-03-28.
472
+
473
+ [63] R. M. Packer, D. G. O'Neill, F. Fletcher, and M. J. Farnworth. Come for the looks, stay for the personality? a mixed methods investigation of reacquisition and owner recommendation of bulldogs, french bulldogs and pugs. PLoS ONE, 15, 2020.
474
+
475
+ [64] A. M. Perkins. The benefits of pet therapy. Nursing made Incredibly Easy, 18:5-8, 2020.
476
+
477
+ [65] B. Poole, A. Jain, J. T. Barron, and B. Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. arXiv preprint arXiv:2209.14988, 2022.
478
+
479
+ [66] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
480
+
481
+ [67] J.-L. Rault. Pets in the digital age: Live, robot, or virtual? Frontiers in Veterinary Science, 2, 2015.
482
+
483
+ [68] M. Roy and J. S. Nicholas. Do dogs resemble their owners? Psychological Science, 15:361-363, 2004.
484
+
485
+ [69] K. Ruhland, K. Zibrek, and R. McDonnell. Perception of personality through eye gaze of realistic and cartoon models. Proceedings of the ACM SIGGRAPH Symposium on Applied Perception, 2015.
486
+
487
+ [70] M. Salonen, S. Mikkola, E. Hakanen, S. Sulkama, J. Puurunen, and H. Lohi. Reliability and validity of a dog personality and unwanted behavior survey. Animals, 11(5):1234, 2021.
488
+
489
+ [71] M. Salonen, K. Vapalahti, K. Tiira, A. Mäki-Tanila, and H. Lohi. Breed differences of heritable behaviour traits in cats. Scientific Reports, 9, 2019.
490
+
491
+ [72] M. Salonen, K. Vapalahti, K. Tiira, A. Mäki-Tanila, and H. Lohi. Breed differences of heritable behaviour traits in cats. Scientific reports, 9(1):1-10, 2019.
492
+
493
+ [73] V. Schwind, K. Leicht, S. Jäger, K. Wolf, and N. Henze. Is there an uncanny valley of virtual animals? a quantitative and qualitative investigation. International Journal of Human-Computer Studies, 111:49-61, 2018.
494
+
495
+ [74] K. Scoresby, E. B. Strand, Z. Ng, K. C. Brown, C. R. Stilz, K. Strobel, C. S. Barroso, and M. J. Souza. Pet ownership and quality of life: A systematic review of the literature. Veterinary Sciences, 8, 2021.
496
+
497
+ [75] A. Sierra Rativa, M. Postma, and M. Van Zaanen. The influence of game character appearance on empathy and immersion: Virtual non-robotic versus robotic animals. Simulation & Gaming, 51(5):685-711, 2020.
498
+
499
+ [76] C. Solarski. Drawing basics and video game art: Classic to cutting-edge art techniques for winning video game design. Watson-Guptill, 2012.
500
+
501
+ [77] J. Song, C. Meng, and S. Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020.
502
+
503
+ [78] S. Talbot, R. Freire, and S. Wassens. Identifying behavioural traits and underlying personality dimensions in domestic ferrets (mustela putorius furo). Animals : an Open Access Journal from MDPI, 11, 2021.
504
+
505
+ [79] A. Tapus, C. Tapus, and M. J. Matarić. User-robot personality matching and assistive robot behavior adaptation for post-stroke rehabilitation therapy. Intelligent Service Robotics, 1:169-183, 2008.
506
+
507
+ [80] S. Thomas, Y. Ferstl, R. McDonnell, and C. Ennis. Investigating how speech and animation realism influence the perceived personality of virtual characters and agents. 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 11-20, 2022.
508
+
509
+ [81] S. Thomas, Y. Ferstl, R. McDonnell, and C. Ennis. Investigating how speech and animation realism influence the perceived personality of virtual characters and agents. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 11-20. IEEE, 2022.
510
+
511
+ [82] P. Thorn, T. J. Howell, C. Brown, and P. C. Bennett. The canine cuteness effect: Owner-perceived cuteness as a predictor of human-dog relationship quality. Anthrozoös, 28:569-585, 2015.
512
+
513
+ [83] Y. Wang, Z. Li, R. A. Khot, and F. Mueller. Toward understanding playful beverage-based gustosonic experiences. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 6:1-23, 2022.
514
+
515
+ [84] C. Ware. Information Visualization: Perception for Design. Morgan Kaufmann Publishers Inc., San Francisco, 2nd ed., 2004. doi: 10.1016/B978-155860819-1/50001-7
516
+
517
+ [85] R. M. Waters, B. B. Bowers, and G. M. Burghardt. Personality and individuality in reptile behavior. 2017.
518
+
519
+ [86] E. Weiss, K. A. Miller, H. Mohan-Gibbons, and C. Vela. Why did you choose this pet?: Adopters and pet selection preferences in five animal shelters in the united states. Animals : an Open Access Journal from MDPI, 2:144-159, 2012.
520
+
521
+ [87] J. Wilhelmy, J. A. Serpell, D. C. Brown, and C. Siracusa. Behavioral associations with breed, coat type, and eye color in single-breed cats. Journal of Veterinary Behavior-clinical Applications and Research, 13:80-87, 2016.
522
+
523
+ [88] B. G. Witmer and M. J. Singer. Measuring presence in virtual environments: A presence questionnaire. Presence, 7(3):225-240, 1998.
524
+
525
+ [89] A. M. Wu, M. H. Lai, S. Yu, J. T. Lau, and M.-W. Lei. Motives for online gaming questionnaire: Its psychometric properties and correlation with internet gaming disorder symptoms among chinese people. Journal of Behavioral Addictions, 6(1):11-20, 2016.
526
+
527
+ [90] J. Wu, Y. Dai, Y. Yuan, and J. Li. Ui/ux design methodology of portable customizable simulated pet system considering human mental health. 2022 IEEE 4th Global Conference on Life Sciences and Technologies (LifeTech), pp. 41-45, 2022.
528
+
529
+ [91] J. Wu, C. Zhang, T. Xue, W. T. Freeman, and J. B. Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. arXiv preprint arXiv:1610.07584, 2016.
530
+
531
+ [92] G. Wyvill, C. McPheeters, and B. Wyvill. Data structure for soft objects. The Visual Computer, 2(4):227-234, Aug. 1986. doi: 10. 1007/BF01900346
532
+
533
+ [93] R. M. Yerkes. The life history and personality of the chimpanzee. The American Naturalist, 73:97-112, 1939.
534
+
535
+ ## A SEMI-STRUCTURED INTERVIEW SCRIPT
536
+
537
+ ### A.1 Semi-structured Interview of Study 1
538
+
539
+ Thank you for filling out our questionnaire earlier. We'd like to briefly talk with you about your past pet ownership experience and your thoughts on the personality and appearance of virtual pets. This interview is mainly divided into four parts. We will record the interview process, and some operations will be recorded on the screen. If you agree, let's continue.
540
+
541
+ First, let's recall how you filled out the questionnaire before. Here are the pictures and animations of pets you saw before: Pet Personality Comparison: (pictures for recall)
542
+
543
+ #### A.1.1 Pet Personality
544
+
545
+ 1. We found that you rated this attribute as XX in XX, which suggests more XX. Do you think so?
546
+
547
+ 2. What are the specific reasons that make you feel that way?
548
+
549
+ 3. Regarding virtual pet cats/dogs, do you think animation is a better way to express the pet's personality than static images? If so, how is this reflected?
550
+
551
+ 4. Do you think virtual pet images (in the form of static and dynamic voxels) are better or worse at expressing a pet's personality than real pet photos?
552
+
553
+ 5. You're looking at a picture of a cat/dog, but if you have kept a cat/dog in real life, does that make a difference to your judgment?
554
+
555
+ - If so, where are the main areas affected?
556
+
557
+ - If not, why not?
558
+
559
+ #### A.1.2 Pet Type and Interaction
560
+
561
+ 1. We found out that you used to have XXX, but the pet you want to have is still/is XXX. Can you tell us the reason?
562
+
563
+ 2. The virtual pet you are looking forward to is XXX. Can you tell me why?
564
+
565
+ 3. What characteristics do you think these virtual pets need? In appearance and personality?
566
+
567
+ 4. You mentioned that if you can keep a virtual pet, the most important way to interact with him is XXX. Can you explain why you like this kind of interaction?
568
+
569
+ 5. What do you find most appealing about this type of interaction?
570
+
571
+ 6. Can you talk about the kind of interaction you want?
572
+
573
+ 7. In addition, we also found that other ways of interacting are popular. If you could choose the top three, which would you choose, and how would you rank them?
574
+
575
+ #### A.1.3 Pet Appearance
576
+
577
+ 1. You describe your previous pet's personality as XXX. Do you think its physical characteristics are related?
578
+
579
+ - If so, what specific characteristics (e.g., body shape, limbs, tail position, movement habits) indicate this personality trait?
580
+
581
+ - If you don't see a connection, can you explain why?
582
+
583
+ 2. Do you think there is a contrast between its appearance and its actual personality? Is the contrast large?
584
+
585
+ 3. We found that many people like pets that look cute. Do you agree?
586
+
587
+ - If so, why do you like pets that look cute?
588
+
589
+ - If not, why not?
590
+
591
+ 4. What characteristics do you consider cute?
592
+
593
+ 5. We've prepared some pictures and need your help choosing the ones you think are cute. We'd also like you to rank your picks by cuteness.
594
+
595
+ #### A.1.4 Summary and Advice for Pet Design
596
+
597
+ 1. Statistics show that the highest rating for our three pet pictures is XXX. Can you explain why?
598
+
599
+ 2. Do you think our virtual pet characters can convey the pet's personality traits?
600
+
601
+ - If so, can you tell me how you feel about it? (Color, movement, expression?) Which virtual pet do you prefer to your previous pets? Why?
602
+
603
+ - If not, could you tell us the specific reason?
604
+
605
+ 3. What do you think of our virtual pet characters?
606
+
607
+ 4. Any suggestions on how to improve the design of pet characters or animations?
608
+
609
+ ### A.2 Semi-structured Interview of Study 2
610
+
611
+ Thank you for participating in this interview. We would like to chat briefly with you about your thoughts on the pet images we designed, and to gain a deeper understanding of some of your suggestions for anxiety-relieving applications. The interview will be divided into five parts and take about 60 to 90 minutes. We will audio- and video-record the interview. If you agree, let's continue.
612
+
613
+ 1. We found that in your pet evaluation, XXX received the highest score. Can you explain why? After classifying the pets' personalities, we found that the pet you rated highest had an XX personality. Can you explain why?
614
+
615
+ 2. We have prepared some pictures of voxel pets. Please place them in the area you think fits best according to the images you see. We found XXX; could you please explain why?
616
+
617
+ 3. What do you think is the difference between XX?
618
+
619
+ #### A.2.1 Design
620
+
621
+ 1. Comparing the art styles of the real and voxel pets, which do you prefer? Can you expand on that?
622
+
623
+ 2. Comparing the images of the real and voxel pets, which do you prefer? Can you expand on that?
624
+
625
+ 3. What is your favorite part of the Voxel pet's look?
626
+
627
+ - What's your least favorite part?
628
+
629
+ - Can you expand on that?
630
+
631
+ 4. Compared with the real and voxel pets, which image do you find more relaxing?
632
+
633
+ 5. Comparing the real pet and voxel pet scenes, which do you prefer? Can you expand on that?
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/1bxh-dKdrn4/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,379 @@
1
+ § “I KEEP SWEET CATS IN REAL LIFE, BUT WHAT I NEED IN THE VIRTUAL WORLD IS A NEUROTIC DRAGON": VIRTUAL PET DESIGNS WITH PERSONALITY PATTERNS
2
+
3
+ Hongni Ye*
4
+
5
+ Duke Kunshan University
6
+
7
+ Ruoxin You†
8
+
9
+ University College London
+
+ Xin Yi
10
+
11
+ Tsinghua University
12
+
13
+ Kaiyuan Lou‡
14
+
15
+ Duke Kunshan University
+
+ Xin Tong
+
+ Duke Kunshan University
16
+
17
+ Yili Wen§
18
+
19
+ Duke Kunshan University
20
+
21
+ § ABSTRACT
22
+
23
+ Virtual pets serve as companions and support meaningful in-game narratives in the metaverse. Players have unique personalities and personality preferences for their pets. However, the design of virtual pets often relies on designers' individual experiences without considering the virtual pets' personalities. We designed the virtual pets' visual appearances in voxel format by following design guidelines derived from the Five Factor Model (FFM). We then investigated people's perceptions of virtual pets' personalities and appearances through two user studies. Our findings suggest that voxel-style virtual pets represented agreeableness better than realistic pet pictures. Additionally, users prefer virtual pets (voxel-style pets generated with machine-learning techniques) that share personalities similar to their own. The study's results provide valuable insights for game designers and researchers for future pet game design and for understanding how people perceive virtual pets based on their appearance and behavior.
24
+
25
+ Index Terms: H.5.2 [User Interfaces]: User-centered design-Style guides; H.5.1 [Multimedia Information Systems]: Artificial, augmented, and virtual realities-Evaluation/methodology
26
+
27
+ § 1 INTRODUCTION
28
+
29
+ As technology advances, the demand for artificial companions [14] with diverse personalities in different social roles increases [13]. Virtual pets, in particular, have been gaining immense popularity in the digital games field. They serve as a source of entertainment [11, 48, 67] and present the potential to promote mental health [9, 58, 90] and enhance children's skill development [8, 12, 24, 45]. Virtual pets offer a unique opportunity for individuals who cannot keep real pets for various reasons, such as allergies or lack of resources. Moreover, research has suggested that interacting with virtual pets can lead to positive emotional outcomes, such as reduced anxiety [33].
30
+
31
+ There are various styles for designing virtual pet characters; the most popular are the realistic style and the cartoon style. The realistic style, in high fidelity, provides a lifelike appearance [58], while the cartoon style, including voxel and two-dimensional sketch, creates a more cartoonish look. For example, Tamagotchi [4] and Pokemon Go [2] use sketch and cartoon 3D styles, respectively, and 3D voxel-style virtual pets are present in games like Minecraft [1] and Sandbox [3]. To ensure efficient character modeling and avoid negative impacts on people's perceptions of virtual characters due to low aesthetic qualities and intermediate rendering realism [73], we designed our virtual pet characters in the voxel style. With the explosive growth of machine learning and the refinement of generative networks, neural networks such as generative adversarial networks (GANs) and diffusion models are widely used to replace the manual production of pictures and models in industry [25, 65]. However, generative models are more common for 2D pictures and 3D facial construction than for virtual pets, making pet models scarce and expensive.
32
+
33
+ Virtual pet games lack personality diversity and fail to provide an experience similar to raising a real pet, as the pets' personalities are unrelated to their appearance. Previous studies have explored personality differences in real pets, such as cat and dog breeds [15, 50, 71, 87]. Although previous studies have shown that personality differences exist in non-human animals [27], research on cross-species personality traits in pets is lacking. This gap in the literature limits our understanding of the factors that contribute to the development of personalities in different species and how we can design virtual characters that mimic and respond to these traits.
34
+
35
+ To address the gaps in current research, our study focuses on creating virtual pets that display variations in personality and exploring how players perceive their behaviors and appearance. We aim to answer three primary research questions: (1) How do users perceive the personalities of virtual pets in different styles and representations? (2) How do virtual pets' personalities relate to their appearance, and how can we design virtual pets involving their personalities? (3) What are individuals' perceptions of the appearance and personality of virtual pets generated through machine learning techniques?
36
+
37
+ Through two studies, we found that style and presentation significantly affected users' perceptions of virtual pet personalities. Additionally, we found that visual cues, such as skin color and body shape, can significantly influence how virtual pets' personalities are perceived. In general, our contributions are threefold: (1) In the game design domain, we applied pets' personality variations to virtual pet design by following the FFM, and we evaluated our designed virtual pet characters with pre-defined personality traits through a user study. (2) We combined traditional methods and machine learning techniques to generate the appearance of virtual pets. We embedded a Neural Cellular Automaton (NCA) [55] into our generation process to increase the diversity of generated models, which provides a novel method for game character design. (3) Our study examined the potential of using voxel pets' appearances as virtual companions to enhance the mental well-being of young individuals by reducing anxiety levels through interactive engagement with virtual pets.
38
+
39
+ § 2 RELATED WORK
40
+
41
+ Below, we describe design examples of virtual pets and machine learning techniques for generating their models.
42
+
43
+ § 2.1 VIRTUAL PET DESIGN AND GENERATION
44
+
45
+ Pets serve an important function as human companions, and a strong and healthy human-animal relationship benefits both parties [82]. As suggested by previous researchers, appearance is an important consideration in pet owners' decision-making [86], and one tenet of folk psychology is that people tend to select pet dogs that have a similar appearance to themselves [68]. In the artificial animal design field, a study on human-alloanimal relations highlighted that cartoon animals can make people want to be close to the depicted animal because their appearances are designed to be approachable, cuddly, friendly, and fun [18]. Thus, appearance design deserves our attention when conducting virtual pet design work.
46
+
47
+ *e-mail: hongni.ye@mail.polimi.it
48
+
49
+ ${}^{ \dagger }$ e-mail: echoyou67@outlook.com
50
+
51
+ ${}^{ \ddagger }$ e-mail: midstream.lou@gmail.com
52
+
53
+ §e-mail: yili.wen@dukekunshan.edu.cn
54
+
55
+ e-mail: yixin@tsinghua.edu.cn
56
+
57
+ e-mail: xt43@duke.edu
58
+
59
+ In the Virtual Reality (VR) pet game domain, to promote a better game experience and immersion, pets' appearance design has an intimate connection with user preferences. As introduced by Chaoran Lin et al., there are three user types based on users' motivations and expectations: (1) pet-keepers, (2) animal teammates, and (3) cool hunters [48]. This user type model inspired us to conduct case studies of virtual pets' appearance according to their target user types, and we proposed three virtual pet types: (1) natural, (2) intelligent, and (3) fantasy. We conducted case studies based on these types and summarized the results in cards (see the case study results in the appendix). Through the case study, we discovered that most cases concern natural pets; taking the accessibility of pets' personality knowledge into consideration, we decided to focus on natural pets as our target pet type in this study.
60
+
61
+ Machine learning techniques have gained widespread adoption for image and model generation in recent years. Neural networks, including VAEs [42], DCGANs [66], and diffusion models [36, 77], have demonstrated impressive image generation capabilities. These models have also been extended to 3D model generation by increasing the dimension of the data [65, 91]. Generating 3D meshes is another popular approach to 3D generation, which can also be accomplished with neural networks such as VAEs [23]. Mesh-based models offer an alternative means of 3D representation that can be more efficient and suitable for certain applications. However, all of these generation approaches require a large amount of high-quality training data, which restricts their usage in virtual pet generation. We therefore decided to use the Neural Cellular Automaton (NCA) for generation [55]. By combining the NCA with a traditional generation process that recombines different parts of models, we can obtain plenty of high-quality models from little training data.
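+
+ For reference, the core of an NCA is a small learned update rule applied repeatedly, and stochastically, to every cell of a state grid. The PyTorch sketch below is our minimal 3D rendition of the idea from [55]; the channel counts, layer sizes, and fire rate are illustrative assumptions, not the configuration trained for this work:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class TinyNCA(nn.Module):
+     """Minimal 3D Neural Cellular Automaton update step (illustrative)."""
+     def __init__(self, channels=16):
+         super().__init__()
+         # Perception: each cell gathers information from its 3x3x3 neighborhood.
+         self.perceive = nn.Conv3d(channels, channels * 3, 3, padding=1)
+         # Update rule: a small per-cell network mapping perception to a state delta.
+         self.update = nn.Sequential(
+             nn.Conv3d(channels * 3, 64, 1), nn.ReLU(),
+             nn.Conv3d(64, channels, 1),
+         )
+
+     def forward(self, state, fire_rate=0.5):
+         delta = self.update(self.perceive(state))
+         # Stochastic update: each cell fires independently with prob. fire_rate.
+         mask = (torch.rand_like(state[:, :1]) < fire_rate).float()
+         return state + delta * mask
+
+ # One growth step on a 16-channel 31^3 voxel grid, starting from a seed cell.
+ nca = TinyNCA()
+ state = torch.zeros(1, 16, 31, 31, 31)
+ state[:, :, 15, 15, 15] = 1.0
+ state = nca(state)
+ ```
+
+ Because the same local rule is shared by all cells, an NCA can be trained to grow plausible shapes or textures from very few examples, which is what makes it attractive here despite the scarce training data.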
62
+
63
+ § 2.2 EXPLORING PERSONALITY TRAITS IN ANIMALS
64
+
65
+ As illustrated by Yerkes [93], it is commonplace to regard individual animals as possessing distinct personalities, and other researchers have shown that personality differences do exist and can be measured in animals other than humans [27]. A great number of works have studied pets' personalities, including cats [44], dogs [20], ferrets [78], dolphins [57], and reptiles [85]. Through the literature review, we discovered that cat and dog breeds have been investigated the most, so we further researched work on personality traits among cat and dog breeds. For cat breeds, Milla Salonen et al. suggested that cat breeds group into four clusters when their personality traits are analyzed along three components named aggression, extraversion, and shyness [71]. Other researchers ranked dog breeds on ten behavioral characteristics in three factors (aggression, reactivity, trainability), considering breeds of the three most closely related groupings: the wolf-like, guarding, and herding groups [29]. According to the Fédération Cynologique Internationale (FCI) [37], there are ten dog groups based on various discriminators such as appearance or role. The existing variations of personality traits among different cat and dog breeds motivated us to apply personality analysis to virtual pet design.
66
+
67
+ The Five-Factor Model (FFM) is one of the most commonly used instruments for measuring human personality [53]. The FFM comprises the dimensions Neuroticism (N), Extraversion (E), Openness to Experience (O), Conscientiousness (C), and Agreeableness (A). Some researchers have also applied the FFM to personality tests for other species. For example, Samuel D. Gosling et al. built a preliminary framework with the human Five-Factor Model plus Dominance and Activity for measuring the personalities of 12 nonhuman species, and their results indicated that various primates, nonprimate mammals, and even guppies and octopuses all show individual differences that can be organized along dimensions akin to E, N, and A [26]. Another work presented the "Feline Five," adapted from the FFM with a five-factor analysis: Neuroticism, Extraversion, Dominance, Impulsiveness, and Agreeableness. The "Feline Five" has been shown to provide a more comprehensive overall personality structure for pet domestic cats, and therefore has great potential for measuring our designed virtual pets' personalities. We adapted these instruments in our pet design method, which is introduced in Section 4.
68
+
69
+ § 3 VIRTUAL PET DESIGN IN VOXEL STYLE
70
+
71
+ To investigate the influence of pets' appearance traits on human perception of personality, we developed virtual pet characters inspired by real-life pets. Our methodology involved categorizing primary cat and dog breeds into six distinct clusters based on their appearance traits and creating a virtual pet character to represent each cluster. Using this approach, we aimed to make virtual pets with unique and recognizable physical characteristics while incorporating diverse appearance traits.
72
+
73
+ § 3.1 DESIGN OBJECTIVE
74
+
75
+ To address the research questions, our design objective was to create virtual pet characters with a broader range of appearance traits and associated perceived personality traits. We intended to develop a mapping guideline that linked the appearance traits of the pets with their corresponding personality traits. As such, we hypothesized an appearance-personality mapping to guide our design process. We recognized that individual perceptions might vary; hence, we aimed to ensure a degree of consistency in people's perceptions of our virtual pets.
76
+
77
+ § 3.2 DESIGN BASELINE: CLUSTERS OF CATS AND DOGS
78
+
79
+ After conducting a case study on three types of virtual pets (see Fig. 2), we decided to begin our initial design with dogs and cats, among the most popular and common domestic pets [5]. To achieve our design objective of creating virtual pets with a broader range of appearance traits and perceived personality traits, we compiled a list of common domestic cat and dog breeds and categorized them into clusters based on the distinctive characteristics of different body parts. In doing so, we referred to the classification systems of the FCI [37] and Cat Breeds [62]. We then mapped personality traits to different breeds to identify consistencies within each cluster. Personality traits were transferred from previous studies on cats' and dogs' behavior traits [30, 72]. We merged groups with similar appearance and behavior traits while excluding those with varied behavior traits, dividing the different breeds of cats and dogs into six clusters. Fig. 1 displays the appearance traits, typical breeds, hypothesized personalities, and our corresponding virtual pet characters for each cluster.
80
+
81
+ § 3.3 DESIGN BASELINE: VISUAL EXPRESSION
82
+
83
+ In addition to character archetypes, visual expression plays a crucial role in shaping people's perception of virtual pets [81]. To ensure that our designed pets are displayed in a multi-dimensional form and have a wide range of appearance traits, we created them in 3D using a voxel style. This approach increases our efficiency in character modeling and eliminates the potential negative impact of low aesthetic qualities and intermediate rendering realism on people's perceptions of virtual animal characters [73]. In addition to these benefits, we were inspired by the potential of 3D voxel-style modeling in virtual pet design. We aimed to extend 3D generation techniques into this field, building on the success of voxel-based models in Minecraft and sandbox games, where such techniques are more capable of generating visually appealing virtual pet characters. During our follow-up interviews, we found that people preferred virtual pets in the voxel style over realistic models, further confirming the potential of the voxel style.
84
+
85
86
+
87
+ Figure 1: Design of Cat and Dog Clusters. The pet side view pictures used for reference were obtained from [31] and [32].
88
+
89
90
+
91
+ Figure 2: Case study clusters, (1) natural pets (in green cards): the virtual pets who have a similar appearance and characteristics as the real-life pets, such as dogs, and cats; (2) intelligent pets(in red cards): good teammates for players, who can perform tasks in the virtual environment; (3) fantasy pets(in purple cards): have sci-fi look, and can assist players in exploring in the environment.
92
+
93
+ Visual elements, including shapes, volumes, and colors, are essential components of character design that significantly impact the creation of emotional experiences [76]. To avoid potential influence on the diversity of perceived personality traits between realistic and virtual pets, we controlled the visual elements by following realistic dogs' and cats' body structure, proportion, and color palette in our character designs. This approach ensures that any differences in perceived personality traits between realistic and virtual pets can be attributed to factors other than visual expression.
94
+
95
+ § 3.4 CHARACTER DESIGN
96
+
97
+ We created six characters based on the clusters, selecting one breed within each cluster as the model sample. We used MagicaVoxel ${}^{1}$ to create voxel-style characters and render static pictures. We limited the model sizes to $31 \times 31 \times 31$ voxels to facilitate pet generation.
98
+
99
+ In addition, previous research showed the potential impact of animation on people's perception of virtual pets [75]. Therefore, we designed another version of the virtual pets with natural movement to study the effect of additional expressiveness on people's perceptions. We did not involve facial animation because of the potential negative reaction caused by the animal uncanny valley [73]. We designed a walking animation for the cat clusters and a running animation for the dog clusters based on the nature of these two pet species. We used VoxEdit ${}^{2}$ to build and Blender ${}^{3}$ to render the animation clips.
100
+
101
+ ${}^{1}$ https://ephtracy.github.io/
102
+
103
+ § 4 STUDY 1
104
+
105
+ Study 1 included surveys and interviews to understand users' perspectives on virtual pet characters. The survey was designed with three goals: 1) compare participants' perceived personalities of different styles of pets within the same cluster; 2) evaluate how our designed pets attract people; 3) understand their perceptions toward keeping real pets and virtual pets. The follow-up interview aimed to further understand the reasons behind participants' perceptions based on the results of the online survey. The study obtained ethical approval from the Institutional Review Board.
106
+
107
+ We designed the survey using a mixed 3×6 design (pet character styles × pet clusters). Our experimental conditions for pet character style were: real static pets, static virtual pets, and animated virtual pets. The pets representing the different conditions belonged to the same clusters, which we defined in Section 3.2. Participants were randomly assigned one pet cluster and rated the personalities and overall feelings of all three pet character styles. We then invited 9 participants for follow-up interviews to determine the factors behind people's perceptions. The interview questions covered the reasons for participants' answers, plus two card-sorting sessions for further exploring participants' perceptions based on their answers.
108
+
109
+ § 4.1 PARTICIPANTS
110
+
111
+ Participants voluntarily self-selected to complete the survey and consented before taking it. We recruited 33 participants (12 males, 20 females, 1 non-binary) via social media and word of mouth. Participants were aged 18-24 (N=24) or 25-34 (N=9). 24 participants had experience keeping real pets; 7 had dogs and 7 had cats. For virtual pets, 13 participants reported having played virtual pet games before, including Animal Crossing (N=3), Tamagotchi (N=2), Tencent QQ Pet (N=6), and others (N=4).
112
+
113
+ § 4.2 MEASUREMENT
114
+
115
+ The online survey consists of four parts. The first three parts measured the perceived pet personality under three conditions: one real pet picture, one static virtual pet picture, and one virtual pet animation clip were randomly distributed across the three parts. All real pet pictures were downloaded online, with the same white background and showing the pet's whole body. We downloaded ten pictures for each cluster and randomly displayed one in the survey. The researchers created the static pictures and animation clips of the virtual pets.
116
+
117
+ We measured the perceived personality with an adapted 7-point scale. We designed the scale based on the Feline Five [49] and pooled items from previous personality assessments of cats [47] and dogs [70]. The scale included four measurement dimensions that are general and commonly used to measure pets' personalities. Each dimension had four pairs of contrasting description items. All 32 items were shown in random order in each part of the survey. Participants rated each item from 1 to 7 points according to the extent to which they agreed with the description corresponding to the material provided. After the 32-item chart, two questions followed to determine whether the participants knew the pet's breed in the picture and their overall feeling about this pet. The last part assessed participants' demographic information, experience keeping pets, and attitudes toward pets.
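+
+ For concreteness, a minimal sketch of how such ratings could be aggregated is shown below. The dimension names follow the four traits reported later (A, E, I, N), while the item indexing and reverse-keying scheme are our own assumptions rather than the authors' scoring script:
+
+ ```python
+ # 4 dimensions x 4 pairs of contrasting items = 32 items, each rated 1..7.
+ DIMENSIONS = ["Agreeableness", "Extraversion", "Impulsiveness", "Neuroticism"]
+
+ def score_dimensions(ratings):
+     """ratings maps (dimension, item_index 0..7) -> a 1..7 rating; odd
+     indices are treated as the reverse-keyed half of each contrasting pair."""
+     scores = {}
+     for dim in DIMENSIONS:
+         total = 0
+         for i in range(8):
+             r = ratings[(dim, i)]
+             total += r if i % 2 == 0 else 8 - r  # reverse-key via 8 - rating
+         scores[dim] = total  # per-dimension range: 8 (low) to 56 (high)
+     return scores
+
+ neutral = {(d, i): 4 for d in DIMENSIONS for i in range(8)}
+ print(score_dimensions(neutral))  # all-neutral ratings score 32 per dimension
+ ```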
118
+
119
+ The semi-structured interview had four themes: pet personality, pet appearance, pet interaction, and feelings about our designed pet characters. We collected participants' survey answers and visualized them in a table shown during the interview to refresh their memory. Besides, we created two card-sorting sessions based on the data from the open-ended questions. One aimed to determine participants' perceived cuteness, which had been proposed in the surveys as a primary expected feature of virtual pets; we created cards with our designed and similar voxel characters and asked the participants to select the cards they thought were cute. The other card sorting focused on the expected interaction methods with virtual pets, which had been asked in one of the open-ended questions in the survey. We coded participants' answers and selected nine of them to make cards, then asked the participants to pick and rank the cards according to their expected interactions.
120
+
121
+ § 4.3 PROCEDURE
122
+
123
+ Participants were randomly distributed into one of six control conditions and completed a Qualtrics form. On the last question, we asked if they were willing to participate in our follow-up interview. After analyzing the data of the surveys, we emailed ten participants whose answers were consistent or contrary to our data results. Nine of them consented to take the interview. Our interviews were conducted online via the Feishu meeting. The participants first had five minutes to read and sign the consent form. Then, we conducted a 40-min semi-structured interview with our participants; participants received 100 RMB as compensation for their time and contribution.
124
+
125
+ § 4.4 RESULTS
126
+
127
+ The quantitative and qualitative results showed that the style and appearance of virtual pets significantly impact participants' perceptions of their personalities. We also found that participants connected perceived personalities with the appearances of virtual pets. We conclude with design suggestions involving expected pet types, personality presentation factors, and interaction with pets, which can benefit future virtual pet design.
128
+
129
+ § 4.4.1 PERCEIVED PERSONALITY IN DIFFERENT STYLES
130
+
131
+ The results of the repeated-measures ANOVA show that the style of pets (realistic, virtual) and the presentation (static, animation) significantly affected people's perceptions of their personality traits, especially for Neuroticism and Agreeableness. Fig. 4 shows the distribution of scores on the four personality traits. People's perception of agreeableness was primarily influenced by pet style (F(2,29) = 8.10, p < 0.01). The results indicated that participants thought voxel pets were much more agreeable (mean = 40.58, SD = 6.95) than realistic pets (mean = 33.7, SD = 8.42); voxel animations received an agreeableness score close to that of static voxel pets (mean = 37.7, SD = 7.82). For Neuroticism, realistic pets received the highest score (mean = 31.6, SD = 5.74), and both static voxel pets and voxel animations were rated less neurotic (static voxel pets: mean = 26.8, SD = 5.42; animation: mean = 27.87, SD = 5.77). Participants' perceptions of pets' extraversion were not significantly influenced by pet style (F(2,29) = 1.28, p = 0.28); all three styles received extraversion scores relatively close to each other (realistic pets: mean = 36.67, SD = 7.61; static voxel pets: mean = 33.73, SD = 7.47; animation: mean = 34.5, SD = 5.45). Impulsiveness remained stable when the style changed (F(2,29) = 0.59, p = 0.55), with most participants giving similar scores to all three styles (realistic pets: mean = 36.67, SD = 7.61; static voxel pets: mean = 33.73, SD = 7.47; animation: mean = 34.5, SD = 5.45).
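+
+ For readers who want to run the same kind of test, a one-way repeated-measures ANOVA over the three style conditions can be expressed as below. The data here are synthetic placeholders and the column names are ours; the paper does not state which analysis software was used:
+
+ ```python
+ import numpy as np
+ import pandas as pd
+ from statsmodels.stats.anova import AnovaRM
+
+ # Hypothetical long-format data: one agreeableness score per participant
+ # per style condition; the study's raw ratings are not public.
+ rng = np.random.default_rng(0)
+ df = pd.DataFrame({
+     "pid": np.repeat(np.arange(30), 3),
+     "style": ["realistic", "static_voxel", "animation"] * 30,
+     "agreeableness": np.tile([34.0, 40.0, 38.0], 30) + rng.normal(0, 5, 90),
+ })
+
+ # One-way repeated-measures ANOVA: does pet style affect agreeableness?
+ print(AnovaRM(data=df, depvar="agreeableness", subject="pid",
+               within=["style"]).fit())
+ ```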
132
+
133
+ Through the interviews, we identified three main reasons for these results. Firstly, the design details played a significant role in expressing agreeableness (N=8). For example, a participant who gave the voxel pet high agreeableness scores explained, "I might think voxel is a little bit chubby and silly, but I think it's a little more friendly" (P15). Secondly, participants' evaluations were influenced by their previous experience and related to individual differences in spending time with pets (N=3). A participant who rated the realistic pets highest in neuroticism said, "If it was a real pet, it reminded me that it could do things that were threatening to me. I was bitten by a dog when I was a child, whereas these virtual things couldn't really threaten me" (P25). Thirdly, some participants rated personality by inferring the pet breeds (N=3). P33 told us, "The real pet photo is of a Muppet cat, albeit well-behaved. But voxel's static and animated features are not reminiscent of a cat running amok." Thus, we could understand why voxel had the best effect for showing agreeableness, while real pet pictures made the pets look more neurotic.
134
+
135
+ ${}^{2}$ https://www.voxedit.io/
136
+
137
+ ${}^{3}$ https://www.blender.org/
138
+
139
140
+
141
+ Figure 3: Procedure of Study 1 and Study 2
142
+
143
+ § 4.4.2 ASSOCIATION BETWEEN PERSONALITY AND APPEARANCE.
144
+
145
+ Our goal was to create pets with various personality types based on appearance and perceived personality traits, resulting in six pet clusters, as depicted in Fig. 1. Our survey results indicated that participants' perceptions of pets' personalities aligned with our design intentions. For instance, participants rated the first cat cluster (fat yellow cat) and the second dog cluster (medium-sized yellow-white dog) as having the highest agreeableness scores, consistent with our pet personality design goals.
146
+
147
+ Moreover, our analysis of participants' extroversion rankings for the cat and dog clusters aligned with our initial design intentions. Specifically, participants' extroversion rankings for the cat clusters were in the order of cat cluster 2, cat cluster 3, and cat cluster 1. In contrast, their rankings for the dog clusters were in the order of dog cluster 3, dog cluster 2, and dog cluster 1. These results suggest that our virtual pet design successfully conveyed various personality traits through appearance and that participants could perceive and rate these traits accurately.
148
+
149
+ § 4.4.3 PEOPLE PREFER PETS WITH CUTE APPEARANCES BUT NEUROTIC PERSONALITIES.
150
+
151
+ Cuteness was a word repeatedly mentioned by participants during the interviews. People like cute pets because cute looks make them feel safe, and because pets that look cute seem easier to keep. For instance, one participant who liked cute pets explained, "Cute looking pets are better behaved and easy to keep, while naughty pets may be more nerve-racking" (P16). Cuteness has some common, distinct appearance patterns. One is body shape: participants associated small size, a short and fat build, short legs, and round ears with cuteness. "The pet's small size makes it less aggressive" (P4), and "The ears of this pet are round, which makes me think she is very friendly" (P16). Another key factor that makes pets look cute is coat color: as mentioned by interviewees, warm, bright, heterochromous, and clean colors are the colors of cuteness in their minds. Other factors that contribute to a pet's cuteness include special shapes, such as a lightning-shaped tail, and beards and dimples are definite plus points for being cute. All interviewees treated cuteness as a sign of agreeableness; however, some participants preferred a contrast between personality and appearance (N=5). Specifically, they like virtual pets who have cute appearances but are inclined to be neurotic and cold in personality. One interviewee told us, "I like those crazy pets. They're more neurotic. Because animals don't do it like that, you might have difficulty understanding it, and there's a great sense of mystery" (P16).
152
+
153
+ In short, we intended to construct a relationship between virtual pets' appearances and personalities, and the user study supported the design choices embodied in our design work. All interviewees thought of cuteness as a sign of agreeableness, and we further discovered that people prefer virtual pets with cute looks but neurotic personality traits.
154
+
155
+ § 4.4.4 SUGGESTIONS FOR VIRTUAL PET TYPE AND INTERACTION DESIGN IN PET GAMES
156
+
157
+ Through the open questions in the survey, we found 3 participants expected to keep fantasy pets, such as dragons, dinosaurs, and sci-fi pets that cannot be found in real life, while 7 participants expected to keep cats and dogs. When we asked why they chose these pet types, we discovered that some people had experience petting a certain pet type; one of them (N = 1) decided to keep the same type as a virtual pet, whereas other participants did the opposite. We identified two main reasons for this. One is that pet keepers recall their memories with pets, which can be positive or negative, and this drives their decision on whether to keep the same pet type. The other concerns specific pet personality traits; that is, people become attached to certain personality traits of pets, such as agreeableness, and consequently regard pets who possess these traits as their first choice.
158
+
159
+ We also discovered interesting findings about pet animation through the interviews. On the one hand, the animation of a pet combined with sound can convey its personality more directly. As P4 put it: "I think animation is very important for the expression of personality, especially the voice, when it is happy and when it is angry, and when it is threatening, the voice is completely different." On the other hand, the movements of specific body parts, for instance the ears, tail, and legs, are important references for the perception of personality (N = 3). One interviewee told us, "I think sometimes the tail of a dog is more informative, that is, you can tell if he is happy or unhappy by his tail, so you can tell what kind of mood he is in, maybe he is extroverted." (P16).
160
+
161
+ In addition to the pet type, we investigated the expected interactions with virtual pets in the virtual world. Through the survey's open questions and the interviews, we identified the three most popular interactions: talking (9 votes), touching (6 votes), and feeding (6 votes). When asked why they wanted talking to be the main interaction with their virtual pet, participants said they want to communicate in the same language to understand the pet better. For example, "I want to be able to talk to him and he understands and it can react. I'm more interested in the behavior of his feedback than the content of the conversation." (P25). Other popular interaction types among the participants were treasure hunting and adventure.
162
+
163
164
+
165
+ Figure 4: Distribution of scores on four personality traits from 30 people.
166
+
167
+ § 5 PET GENERATION
168
+
169
+ We have developed a unique 3D model generator that automatically creates virtual pets by hybridizing existing models and creating new ones based on the input. The following section explains the generator's process, which includes dividing and recombining models, dyeing them for harmonious colors, and texturing them for unique patterns.
170
+
171
+ § 5.1 GENERATION
172
+
173
+ As our research developed, the demand for a 3D model generator increased. Firstly, the number of manually created pet models was too small, and more models were needed to establish the generality of the research outcomes. Secondly, many models are required to pinpoint more precisely which feature, color, or combination shapes people's impressions of a pet. Moreover, it is also an exploration of generation techniques, since today's outstanding generative models are mostly 2D-based while the demand for 3D models is increasing rapidly. We therefore implemented a generator that creates virtual pets automatically: it takes several 3D pet models as input and outputs new models based on them.
174
+
175
+ The generator hybridizes existing models and creates new models from them. Input models are parent models, and generated models are children. Every child model inherits appearance features from all of its parents (in a generation, a child model can have more than two parents). Random mutations are applied during hybridization to ensure that every newly generated child model is unique even when children share the same parents, guaranteeing variety and a larger number of generated models.
176
+
177
+ The generation process, shown in Fig. 5, consists of three main steps: division and recombination, dyeing, and texturing.
178
+
179
+ § 5.1.1 DIVIDE AND RECOMBINATION
180
+
181
+ The divide-and-recombination process gives newly generated models their basic shape.
182
+
183
+ In this research, the generator takes three cat models as input and divides each into five parts: the head, ears, body, tail, and limbs. The generator loops over all possible combinations of these parts to obtain as many new model shapes as possible. When parts are combined, they are aligned automatically, since the input models may differ in size and transformation.
184
+
185
+ After enumerating all possible combinations, the generator rates the results and picks the reasonable ones to send to the next step. For example, it rejects combinations such as a cat with the fattest body and the thinnest tail, which would not make sense in real life.
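+
+ A minimal sketch of this enumerate-and-filter loop, assuming each parent model is represented as a dict of named part meshes and that `plausible` is a hypothetical scoring function standing in for the generator's rating step:
+
+ ```python
+ # Illustrative sketch of divide and recombination; the part names match
+ # the paper, but the scoring function and threshold are assumptions.
+ from itertools import product
+
+ PARTS = ["head", "ears", "body", "tail", "limbs"]
+
+ def recombine(parents, plausible, threshold=0.5):
+     """Enumerate every part combination; keep only plausible children."""
+     children = []
+     for choice in product(range(len(parents)), repeat=len(PARTS)):
+         child = {part: parents[i][part] for part, i in zip(PARTS, choice)}
+         # rate the child, e.g. penalizing mismatched proportions such as
+         # the fattest body paired with the thinnest tail
+         if plausible(child) >= threshold:
+             children.append(child)
+     return children
+ ```
+
+ With three parents and five parts, this enumerates 3^5 = 243 candidate shapes before filtering.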
186
+
187
+ § 5.1.2 DYEING
188
+
189
+ After division and recombination, many models with reasonable shapes have been generated. However, we cannot use these models directly because they may have inharmonious colors. Usually, the inheritance of color follows some rules: a child is more likely to have a mixture of its parents' colors or a color that transitions between them. The models generated after recombination, however, inherit all the colors and patterns of their parents' skins; a cat can have different colors on its head, body, and limbs with no transition between them, which makes those models look unrealistic. Therefore, all models are repainted after recombination, receiving a color closer to one of their parents or in the middle of their parents' colors.
190
+
191
+ During dyeing, the generator applies random mutations. After the overall color is generated and the palette chosen, the color may mutate several shades darker or brighter. Occasionally, a larger mutation assigns the model an entirely new palette, which prevents models from losing color variety over repeated rounds of dyeing.
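+
+ A minimal sketch of this dyeing rule; the RGB interpolation, the ±20 brightness mutation, and the 5% palette-swap probability are illustrative assumptions rather than the generator's exact parameters:
+
+ ```python
+ # Sketch of the dyeing step: inherit a color between two parents,
+ # then apply small and (rarely) large random mutations.
+ import random
+
+ def dye(parent_colors, palette_pool, p_palette_swap=0.05):
+     a, b = random.sample(parent_colors, 2)   # two of the (2+) parents
+     t = random.random()                      # closer to one parent, or in between
+     color = [round(a[i] * (1 - t) + b[i] * t) for i in range(3)]
+     delta = random.randint(-20, 20)          # a few shades darker or brighter
+     color = [min(255, max(0, c + delta)) for c in color]
+     if random.random() < p_palette_swap:     # rare large mutation: new palette
+         color = list(random.choice(palette_pool))
+     return tuple(color)
+ ```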
192
+
193
+ § 5.1.3 TEXTURING
194
+
195
+ Texturing is a random process that can give models unique patterns or textures on the skin.
196
+
197
+ During texturing, each time a new model is produced by the previous two steps, an additional model called a "mask" is generated. The mask is a randomly generated voxel model of the same size as the generated pet model, and every newly generated pet model gets its own mask. The pet and mask models are then placed in the same coordinate system. After that, the generator iterates over every voxel of the pet model, and wherever the mask and the pet overlap, the color of that voxel is changed.
198
+
199
+ Different masks give different patterns to a pet model. For example, a mask of many little floating balls makes a pet spotty, while a mask of many vertical planes draws stripes on the pet. Since the mask is randomly generated, every pet can have its own unique pattern.
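+
+ A minimal sketch of the voxel-overlap recoloring, assuming the pet and mask are numpy arrays already placed in the same coordinate system (the array shapes and single pattern color are assumptions for illustration):
+
+ ```python
+ # Sketch of mask-based texturing: recolor pet voxels where the
+ # randomly generated mask overlaps the pet.
+ import numpy as np
+
+ def apply_mask(pet_colors, pet_occupied, mask_occupied, pattern_color):
+     # pet_colors: (X, Y, Z, 3) RGB grid; *_occupied: (X, Y, Z) booleans
+     overlap = pet_occupied & mask_occupied
+     textured = pet_colors.copy()
+     textured[overlap] = pattern_color        # e.g. spots or stripes
+     return textured
+ ```
+
+ A mask of scattered spheres produces spots and a mask of vertical planes produces stripes precisely because only the overlapping voxels are recolored.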
200
+
201
+ A Neural Cellular Automaton (NCA) is used to generate the masks. The 3D NCA model used in the generator takes voxel 3D models as input and outputs voxel 3D models. The network was trained to grow simple objects, such as spheres or plates, from a single dot as a seed. When generating a mask, a random seed (many randomly distributed dots in 3D space) is fed to the pre-trained network; as the network runs, the dots grow toward the trained objects. The growing process is stopped after a random number of steps, and the output model is taken as the final mask.
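+
+ To make the mask-growing idea concrete, here is a heavily simplified 3D NCA update loop; the grid size, channel count, Laplacian perception kernel, random linear update rule, and thresholds are placeholder assumptions, not the trained network used in the generator:
+
+ ```python
+ # Toy 3D neural cellular automaton: grow from random seed dots and
+ # stop after a random number of steps, as described above.
+ import numpy as np
+ from scipy.ndimage import convolve
+
+ C, N = 8, 24                                   # channels, grid size
+ rng = np.random.default_rng(0)
+ state = np.zeros((C, N, N, N), dtype=np.float32)
+ seeds = rng.integers(0, N, size=(3, 16))       # randomly distributed dots
+ state[0, seeds[0], seeds[1], seeds[2]] = 1.0
+
+ laplace = np.zeros((3, 3, 3), dtype=np.float32)  # 3D Laplacian "perception"
+ laplace[1, 1, 1] = -6.0
+ for d in [(0,1,1), (2,1,1), (1,0,1), (1,2,1), (1,1,0), (1,1,2)]:
+     laplace[d] = 1.0
+
+ W = rng.normal(0, 0.1, size=(C, 2 * C)).astype(np.float32)  # stand-in "MLP"
+
+ def step(state):
+     percep = np.stack([convolve(ch, laplace, mode="constant") for ch in state])
+     feats = np.concatenate([state, percep], axis=0)          # (2C, N, N, N)
+     delta = np.einsum("ij,jxyz->ixyz", W, feats)
+     update = rng.random((1, N, N, N)) < 0.5                  # stochastic updates
+     return state + delta * update
+
+ for _ in range(int(rng.integers(10, 40))):     # stop after random steps
+     state = step(state)
+ mask_voxels = state[0] > 0.1                   # threshold into a voxel mask
+ ```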
202
+
203
+ § 6 STUDY 2
204
+
205
+ We conducted Study 2 through surveys and semi-structured interviews to gather users' feedback on our generated characters. This study aimed to explore how participants perceive virtual pets' personalities by observing their appearances using quantitative and qualitative methods.
206
+
207
208
+
209
+ Figure 5: Generation Procedure
210
+
211
+ § 6.1 PARTICIPANTS
212
+
213
+ We employed convenience sampling by posting our survey link through WeChat subscriptions. 57 participants (19 males, 33 females, 3 non-binary, and 2 who preferred not to say) voluntarily completed the survey; none had taken part in Study 1. The participants were aged 18-24 years (N = 39) and 25-34 years (N = 18). 47 participants had previous experience owning real pets, while 41 had experience owning virtual pets. Following the survey, 12 participants (6 males, 5 females, and 1 who preferred not to say) voluntarily took part in the interview session. The interviewees were aged 18-24 years (N = 7) and 25-34 years (N = 5). Of these interviewees, 9 had experience owning real pets, while 10 had experience owning virtual pets.
214
+
215
+ § 6.2 MEASUREMENT
216
+
217
+ The research employed an online survey consisting of four distinct parts: the IPIP-20 [17], the Motives for Online Gaming Questionnaire (MOGQ) [16], a likability assessment of virtual pets, and demographic data collection. The IPIP-20 is a brief instrument for assessing the FFM personality traits using resources from the International Personality Item Pool (IPIP). This study used the questionnaire to identify participants' personalities, as our modified pet personality questionnaire in Study 1 shared four personality traits with the Big Five. The primary objective was to correlate participants' personality traits with their perceived personalities of virtual pets to investigate the link between the two constructs. Both the IPIP-20 and the MOGQ have demonstrated reliability and validity in measuring people's personalities and gaming behavior [16, 17]. Given the convenience sampling technique used to recruit participants, which may have reached a broad Chinese population, the study included validated Chinese translations of the IPIP-20 [39] and the MOGQ [89]. In the likability assessment section, participants were presented with ten images, comprising three manually designed virtual pets (M1, M2, M3) and seven computer-generated ones (G1 to G7), shown in Fig. 6, and were asked to rate their likability on a scale of 1 to 5. The demographic section of the survey collected data on participants' gender, age, education level, and pet-raising experience.
218
+
219
+ The 45-minute follow-up interview consisted of a question part and a card-sorting session. The interview guide included a series of open-ended questions about the virtual pets' personalities, such as "What kind of personality do you think this virtual pet has?" and "What do you like and dislike about this virtual pet's appearance?"; refer to the appendix for the interview outline. For the card-sorting session, we created a personality mapping template in Figma, consisting of four areas representing the four personality traits and pictures of the ten virtual pets as cards to be sorted.
220
+
221
+ § 6.3 PROCEDURE
222
+
223
+ All participants completed a Qualtrics form that contained the four test parts. After analyzing the survey data, we contacted, via email, twenty participants whose personalities and pet-keeping experiences were diverse. Twelve of them agreed to take part in the interview, which we conducted through Voov Meeting. We informed participants of the study's purpose and procedures and obtained their written consent before the study began. Participants received 100 RMB as compensation.
224
+
225
+ § 6.4 QUANTITATIVE RESULTS
226
+
227
+ The statistical results showed that appearance features played a significant role in participants' likability ratings of virtual pets, specifically cobby cats with light-colored coats and decorations. Additionally, we identified significant correlations between certain personality and player-type traits and likability ratings of virtual pets.
228
+
229
230
+
231
+ Figure 6: Generated Virtual Pets and Manually Designed Virtual Pets.
232
+
233
+ Table 1: Mean and SD of the Likability of Each Virtual Pet
234
+
235
+ | Virtual Pet ID | Mean | SD |
+ | --- | --- | --- |
+ | M1 | 3.72 | 1.18 |
+ | M2 | 2.82 | 1.30 |
+ | M3 | 2.96 | 1.35 |
+ | G1 | 2.68 | 1.36 |
+ | G2 | 2.70 | 1.27 |
+ | G3 | 2.46 | 1.38 |
+ | G4 | 2.67 | 1.26 |
+ | G5 | 3.70 | 1.30 |
+ | G6 | 3.35 | 1.79 |
+ | G7 | 2.53 | 1.27 |
270
+
271
+ § 6.4.1 PREFERENCE TOWARD COBBY CATS WITH LIGHT-COLORED COAT
272
+
273
+ We conducted a repeated-measures ANOVA to compare the likability ratings of the ten virtual pets. Table 1 presents the descriptive statistics for these ratings, and Fig. 6 illustrates each virtual pet. The ANOVA revealed a significant main effect of virtual pet (F(9, 81) = 12.02, p < .001). To identify which virtual pets differed significantly from each other, we conducted post hoc tests using the Holm correction to adjust for multiple comparisons. The results showed that the likability ratings for G5 (a cobby cat with a white coat and orange coloring on its ears, breast, and tail) and M1 (a cobby cat with an orange coat and brown stripes on the back) were significantly higher than those for all other virtual pets except G6. In turn, the likability rating for G6 was considerably higher than those for G3 and G7, which received the lowest mean ratings. These findings suggest that appearance features significantly influence participants' likability ratings of virtual pets. Specifically, cobby cats with light-colored coats, such as G5, G6, and M1, were rated as more likable than other virtual pets. Additionally, color decorations on the fur, such as the orange coloring on the ears, tail, and breast of G5 and G6 and the brown stripes on the back of M1, also appeared to positively impact participants' likability ratings.
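+
+ A sketch of this analysis pipeline using the pingouin library; the synthetic long-format table below merely stands in for the real survey data (assumed columns: participant, pet, rating):
+
+ ```python
+ # Repeated-measures ANOVA with Holm-corrected post hoc comparisons.
+ import numpy as np
+ import pandas as pd
+ import pingouin as pg
+
+ pets = ["M1", "M2", "M3", "G1", "G2", "G3", "G4", "G5", "G6", "G7"]
+ rng = np.random.default_rng(1)
+ df = pd.DataFrame({                            # placeholder ratings
+     "participant": np.repeat(np.arange(10), len(pets)),
+     "pet": pets * 10,
+     "rating": rng.integers(1, 6, size=10 * len(pets)),
+ })
+
+ aov = pg.rm_anova(data=df, dv="rating", within="pet",
+                   subject="participant", detailed=True)
+ print(aov)                                     # main effect of virtual pet
+
+ # Holm-corrected pairwise comparisons between the ten pets
+ # (pg.pairwise_ttests in older pingouin releases)
+ posthoc = pg.pairwise_tests(data=df, dv="rating", within="pet",
+                             subject="participant", padjust="holm")
+ print(posthoc[["A", "B", "T", "p-corr"]])      # Holm-adjusted p-values
+ ```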
274
+
275
+ § 6.4.2 EFFECTS OF PERSONALITY AND PLAYER TYPE ON VIRTUAL PET LIKABILITY RATINGS
276
+
277
+ We examined the relationship between participants' personalities and player types and their ratings of virtual pets. Before analysis, we checked the normality of the 21 variables (10 likability ratings, 5 personality traits, 6 player-type traits) using the Shapiro-Wilk test; more than half of the variables (N = 16) were significantly non-normal (p < 0.05). Given the non-normality of all likability ratings of the virtual pets, non-parametric tests were used to assess the relationships between variables. Spearman's rank correlation showed that six pairs of variables were significantly correlated: Extraversion and G2 (p = 0.006), Social and G1 (p = 0.002), Social and G6 (p = 0.05), Social and G7 (p = 0.03), Skill Development and G2 (p = 0.008), and Skill Development and G7 (p = 0.005). The correlation coefficient between Extraversion and G2 was negative (Coef = -0.36), while the correlation coefficients of the other five pairs were positive, ranging from 0.26 to 0.41 (see Table 2).
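+
+ A minimal sketch of the normality check and rank correlation with scipy; the `survey` DataFrame and its column names are assumptions mirroring the variables described above:
+
+ ```python
+ # Shapiro-Wilk normality screen, then Spearman's rank correlation.
+ import pandas as pd
+ from scipy.stats import shapiro, spearmanr
+
+ def correlate(survey: pd.DataFrame, trait: str, pet: str, alpha: float = 0.05):
+     non_normal = [c for c in survey.columns
+                   if shapiro(survey[c]).pvalue < alpha]
+     print(f"{len(non_normal)} of {survey.shape[1]} variables are non-normal")
+     # non-normality motivates the non-parametric Spearman test
+     rho, p = spearmanr(survey[trait], survey[pet])
+     return rho, p
+ ```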
278
+
279
+ Table 2: Correlation Coefficient Between Participant's Features and Likability of Virtual Pets
280
+
281
+ | Scale | Feature | DV | Coef | p-value |
+ | --- | --- | --- | --- | --- |
+ | IPIP-20 | Extraversion | G2 | -0.36 | 0.006* |
+ | MOGQ | Social | G1 | 0.41 | 0.002* |
+ | MOGQ | Social | G6 | 0.26 | 0.05* |
+ | MOGQ | Social | G7 | 0.29 | 0.03* |
+ | MOGQ | Skill Development | G2 | 0.35 | 0.008* |
+ | MOGQ | Skill Development | G7 | 0.37 | 0.005* |
304
+
305
+ § 6.5 QUALITATIVE RESULTS OF INTERVIEW
306
+
307
+ The qualitative results show a strong correlation between virtual pets' physical features and personalities attributed by users. Appearance plays a significant role in conveying their personalities, as participants identified specific personality traits based on design elements like body shape, skin color, facial features, and expressions. Additionally, users' personalities can influence their preferred pet personalities, explaining why some prefer pets with corresponding personalities. Most participants found the voxel-style pets more relaxing and easier to create with intricate details than the realistic-style pets. Overall, appearance significantly shapes users' perceptions of virtual pets, and their preferences are related to the emotions evoked by the pet styles.
308
+
309
+ § 6.5.1 CORRELATION BETWEEN VIRTUAL PETS' DESIGN ELEMENTS AND PERSONALITIES ATTRIBUTED BY USERS.
310
+
311
+ Through our study, we discovered that the appearance of our virtual pets was instrumental in conveying their personalities to participants. Most participants (N = 7) identified specific personality traits based on certain design elements, such as the shape of a pet's body and its skin color. For instance, participants associated warm, light-colored skin and a fat, round body with agreeableness traits. One participant remarked that the pets' warm and light appearance made them feel tame and sweet, like dessert. Seven participants (N = 7) also noted that the facial features and expressions of the pets influenced their perceived personalities. For instance, one participant pointed out that the kitten's dark middle face gave it a neurotic look, which they associated with gloominess and neuroticism. We include a diagram in Fig. 7 that maps all the appearance features mentioned in the interviews to the personality traits attributed to the pets. In conclusion, our findings indicate that appearance significantly shapes users' perceptions of virtual pets.
312
+
313
+ § 6.5.2 PARTICIPANTS' PERSONALITIES WERE RELATED TO THEIR PET CHOICES
314
+
315
+ Our interviews reveal a relationship between participants' personalities and the personalities of their preferred pets. Ten participants had personalities similar to their preferred pets (N = 10). Eight of them admitted that they preferred pets with personalities similar to their own (N = 8), while the rest made this choice unconsciously (N = 2). According to the analysis, participants who liked their own personalities preferred pets with similar personalities (N = 8); on the contrary, those who disliked aspects of their own personalities did not like pets with similar personalities (N = 2). For instance, P5 mentioned, "Maybe it's because I'm impulsive, so I don't like (the pets that are impulsive)." Thus, participants' personality preferences influence their choice of pets.
316
+
317
+ § 6.5.3 COMPARISON OF PARTICIPANTS' PREFERENCES FOR VOXEL STYLE AND REALISTIC STYLE
318
+
319
+ Most participants preferred the voxel style to the realistic style (N = 8), while three preferred the realistic style (N = 3), and two accepted both styles (N = 2). Based on the analysis, participants' style preferences were related to the emotions evoked by each style. Participants who preferred the voxel style felt that it made them feel more relaxed (N = 8), while the realistic style made them feel scared and overwhelmed (N = 4). For instance, P11 explained, "I think that this cat's eye and its overall appearance makes me feel Uncanny Valley." Besides, although seven of the pets were machine-generated (G1 to G7), this did not affect participants' style preferences. Participants could not distinguish between machine-generated pets and manually designed pets (N = 4), and they believed that machine-generated pets basically had characteristics similar to real pets (N = 4).
320
+
321
+ § 7 DISCUSSION
322
+
323
+ We conducted this study to investigate players' perceptions of virtual pets' personalities and to uncover the relationship between personality and appearance. In the following sections, we discuss our findings in light of the results.
324
+
325
+ § 7.1 PERCEIVING VIRTUAL PET PERSONALITIES THROUGH STYLE AND REPRESENTATION
326
+
327
+ In our first study, we investigated how the style (voxel or realistic) and representation (static or animated) of virtual pets influence users' perceptions of their personalities. Our findings revealed that both factors significantly affected users' perception of virtual pet personalities. Specifically, participants perceived virtual pets with the voxel style as friendlier, cuter, and more playful than those with the realistic style. This finding is noteworthy because it contrasts with previous research that found similar personality ratings for realistic and cartoon avatars of virtual human characters [69]. We suggest that users' preference for the voxel style may be due to its association with an abstract and cartoonish aesthetic, which enhances the presentation of pets' personalities. Our interview results further support this interpretation, as participants expressed a greater attachment to virtual pets with the voxel style, citing their agreeableness and cuteness and the greater imagination space offered by the style. In contrast, some participants found that the realistic style made virtual pets feel robotic and uncomfortable, reducing their emotional connection to them, a phenomenon known as the uncanny valley [56]. These findings underscore the significance of considering virtual pet style and representation in design, as they can impact users' perceptions of digital characters' personalities.
328
+
329
+ § 7.2 THE LINK BETWEEN VIRTUAL PETS' PERSONALITIES AND APPEARANCES
330
+
331
+ We aimed to explore the relationship between virtual pets' appearance and their perceived personality traits. Building upon previous research by Hanna Ekström [7], which suggests that visual cues such as shape and proportions can significantly influence how viewers perceive a character's personality traits, we designed six pet clusters with different visual cues to present various personality traits, as shown in Fig. 1. Study 1 found that participants' perceptions of virtual pets' personalities aligned with our design intentions. Specifically, participants rated cat cluster 1 with a high agreeableness score and cat cluster 2 as more extroverted, consistent with previous research suggesting that round and soft shapes are associated with friendliness and warmth, whereas angular and sharp shapes convey aggression and danger [7].
332
+
333
+ To further explore the relationship between virtual pets' appearance and perceived personality traits, we conducted study 2. Here, we utilized machine learning techniques to generate more voxel cat pictures and evaluated their personality presentation, as shown in Fig. 7. Our findings suggest that skin color is the most notable visual cue for describing voxel pets' personality traits. Participants perceived cats with warm-tone skin color as more friendly and sweet, while markings on a cat's face were associated with neuroticism. Additionally, participants linked a fat and round body shape with agreeableness and extroversion traits. In contrast, a towering body was associated with impulsiveness traits, and a small body was linked to neuroticism. Furthermore, we found that other parts of the virtual pets, such as the head, legs, and tails, could also provide visual cues for conveying personality traits.
334
+
335
+ Overall, our findings suggest that visual cues can significantly influence how virtual pets' personalities are perceived and that different parts of a virtual pet's appearance can provide valuable information for conveying personality traits. These results could be useful for designing virtual pets that accurately convey specific personality traits and enhance user engagement in future virtual pet game design.
336
+
337
+ § 7.3 DESIGN IMPLICATIONS FOR VIRTUAL PET CHARACTER DESIGN
338
+
339
+ Our analysis of studies one and two leads to several design implications for virtual pet character design. These include considering players' preferences for virtual pets, designing with players' personalities, and developing interacting features in virtual pet games.
340
+
341
+ § 7.3.1 PREFERENCE ON VIRTUAL PETS
342
+
343
+ The results of our study suggest that players generally prefer virtual pets in the form of cats and dogs, with some expressing interest in fantasy pets like dragons. Interestingly, our research also showed that participants preferred virtual pets with a neurotic personality and a cute appearance, with common traits including warm-tone skin colors, large eyes, and fat body shapes. These findings align with previous research that suggests people prefer dogs' features associated with the infant schema [35].
344
+
345
+ Additionally, our study found that the style and presentation of virtual pets had a significant impact on players' perceptions of their personalities. The majority of participants showed a keen preference for the voxel style, finding its abstract and cute appearance calming and potentially effective in reducing anxiety and depression compared to a realistic style. This result is consistent with previous research suggesting that MR-based interaction with virtual animals can reduce mental stress and induce positive emotions [58]. However, our interview results revealed that the visual design of the virtual animal used in that previous study was realistic, which could potentially cause players to experience the uncanny valley effect, in which a realistic but not-quite-natural appearance causes unease or discomfort.
346
+
347
348
+
349
+ Figure 7: This figure depicts the relationship between the appearances of virtual pets and their perceived personalities. The left panel shows how different body parts of virtual pets relate to personality traits (A, E, I, N), with the frequency of each trait indicated by the corresponding color. The right panel displays a virtual pet image with labeled body parts for reference.
350
+
351
+ We also identified the most popular interaction schemes with virtual pets, such as talking, touching, and feeding, which can serve as a reference for future virtual pet game design. Additionally, our research showed that players prefer virtual pets to take on the role of companions rather than mentors or enemies, which differs from the suggestions made by previous researchers for non-player character roles in narrative settings.
352
+
353
+ These findings emphasize the importance of designers considering players' preferences for virtual pets' type, style, and interaction role when designing virtual pets. Designers should aim to create virtual pets with cute and endearing features while also incorporating traits that add depth to their personalities.
354
+
355
+ § 7.3.2 INCORPORATING PLAYER'S PERSONALITY AND VIRTUAL PETS' PERSONALITY IN DESIGNING VIRTUAL PETS
356
+
357
+ While prior studies have examined pets' personalities, finding that owners were more satisfied with cats that were high in agreeableness and low in neuroticism [19], our research delved into the link between pet and owner personalities, focusing on virtual pets. Specifically, our study found that participants preferred virtual pets with personalities similar to their own, as indicated by the quantitative results of study two. Interestingly, our analysis also revealed that individual personality differences might influence how participants perceive and rate virtual pets. For instance, those with extraversion traits showed a preference for a virtual pet with slim legs and thin bodies (i.e., G2) that was more extroverted.
358
+
359
+ Our qualitative findings further supported this, showing that individuals with agreeableness traits in their personalities favored friendly and less aggressive virtual pets. Additionally, our results mirrored those of previous researchers, who noted a positive link between owner dominance and cat dominance, extraversion, and neuroticism [19]. However, unlike their work on natural pets, our study examined this relationship among virtual pets. Overall, our findings suggest that pet personality is an essential factor to consider in designing virtual pets and that personality traits of both the pet and owner may influence user preferences and satisfaction.
360
+
361
+ Our investigation into players' preferences was partly inspired by prior work identifying three user types based on preferences and gameplay styles in VR pet games [48]. In addition, we examined how individual differences in players' in-game behavior influenced their perception and ratings of virtual pets. We found that participants who played the game with a social or skill development purpose rated virtual pets with extraversion traits higher, as identified through interviews. These pets were characterized by cold-tone skin colors, small heads, and small ears (G1). Our findings suggest that considering players' in-game motives is a promising approach for designing virtual pet characters that align with users' preferences and engagement styles.
362
+
363
+ § 7.4 CONTROLLABLE AND COST-EFFECTIVE MODEL GENERATION USING RECOMBINATION AND REPAINTING TO IMPROVE USER RESPONSE DATA QUALITY
364
+
365
+ In this section, we analyze the effectiveness of our machine-generated pet pictures, which were used to collect user responses in our experiment. Our generator uses a recombination and repainting approach to produce high-quality results, and the method is relatively inexpensive compared to 3D generative neural networks. While the generative neural network is used only minimally, for texturing, due to insufficient training data, using it for random texturing substantially reduces poor generation results. Relying solely on traditional generation methods for shape and color can lead to less creative and predictable results; at the same time, incorporating new colors and textures makes it difficult for people to identify the origin of each model part. In our study, we conducted a semi-structured interview in which participants were unaware that the cat pictures were machine-generated, and all participants noted the excellent convergence of the cats' bodies and other parts.
366
+
367
+ § 7.5 LIMITATION AND FUTURE WORK
368
+
369
+ This study has several limitations that we need to address in future work. Firstly, the limited training sample we used to generate cat pictures with machine learning techniques may have resulted in a lack of diversity in the appearance and personality traits of the voxel pets. To overcome this limitation, we plan to create and label diverse pet body parts with different visual cues to better capture a broader range of personality traits.
370
+
371
+ Secondly, our design focused only on the static voxel style; it lacked animation and sound, which may have hindered accurate perception of the pets' personalities. To improve the accuracy of personality perception and enhance the user experience, we plan to add animated clips by rigging the voxel pet models and to add sounds related to pet behaviors when designing virtual pets.
372
+
373
+ Furthermore, we plan to use our current voxel pet characters as artificial companions and integrate their personality traits to design an interactive virtual pet game. The game aims to reduce anxiety and stress levels as an intervention tool. It provides users with a fun and engaging way to interact with virtual pets, potentially improving their mental health and well-being.
374
+
375
+ In conclusion, although our study provides valuable insights into the link between virtual pets' appearances and their perceived personality traits, several limitations must be addressed in future work. By enriching our sample for generating voxel pets, involving animations and sound, and developing an interactive virtual pet game, we hope to provide users with a more authentic and engaging virtual pet experience.
376
+
377
+ § 8 CONCLUSION
378
+
379
+ In conclusion, our study aimed to address the gaps in current research on virtual pets and their potential for promoting mental health and enhancing skill development in individuals who cannot keep real pets. We focused on creating virtual pets with personality traits and exploring how players perceive their personalities. Our research found that appearance variations affect users' perceptions of virtual pet personalities, and that players prefer virtual pets such as cats and dogs with neurotic personalities and cute appearances. We also developed a novel method for game character design that combines traditional methods with machine learning techniques. Our study provides several design implications for virtual pet character design and highlights the potential of using voxel pets' appearances as virtual companions to enhance the mental well-being of young individuals by reducing anxiety levels through interactive engagement with virtual pets. Overall, our study contributes to the understanding of factors contributing to the perception of personalities in virtual pets and of how we can design artificial companions that mimic and respond to these traits.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/AME0sErWj0j/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,411 @@
1
+ # FossilSketch: A novel interactive web interface for teaching university-level micropaleontology
2
+
3
+ Category: Research
4
+
5
+ ## Abstract
6
+
7
+ Although the demand for geoscientists is projected to grow and the current population of experts is aging, few students are trained in micropaleontology. Applications of micropaleontology in solving geologic problems are diverse, and include such areas of research as estimating sea level fluctuations, understanding the causes of past climate upheavals, and finding economically important resources like oil and gas. To aid in teaching micropaleontology in undergraduate classrooms, we developed FossilSketch, a web-based interactive learning tool for the basics of micropaleontology. FossilSketch teaches microfossil identification for Foraminifera and Ostracoda through automatically assessed sketch-based exercises and other practice activities. Results from deploying the system in an undergraduate geology class indicate that FossilSketch benefits both students and instructors: students find it more engaging and less stressful than traditional methods, and instructors see their course-preparation workload reduced.
8
+
9
+ Index Terms: H.5.2 [User Interfaces]: User Interfaces-Graphical user interfaces (GUI); H.5.m [Information Interfaces and Presentation]: Miscellaneous
10
+
11
+ ## 1 INTRODUCTION
12
+
13
+ Micropaleontology is a critical tool for determining the ages of sedimentary rocks for both industrial and scientific applications [26]. Microfossil species are sensitive to specific environmental parameters and are often used to reconstruct past changes in ocean temperature, coastal sea-level, and seafloor oxygenation [34]. Further, microfossils are used in modern, real-time, environmental monitoring because they respond quickly to environmental change [9]. Additionally, micropaleontology can be used in oil exploration to locate reservoirs [38].
14
+
15
+ Despite the importance of micropaleontology for geoscience research and industry, most geoscience students are not exposed to this topic. Micropaleontology is rarely taught at the undergraduate level because of the number of contact hours necessary and the amount of instructor feedback required to train students at the necessary level of detail. Thus, although the field of geology has broadened over the last several decades, micropaleontology is being dropped from the curriculum and students' training in the field has correspondingly declined [4, 46]. This trend becomes problematic as experts in micropaleontology are aging and fewer students are being trained in microfossil identification techniques [39].
16
+
17
+ To enable and enhance the training of undergraduates in the basics of micropaleontology in remote, hybrid, and in-class conditions, we developed FossilSketch. FossilSketch, depicted in Figure 1, is an interactive, intelligent digital tool that introduces students to micropaleontology through educational videos, mini-games, sketch-based identification exercises, and assessments focused on applications of microfossils in the geosciences. Sketch recognition algorithms are used to automatically evaluate sketches and provide feedback to help students internalize various morphological features and identify microfossils from two common microfossil groups, Foraminifera and Ostracoda. FossilSketch reduces the burden on the instructors by providing feedback for these activities and games to help students learn and practice micropaleontology skills. This paper outlines the design of FossilSketch as well as its impact from being deployed in an undergraduate geology classroom.
18
+
19
+ ![01963e08-e1ab-7045-aa43-1f4a105cad6d_0_924_407_721_415_0.jpg](images/01963e08-e1ab-7045-aa43-1f4a105cad6d_0_924_407_721_415_0.jpg)
20
+
21
+ Figure 1: A participant using the FossilSketch educational web app.
22
+
23
+ ## 2 RELATED WORKS
24
+
25
+ ### 2.1 Geoscience Educational Tools
26
+
27
+ Geosciences have been rapidly adopting online and remote-based educational tools over the last five years, spanning various online resources, pedagogical practices, and course curricula; examples include high-resolution digital imaging for mapping and documenting geological outcrops, 3D virtual simulations, digitization of fossil collections, and augmented reality field trip games for smartphones and tablets (e.g., [3, 6-8, 11]).
28
+
29
+ Successful implementations of software in geoscience education include sketching software, virtual microscopes, and field experience simulations [8, 18, 31]. For example, CogSketch is a sketching-based application with a series of introductory geoscience worksheets on key geoscience concepts [18] that aids students in solving discipline-specific spatial problems while providing instructors with insights into student thinking and learning.
30
+
31
+ As for micropaleontology, researchers note a lack of human experts and a decline in micropaleontology training [10, 25, 32]. That said, most software development has been aimed at automated identification of microfossils, with the most recent approaches focusing on machine learning and on using 3D models for planktic and benthic foraminifera identification [10, 25, 32]. Several large microfossil databases have been built [13-15, 43]. However, these online resources are designed for advanced users and are difficult to use for entry-level specialists and students without prior instruction on microfossils.
32
+
33
+ Until recently, there were no applications supporting active learning in micropaleontology. FossilSketch is the first application that supports active learning in undergraduate micropaleontology [references redacted for anonymization]. To summarize, there is a clear need and growing interest in developing automated AI tools for geoscience education and microfossil identification. To address this need we designed FossilSketch, a novel, universally accessible, and academically rigorous educational tool for undergraduate geoscience education.
34
+
35
+ ### 2.2 Digital Sketch Recognition in the Classroom
36
+
37
+ Sketching activities in the classroom have pedagogically been linked to enhanced student creativity and learning [33, 35, 40, 50, 52]. Researchers find that sketching benefits learning in a wide range of disciplines, from human anatomy and biology to engineering, geography, and math [5, 18, 19, 37, 44]. Studies have confirmed that information retention and learning outcomes are significantly improved when engaging in drawing and writing activities vs. using a keyboard as the primary input modality [33]. To that point, sketch-based learning tools have been linked to higher retention of information and improved skill compared to students who do not learn with sketch-based activities [21, 53].
38
+
39
+ Early gesture recognition systems developed by Rubine [45] have led to improved recognition systems, including template-matching algorithms from the "Dollar" family of recognizers [1, 2, 48, 49, 54] that produced lightweight recognition systems easily added to existing software. The "Dollar" recognizers perform classification by using different methods of calculating the distance between user-generated input and several samples of trained data. Although these recognizers were designed for classification rather than for grading sketch accuracy, we use this work as a basis for our recognition system due to the synergy in design. Both feature-based classification techniques and template matching techniques were later expanded into more robust systems for scaffolded recognition via systems like PaleoSketch [41] and LADDER [22], the second of which is notable for its integration of domain-specific shapes to better describe relationships between sketch properties to assist in recognition. More recent works like nuSketch [17] and COGSketch [16] integrate sketch recognition algorithms into educational tools to assist the learning experience, with measurable success.
40
+
41
+ Mechanix [36, 47], Newton's Pen [30] and Newton's Pen II [29], Physics Book [12], and SketchTivity [23, 51] are systems specifically written to leverage the educational advantage of drawing and sketching in the core interactions of their tools. Indeed, these systems serve as the primary conceptual basis from which FossilSketch is designed. We aimed to adapt the educational techniques presented by these tools to the domain of micropaleontology in the classroom. This led to a variety of changes and design considerations in the teaching approach outlined in the next section.
42
+
43
+ ## 3 DESIGN
44
+
45
+ ### 3.1 Design Considerations
46
+
47
+ FossilSketch is a web-based educational tool for teaching students techniques for identifying microfossils. FossilSketch focuses on Foraminifera and Ostracoda due to their utility and accessibility in undergraduate lab settings. Foraminifera and Ostracoda are two of the most commonly used groups of microfossils in industrial, environmental, and scientific applications. The morphology of species in both groups is closely related to the environments in which they live [20, 27, 42], and these two groups are often used in species-specific geochemical studies [24]. However, accurate species identification is required to use this micropaleontological tool effectively. For additional context, Foraminifera are amoeboid protists with shells made of calcium carbonate or agglutinated sediment grains and are often abundant in marine environments [4], and Ostracoda are micro-crustaceans with a bivalved calcareous carapace that are found in all aquatic environments, from freshwater lakes to the deep sea [4]. These are also some of the larger microfossils, which allows students to view them with standard stereoscopes.
48
+
49
+ To use industry-standard packages and tools, the website is built using the Next.js framework and a MySQL database. Educational materials for FossilSketch were developed to supplement various geoscience courses in the College of Arts & Sciences at a large R1 university. Traditionally, undergraduate students learn about micropaleontology through lectures, diagrams, specimens viewed through a stereoscope, and hand-sized models in upper-level paleontology courses. FossilSketch educational materials include the following: 1) educational videos; 2) instructional mini-games; 3) microfossil identification exercises; and 4) microfossil assemblage reconstruction exercises. All four types of activities consist of content specifically created for FossilSketch, based on real-life scientific study cases, and tailored to support the educational exercises in traditional and FossilSketch-based courses.
50
+
51
+ Exercises were developed based on the courses' learning objectives, the microfossil collections available, and the expertise of [co-author names redacted for review]. FossilSketch can be used in courses of different levels, from lower-level courses for non-geology majors to upper-level courses for geology majors; thus, the difficulty and number of activities included in a class vary depending on the teaching goals and the activities assigned to students. Modules and microfossils can be added or removed depending on the class or activity in which FossilSketch is deployed. The self-contained nature of the exercises and the flexibility of the landing page interface offer the versatility to rearrange the website experience depending on the course learning objectives.
52
+
53
+ ### 3.2 Educational Videos
54
+
55
+ Educational videos were created specifically for FossilSketch to provide introductory information that helps contextualize the concepts covered in the rest of FossilSketch's activity types [links redacted for anonymization]. When users click on these modules, an overlay with an embedded YouTube link is displayed. Students are free to control playback with the standard embedded YouTube video controls and captions, and the overlay can be dismissed at any time by clicking outside the video area. No progress data is recorded for this type of activity.
56
+
57
+ FossilSketch is intended to augment instructor lectures, meaning the videos are not intended to serve as a replacement for lecture material, as is usually the case with typical instructional videos in an online learning interface. The FossilSketch system uses instructional videos to provide the information students need to engage with the rest of the modules if they have not yet received instructor lectures, while emphasizing the concepts most directly relevant to the activities if they have attended in-depth lectures in the classroom.
58
+
59
+ ### 3.3 Instructional Mini-Games
60
+
61
+ FossilSketch integrates various kinds of interactive instructional tools. In order to improve student comprehension of microfossil identification, we broke identification tasks into mini-games that students could repeat to develop mastery. Each mini-game consists of one or more types of interactions intended to highlight the visual-morphology aspect of learning about microfossil identification. We currently have three matching games and one orientation game.
62
+
63
+ #### 3.3.1 Matching Games
64
+
65
+ Matching games require participants to match morphological features, such as the outline shape for Ostracoda, or the morphotype and type of chamber arrangement for Foraminifera. At the beginning of the game, students are presented with a reference image that lists each morphotype along with a sketched example, and students are able to return to this reference image, when needed, by clicking on the zoomed-out image in the bottom right corner of the screen. When the game starts, the screen displays a small number of draggable "discs" or rectangular "cards" with actual microfossil photomicrographs that the user can move into slots with sketched categories for each feature used in the game. At the moment, three different mini-games use this kind of interaction: Ostracoda lateral outline identification, Foraminifera chamber arrangement, and Foraminifera morphotype identification.
66
+
67
+ ![01963e08-e1ab-7045-aa43-1f4a105cad6d_2_155_147_1480_890_0.jpg](images/01963e08-e1ab-7045-aa43-1f4a105cad6d_2_155_147_1480_890_0.jpg)
68
+
69
70
+
71
+ Figure 2: FossilSketch mini-games
72
+
73
+ All matching games include three rounds, with each round contributing to a final star score. The Foraminifera chamber arrangement mini-game randomly pulls images of Foraminifera from the database for matching to the corresponding chamber arrangement types, with each round of the game having four cards to match. In the morphotype mini-game, the number of draggable items and slots increases from 4 in the first round to 8 in the third round to increase difficulty. If the answer is incorrect, FossilSketch provides feedback by showing a hint or indicating which of the cards were matched incorrectly, and the user can try again to submit a correct answer. Students receive a star rating from one to three based on how many rounds they got correct on their first attempt.
74
+
75
+ #### 3.3.2 Orientation Game
76
+
77
+ The orientation game integrates a rotation interaction to help students gain an understanding of how to correctly orient the ostracod valve for identification. An ostracod valve has four sides: dorsal, ventral, posterior, and anterior margins/sides. This game starts with a general description of each of these margins to help students gain an intuition of how to identify each side of an ostracod. The user is tasked with rotating an ostracod to its position with the dorsal side up and all of its sides correctly labeled. To simplify the interaction, students rotate in one direction 90 degrees at a time by clicking or tapping once on the ostracod that is displayed in the center of the screen. When the student believes that the ostracod is oriented correctly, they submit their answer by selecting the "Finished" button on the center bottom of the screen.
78
+
79
+ As in the matching games, the orientation games are divided into three rounds. In this case, each round consists of one ostracod valve that needs to be rotated into the correct orientation. Answers are marked "correct" if they are rotated correctly the first time. If the submitted answer was incorrect, FossilSketch provides a hint on how to orient the valve correctly. Students are encouraged to use the knowledge gained from the hint by correcting their wrong answers. The star rating is based on the first submitted attempt for each round.
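+
+ A minimal sketch of the shared star-scoring rule, under the assumption that a round counts toward the score only when its first submitted attempt is correct (the one-star floor is an assumption based on the stated one-to-three range):
+
+ ```python
+ # Star score over the three rounds of a mini-game.
+ def star_rating(first_attempt_correct):
+     # first_attempt_correct: e.g. [True, False, True] for three rounds
+     return max(1, sum(1 for ok in first_attempt_correct if ok))
+ ```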
80
+
81
+ ![01963e08-e1ab-7045-aa43-1f4a105cad6d_2_926_1374_719_356_0.jpg](images/01963e08-e1ab-7045-aa43-1f4a105cad6d_2_926_1374_719_356_0.jpg)
82
+
83
+ Figure 3: Menu of the morphotype ID exercises. Students pick from any of the unidentified morphotypes marked with a "?", and afterwards are shown their performance on a 3-star rating system.
84
+
85
+ ### 3.4 Identification Exercises
86
+
87
+ In micropaleontology, microfossils are picked from sediment samples and the obtained variety of different species represents an assemblage characteristic of the sample and may point to the environmental setting or geologic age of the sample. A micropaleontologist would identify the species of microfossils in this assemblage based on their morphology, or their characteristic features. Primarily, FossilSketch offers a scaffolded learning experience to guide students through the steps needed to identify microfossils and their morphological characteristics.
88
+
89
+ Students are first presented with the menu depicted in Figure 3, where they can select to identify a specimen of Ostracoda or Foraminifera to genus level, or a morphotype of Foraminifera. The Foraminifera identification steps to genus level can be seen in Figure 4 and are the following: 1) sketch the outline of the foraminifer image on the left; 2) sketch the outline of the last chamber on the image on the left; 3) select the type of shell this foraminifer has; 4) choose the overall shape of the organism from a menu; 5) choose the shape of the chambers; 6) choose their number; 7) choose the type of chamber arrangement from the menu; 8) select the aperture location from the menu; 9) select the aperture shape from a menu; and 10) identify a genus based on the selected features. The Ostracoda genus identification steps are shown in Figure 5 and include: 1) sketch the maximum length of the valve; 2) sketch the maximum height of the valve; 3) identify right vs. left valve; 4) sketch the outline of the ostracod valve; 5) choose the type of outline from the menu; 6) measure the approximate size of the valve and choose the size range from the menu; 7) choose the types of ornamentation and select any additional features when present; and 8) identify an ostracod genus based on the selected features.
90
+
91
+ The types of interactions within each exercise are described below:
92
+
93
+ #### 3.4.1 Sketching Interactions
94
+
95
+ Sketching (steps 1-2 for Foraminifera, and steps 1-2 and 4 for Ostracoda) helps students retain and understand the various shapes and outlines they observe in different microfossils. It is the primary method of interaction after which the project is named. Sketching interactions integrate functionality from a library called paper.js to deliver flexible drawing interactions. Although the system is intended to be used with styli and touch to most naturally resemble a sketching activity, it is also possible to draw with a mouse or trackpad. Drawing interactions are usually integrated as the first steps of both kinds of identification exercises, as the overall shape of the sample is critical in identifying the microfossil.
96
+
97
+ The FossilSketch system checks for correctness using a template matching algorithm, outlined in Algorithm 2. The template recognizer coded specifically for FossilSketch uses the Hausdorff distance metric to determine the accuracy against the key for each microfossil. Before recognition, both the template and the input sketch are resampled to a lower sampling rate with roughly equidistant points, as outlined in Algorithm 1. The formula for calculating the interspace distance is given in Eq. 1, where c = 256 is a constant empirically derived to adjust the distance between the points for optimal calculation of the distance metric. The algorithm then iterates through each point in the input sketch, comparing it with the closest point of the template sketch and calculating the Euclidean distance between the two. The total distance is accumulated across all compared points, and the cumulative sum is the overall "distance" between a template and the student input (see Figure 6). If the average deviation of the points is greater than the pixel width of the canvas divided by a constant, the algorithm concludes that the input sketch is too different from the template sketch. This constant was empirically determined after internal testing to match the desired student experience; students are meant to provide a relatively accurate, but not perfect, recreation of the template.
98
+
99
+ The template sketches are provided by [co-author names redacted for review] and coded directly into each foraminifer or ostracod image. Every foraminifer has a database entry containing template sketch data for the outline of its left view (see Figure 4 step 1) and its last chamber (Figure 4 step 2). For every ostracod in the database, there is template sketch data for the outline, maximum length, and maximum height.
100
+
101
+ $$
102
+ S = \frac{\sqrt{\left( x_{m} - x_{n}\right)^{2} + \left( y_{m} - y_{n}\right)^{2}}}{c},\quad c = 256 \tag{1}
103
+ $$
104
+
105
+ Algorithm 1 Resampling Technique
106
+
107
+ ---
108
+
109
+ Require: Point list path, distance $S$
110
+
111
+ Ensure: Re-sampled point list out
112
+
113
+ $D \leftarrow 0$
114
+
115
+ for $i$ in path do
116
+
117
+ BetweenDist $\leftarrow \sqrt{{\left( {x}_{i + 1} - {x}_{i}\right) }^{2} + {\left( {y}_{i + 1} - {y}_{i}\right) }^{2}}$
118
+
119
+ $D \leftarrow D +$ BetweenDist
120
+
121
+ if $D > S$ then
122
+
123
+ $D \leftarrow$ BetweenDist
124
+
125
+ append point $\left( x_{i}, y_{i}\right)$ to out
126
+
127
+ end if
128
+
129
+ end for
130
+
131
+ ---
132
+
133
+ Algorithm 2 Compare Sketches
134
+
135
+ ---
136
+
137
+ Require: Student Spath, template Tpath
138
+
139
+ Ensure: Boolean result (True if the sketch is too different from the template)
140
+
141
+ totalDeviation $\leftarrow 0$
142
+
143
+ for $i$ in Spath do
144
+
145
+ closestDistance $\leftarrow$ INF
146
+
147
+ closestIndex $\leftarrow 0$
148
+
149
+ for $j$ in Tpath do
150
+
151
+ tempDist $\leftarrow$ distance between ${\operatorname{Spath}}_{i}$ and ${\operatorname{Tpath}}_{j}$
152
+
153
+ if tempDist < closestDistance then
154
+
155
+ closestDistance $\leftarrow$ tempDist
156
+
157
+ closestIndex $\leftarrow j$
158
+
159
+ end if
160
+
161
+ end for
+
+ totalDeviation $\leftarrow$ totalDeviation + closestDistance
162
+
163
+ end for
164
+
165
+ avgDeviation $\leftarrow \frac{\text{totalDeviation}}{\text{length of Spath}}$
166
+
167
+ cwidth $\leftarrow$ pixel width of canvas
168
+
169
+ if avgDeviation $> \frac{\text{ cwidth }}{70}$ then
170
+
171
+ result $\leftarrow$ True
172
+
173
+ else
174
+
175
+ result $\leftarrow$ False
176
+
177
+ end if
178
+
179
+ ---
180
+
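+ For concreteness, the listing below gives a direct TypeScript transcription of Algorithms 1 and 2; the `Point` type, function names, and the `canvasWidth` parameter are illustrative rather than the deployed FossilSketch code, and Eq. 1 supplies the resampling distance $S$.
+
+ ```typescript
+ // Illustrative TypeScript transcription of Algorithms 1 and 2
+ // (not the deployed FossilSketch source).
+ type Point = { x: number; y: number };
+
+ const dist = (a: Point, b: Point): number =>
+   Math.hypot(a.x - b.x, a.y - b.y);
+
+ // Eq. 1: interspace distance derived from two reference points m and n.
+ const interspace = (m: Point, n: Point, c = 256): number => dist(m, n) / c;
+
+ // Algorithm 1: emit a point whenever the accumulated arc length exceeds S.
+ function resample(path: Point[], S: number): Point[] {
+   const out: Point[] = [];
+   let D = 0;
+   for (let i = 0; i + 1 < path.length; i++) {
+     const betweenDist = dist(path[i], path[i + 1]);
+     D += betweenDist;
+     if (D > S) {
+       D = betweenDist;
+       out.push(path[i]);
+     }
+   }
+   return out;
+ }
+
+ // Algorithm 2: average nearest-point deviation of the student sketch from
+ // the template, thresholded at canvasWidth / 70; true means "too different".
+ function tooDifferent(spath: Point[], tpath: Point[], canvasWidth: number): boolean {
+   let totalDeviation = 0;
+   for (const p of spath) {
+     let closestDistance = Infinity;
+     for (const q of tpath) {
+       closestDistance = Math.min(closestDistance, dist(p, q));
+     }
+     totalDeviation += closestDistance;
+   }
+   const avgDeviation = totalDeviation / spath.length;
+   return avgDeviation > canvasWidth / 70;
+ }
+ ```
+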
181
+ #### 3.4.2 Identification of Features from a Menu
182
+
183
+ Identification of features (steps 3-5 for Foraminifera, and steps 3 and 5-6 for Ostracoda) is presented to students as a horizontal multiple-choice menu along the bottom of the screen. During each of these steps, the student is asked to identify one of several characteristic features of the microfossil. For instance, the student might be asked "what is the overall shape of the organism?" and the possible answers might include "vase-like", "convex", "low-conical", "spherical", and "arch". Each option is accompanied by a sample sketched outline of the shape; it is important to note these are sketched examples and not photorealistic depictions of the choices. The student is tasked with remembering the particular physical properties of each characteristic feature, as well as matching the pictures with the closest choice from the menu, of which exactly one is correct. In this part of the exercise, the student does not receive immediate feedback on their submitted selections; instead, all of these answers are summarized for the student to use when making the final identification from the database of Foraminifera and Ostracoda genera.
184
+
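+ One way to record these menu selections so the summary step can consume them is a simple per-exercise answer map; the shape below is an illustration, not the actual FossilSketch schema.
+
+ ```typescript
+ // Selections accumulate per step; no per-step feedback is given, so the
+ // map is only read back at the final summary screen.
+ const studentAnswers: Record<string, string> = {};
+
+ function recordSelection(feature: string, choice: string): void {
+   studentAnswers[feature] = choice; // revisiting a step overwrites the choice
+ }
+
+ recordSelection("overallShape", "vase-like");
+ recordSelection("chamberShape", "spherical");
+ ```
+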
185
+ ![01963e08-e1ab-7045-aa43-1f4a105cad6d_4_165_149_1466_647_0.jpg](images/01963e08-e1ab-7045-aa43-1f4a105cad6d_4_165_149_1466_647_0.jpg)
186
+
187
+ Figure 4: Foraminifera Identification Steps (from left to right, top to bottom): 1) Sketch the Outline, 2) Sketch the Last Chamber, 3) Select the Shell Type, 4) Select the Overall Shape, 5) Select the Chamber Shape, 6) Select the Number of Chambers, 7) Select the Chamber Arrangement, 8) Select the Aperture Location, 9) Select the Aperture Shape, 10) Identify the Genus
188
+
189
+ ![01963e08-e1ab-7045-aa43-1f4a105cad6d_4_165_929_1466_433_0.jpg](images/01963e08-e1ab-7045-aa43-1f4a105cad6d_4_165_929_1466_433_0.jpg)
190
+
191
+ Figure 5: Ostracod Identification Steps (from left to right, top to bottom): 1) Sketch the Max Length, 2) Sketch the Max Height, 3) Identify Right vs. Left Valve, 4) Sketch the Outline, 5) Select the Valve Shape, 6) Select the Approximate Size, 7) Select the Ornamentation, 8) Identify the Genus
192
+
193
+ #### 3.4.3 Pointing Interaction
194
+
195
+ Pointing interactions (step 5 for Foraminifera morphotype ID) are a simplified form of "sketching interactions" that require students to click once in a general area of interest, and FossilSketch checks if the identified location is correct. Specifically, this interaction is used to identify the general location of the aperture of a given foraminifer. The student is asked to click once in the region where they believe the aperture is. Each foraminifer in the FossilSketch database contains data on a rectangular region that points to the general area of its aperture. When the student clicks "Submit" after identifying the aperture area, FossilSketch checks to see if the location of the click is within the predefined rectangular area. If it is, the answer is marked as correct. The location of the aperture is only used for identifying a foraminifer's morphotype.
196
+
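+ The check itself reduces to a point-in-rectangle test, as in this minimal sketch; the field names for the stored aperture region are assumptions for illustration.
+
+ ```typescript
+ // Hypothetical shape of the per-foraminifer aperture region stored in the
+ // database, and the point-in-rectangle test the click check corresponds to.
+ interface ApertureRegion {
+   x: number;      // top-left corner of the rectangle, in image coordinates
+   y: number;
+   width: number;
+   height: number;
+ }
+
+ function clickHitsAperture(clickX: number, clickY: number, r: ApertureRegion): boolean {
+   return (
+     clickX >= r.x && clickX <= r.x + r.width &&
+     clickY >= r.y && clickY <= r.y + r.height
+   );
+ }
+ ```
+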
197
+ #### 3.4.4 Summary Screen
198
+
199
+ The summary screen (step 10 for Foraminifera and step 8 for Ostracoda) is the last step of each identification exercise, asking the student to draw on their observations and make the final selection of the genus or morphotype. Each morphotype or genus has a list of characteristic features, and each feature correctly marked during the identification steps is shown with a green check mark based on the student's answers. The list of morphotypes or genera on the summary screen is ranked by the number of properties matching the student's answers. If the student's answers are correct, the choice is easy, since the correct item has the most check marks and is listed first. Additionally, a picture of each morphotype or genus is included, letting students double-check whether their best-ranked choice is the most accurate. This design helps students develop self-assessment skills by checking whether their choices match up with any given morphotype or genus. Students are able to revisit any of the previous steps at any time, and this final choice is a good motivation to do so if they notice their prior choices did not yield a definitive conclusion. The summary also lets students see properties that may be common between some morphotypes or genera, but each foraminifer and ostracod specimen has only one correct final answer.
200
+
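+ A minimal sketch of the ranking logic described above might look like this; the data shapes are assumptions, not the actual FossilSketch schema.
+
+ ```typescript
+ // Rank candidate genera/morphotypes by how many characteristic features
+ // match the student's recorded answers (illustrative data shapes).
+ interface Candidate {
+   name: string;
+   features: Record<string, string>; // e.g. { chamberShape: "spherical", ... }
+ }
+
+ function rankCandidates(
+   candidates: Candidate[],
+   studentAnswers: Record<string, string>
+ ): { name: string; matches: number }[] {
+   return candidates
+     .map((c) => ({
+       name: c.name,
+       // Count features where the student's selection equals the key value.
+       matches: Object.entries(c.features).filter(
+         ([feature, value]) => studentAnswers[feature] === value
+       ).length,
+     }))
+     .sort((a, b) => b.matches - a.matches); // best-matching candidate first
+ }
+ ```
+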
201
+ ![01963e08-e1ab-7045-aa43-1f4a105cad6d_5_247_156_528_428_0.jpg](images/01963e08-e1ab-7045-aa43-1f4a105cad6d_5_247_156_528_428_0.jpg)
202
+
203
+ Figure 6: To evaluate answers, FossilSketch resamples and overlays both the student input and instructor-provided sketch, and a total distance metric is calculated by summing the Euclidean distance between sampled points.
204
+
205
+ ### 3.5 Assemblage exercise
206
+
207
+ One of the goals of this interface is to demonstrate to students the various applications of microfossils in the geosciences. Once students gain mastery of microfossil identification through the mini-games and identification exercises, they proceed to the final type of exercise and assessment, where they apply their knowledge to reconstruct environments from an assemblage of different microfossils. In this exercise, students view microfossil assemblages of approximately 20 foraminifer or ostracod individuals and identify the foraminiferal morphotypes or Ostracoda genera present. These assemblages imitate an actual microfossil "slide" containing an assemblage of Foraminifera or Ostracoda, as seen under a microscope. Students are asked to identify how many specimens of each foraminiferal morphotype or ostracod genus are present in the slide. Before starting the exercise, students can view a screen summarizing the foraminiferal morphotypes or ostracod genera and how they can be used to interpret environmental properties, such as the oxygenation or salinity of the water. The exercise includes three rounds and a summary. In each round, the student identifies the different genera or morphotypes and selects the number of each from the menu on the right side of the screen; the intent is that students draw on their knowledge from the previous exercises to quickly identify the morphotypes or genera they see. For the ostracod assemblages, the menu includes both genera that are and genera that are not present in the assemblage. For the foraminiferal morphotypes, the assemblage includes two morphotypes to select from and an "Other" category. To answer correctly, the student must provide a correct count for all categories in an assemblage, i.e., for both of the morphotypes or genera and the "Other" category.
208
+
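+ The correctness rule above (every category count must match) could be expressed as in this minimal sketch; the `Counts` shape and category labels are illustrative.
+
+ ```typescript
+ // An answer is correct only if every category count matches the key exactly.
+ type Counts = Record<string, number>; // e.g. { "Cylindrical-tapered": 7, "Other": 4 }
+
+ function assemblageCorrect(student: Counts, key: Counts): boolean {
+   return Object.keys(key).every(
+     (category) => student[category] === key[category]
+   );
+ }
+ ```
+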
209
+ Both assemblage exercises conclude with a summary page where the student is asked to make an overall conclusion about the environment based on the morphotypes and genera present in the assemblages. For instance, the Foraminifera morphotype assemblage exercise uses assemblages to determine bottom-water oxygenation. It has been shown that environments where cylindrical- and flat-tapered morphotypes are found in abundance usually have low oxygenation [28]. The students are asked to rank each assemblage by relative oxygenation level, which they should be able to do by considering the relative abundance of cylindrical-tapered and flat-tapered morphotypes found in each of the three assemblages. Similarly, for Ostracoda genera, students count the number of individuals of each genus and determine the bottom-water salinity indicated by each of the assemblages. If a student makes a mistake, FossilSketch provides feedback by showing which specimens correspond to which genera and morphotypes, so one can correct their response. These exercises show how microfossil research is applied and assess the microfossil identification skills learned and honed across all exercises of the FossilSketch system.
210
+
211
+ ## 4 EVALUATION
212
+
213
+ To test the efficacy of FossilSketch as an effective means of teaching micropaleontology, we conducted a case-control experiment in a "Paleontology and Geobiology" course over two different semesters. As the control, students were taught micropaleontology without the use of FossilSketch in the Spring 2020 semester. As the case, students were taught micropaleontology using FossilSketch in the Spring 2023 semester. We describe the experience of the students from each of these semesters in more detail in the following subsections. As a side note, FossilSketch was used in other semesters in between our control and case groups; however, the tech stack for FossilSketch was completely overhauled prior to its deployment in Spring 2023.
214
+
215
+ ### 4.1 Spring 2020
216
+
217
+ During the Spring 2020 semester, students participated in three-hour-long laboratory sessions consisting of several specimen-based laboratory activities. Students used 3D physical models and labeled SEM images to study the main morphological features of various Foraminifera and Ostracoda, respectively. After completing these activities, students were asked to select a microfossil and provide a labeled sketch of the specimen, identify its morphological features, and ultimately identify its genus. Students were encouraged to work in teams and were allowed to ask the teaching assistant or professor any questions they had.
218
+
219
+ ### 4.2 Spring 2023
220
+
221
+ During the Spring 2023 semester, students were asked to use FossilSketch along with the in-person specimen-based laboratory activities. Specifically, students were asked to watch the educational videos, play each of the four mini-games, and identify at least three different Ostracoda and three different Foraminifera. After completing these activities, students were asked to select a microfossil and provide a labeled sketch of the specimen, identify its morphological features, and ultimately identify its genus.
222
+
223
+ ### 4.3 Participants
224
+
225
+ A total of 86 students, two TAs, and one instructor (who taught both courses) consented to and took part in the study, of whom 51 students represent the control group and 35 represent the test group. The instructor is an author on this paper. Before data collection and use of the FossilSketch software, participants were given a quick overview of the project and signed consent forms (IRB2019-1218M, expiration date 02/05/2026).
226
+
227
+ ## 5 RESULTS
228
+
229
+ In both semesters we conducted surveys and focus groups with the students. We also conducted semi-structured interviews with the graduate TAs and the professor to get insights into their experience with FossilSketch. We discuss their feedback in the following subsections.
230
+
231
+ ### 5.1 Student Feedback
232
+
233
+ After using FossilSketch, students completed an engagement survey where they could give feedback about their experience, what they found effective, and what they found difficult. This survey contained open-ended questions regarding their expectations in the course, how they felt about the micropaleontology activities, and what strategies they employed to complete the coursework. To determine the impact of FossilSketch on student engagement and enjoyment, we conducted a deeper analysis of the responses to the question "Did you enjoy the micropaleontology activities in this class? Which ones? And what about them were enjoyable?". We coded the answers to this question based on whether the tone was positive, neutral, or negative, as students used this question either to describe things they enjoyed or to complain about the things they did not. In the Spring 2020 semester, there were 25 answers to this question, with 11 positive, 8 neutral, and 6 negative. In the Spring 2023 semester, there were 22 answers, with 18 positive, 2 neutral, and 2 negative. A $\chi^2$ analysis showed that the distributions of answers are statistically significantly different, with $p < 0.05$. As the two main changes were the increase in positive responses and the decrease in neutral responses, we hypothesize that FossilSketch won over students who had less initial buy-in for learning about microfossils. There were students who were notably passionate and critical about learning the material in both groups, which can be expected in any course. Many students were being exposed to microfossils for the first time, so they had little expectation of the utility of learning this tool. With the traditional methods, some of these students left the unit lukewarm, saying that they did not hate the material but also did not enjoy it. By contrast, most students who used FossilSketch named specific features they liked the most, and several also described the traditional lab activities that FossilSketch augments. In short, FossilSketch was more effective in engaging students to learn about micropaleontology than traditional methods alone.
234
+
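+ As a check, the test statistic can be reproduced from the reported counts; the sketch below computes the Pearson $\chi^2$ statistic for the $2 \times 3$ table of responses and compares it against the critical value 5.99 ($df = 2$, $\alpha = 0.05$). The variable names are our own.
+
+ ```typescript
+ // Pearson chi-squared statistic for the reported 2x3 contingency table:
+ // rows = semesters, columns = [positive, neutral, negative] responses.
+ const observed = [
+   [11, 8, 6],  // Spring 2020
+   [18, 2, 2],  // Spring 2023
+ ];
+
+ const rowTotals = observed.map((row) => row.reduce((a, b) => a + b, 0));
+ const colTotals = observed[0].map((_, j) =>
+   observed.reduce((sum, row) => sum + row[j], 0)
+ );
+ const grand = rowTotals.reduce((a, b) => a + b, 0);
+
+ let chi2 = 0;
+ for (let i = 0; i < observed.length; i++) {
+   for (let j = 0; j < observed[i].length; j++) {
+     const expected = (rowTotals[i] * colTotals[j]) / grand;
+     chi2 += (observed[i][j] - expected) ** 2 / expected;
+   }
+ }
+
+ // chi2 ≈ 7.1 > 5.99 (critical value for df = 2 at alpha = 0.05), so p < 0.05.
+ console.log(chi2.toFixed(2));
+ ```
+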
235
+ Students also demonstrated engagement with FossilSketch through their usage patterns. Several students completed the genus identification exercises for additional practice, with a small number of students completing the exercises six times more than required for the lab assignments. The majority of students also indicated that the genus identification exercise was their favorite activity in FossilSketch, because they enjoyed sketching and following step-by-step instructions. Ostracoda exercises were notably more popular, with half of the students completing extra identifications (students were required to complete 3 Foraminifera and 3 Ostracoda genus identifications), likely because they are easier to complete due to having fewer steps. A similar pattern arises in the mini-game playing statistics: half of the students played the matching games additional times. The most difficult exercise, the assemblage exercise, was only occasionally played additional times, but this result is expected given its difficulty.
236
+
237
+ ### 5.2 Teaching Assistants' Feedback
238
+
239
+ We conducted a semi-structured interview with the Teaching Assistants (TA) from both the Spring 2020 and Spring 2023 courses to understand how FossilSketch impacted their experience.
240
+
241
+ #### 5.2.1 Spring 2020
242
+
243
+ Overall, the TA was quite negative about the experience of teaching microfossils. The TA had to learn the material beforehand from the instructor in order to properly proctor the lab session: the instructor explained the answers to the lab questions and what to look for in the specimens to identify them, so that the TA could answer questions during the lab. This preparation is necessary, as recognizing the different microfossils is challenging without experience. To that point, the TA noted that the students found the topic difficult to grasp:
244
+
245
+ "Challenging, some students were very confused. Some students were okay, but some found it really hard to understand, as compared to other groups [macrofossils]. Microfossils were definitely more difficult for them."
246
+
247
+ She went on to note that, given the difficulty students have in learning about microfossils, more time needs to be spent on teaching the subject. Learning the different species requires gaining familiarity with the unique features and attributes, which involves getting exposure to samples and practicing identifying them. Furthermore, fully understanding and committing these concepts to memory can require significant creativity.
248
+
249
+ "You have to be creative talking to students, like coming up with some non-traditional ways to remember morphology features, like: Uvigerina looks like a banana bunch, just imagine that. I used a lot of imagination when I was trying to grasp that."
250
+
251
+ #### 5.2.2 Spring 2023
252
+
253
+ Overall the TA was positive about her experience with FossilSketch being used as part of the lab assignments. She felt that students benefited from its use and that it sped up the process of learning about microfossils.
254
+
255
+ "I did have one student tell me that this was the least confusing lab out of all of them. I thought that was pretty amazing. So I think it is very good for a kind of helping me kind of an abstract idea into actually something tangible for people to understand."
256
+
257
+ Regarding her experience as a TA, she noted that using FossilSketch lightened her workload, as students asked fewer questions overall and she could rely on FossilSketch as a tool for answering some of the questions that did arise. FossilSketch provided a database to look up visual aids as well as a medium to walk through the identification process.
258
+
259
+ "I think it made my work easier. People kind of just went off on their own, and they kind of worked through it on their own. [...] All I did was I put up the key that was in the corner of one of the mini-games. I just went up there and said like, "Look at this." So then they could actually figure it out from there. So that was really helpful."
260
+
261
+ She mentioned that the students did come to her with some bugs and issues with the software, but these did not detract meaningfully from the students' overall experience using FossilSketch. She noted that, as a graduate student herself, she could see herself using FossilSketch as a reference, and she felt that leaning into this idea of FossilSketch as a reference could make the website more broadly useful. For instance, she suggested adding a glossary of terms with images and examples for students to conveniently reference the basics.
262
+
263
+ ### 5.3 Instructor Feedback
264
+
265
+ We conducted a semi-structured interview with the professor who taught both the Spring 2020 and Spring 2023 courses to understand how FossilSketch impacted her workload and her teaching.
266
+
267
+ #### 5.3.1 Spring 2020
268
+
269
+ When asked about the attitudes of students towards microfossils in this class, she noted that students were generally quite excited to learn about microfossils; however, there were several sticking points within the class. Students would become quite frustrated when looking at samples through a stereoscope, as they were being asked to view and analyze tiny objects that are inherently difficult to see and parse. Furthermore, students complained about having to sketch the microfossils, finding the task quite tedious.
270
+
271
+ "So even with a stereoscope, they're often relatively difficult to see, and so students get very frustrated because we're asking them to notice things and see things, and they're not able to zoom in enough. [...] They also can't turn it over and manipulate it, and that also is a frustration because there's certain anatomical parts of it that you could see best if you could turn it. [...] So I think students find themselves very frustrated and as that level of frustration rises, their ability to learn goes down, right?”
272
+
273
+ When asked if there was a difference in experience between ostracods and forams, she expressed that, because she is an expert in Foraminifera and is less personally interested in Ostracoda, this difference translated to how well students were learning about the two categories of microfossils. Not only did she teach the two categories differently, going into more detail on forams than on ostracods, but she also expressed that she was likely better at making students comfortable with the former, given her own comfort with the subject.
274
+
275
+ "I think students feel the same way about each of them [ostracods and foraminifera] because they're tiny, mysterious things, but I'm probably better at making them comfortable with forams just because of my position."
276
+
277
+ #### 5.3.2 Spring 2023
278
+
279
+ To integrate FossilSketch into her classroom, the instructor replaced part of the lab assignments and the paper sketching assignments with FossilSketch exercises and games. FossilSketch was generally scheduled to be completed at the beginning of the lab session, although some students would complete the exercises before the lab in preparation. Students were excited to use a computer-based tool.
280
+
281
+ When asked how students responded to FossilSketch, she noted that students appreciated a number of its aspects. Students enjoyed being able to go back and do something over again, reviewing microfossils as many times as they needed before dealing with the physical specimens. They also appreciated being able to do their assignments and review the material from anywhere and at their own pace, rather than having to complete the tasks under the pressure and constraints of a physical lab and a given time limit. She also explicitly noted that while students would frequently complain about physically sketching microfossils, they did not complain about sketching the fossils using FossilSketch.
282
+
283
+ She noted that while students still complained about the assignments, their complaints shifted.
284
+
285
+ "So I think the difference is where their frustration points are. Before, all of their frustration points were focused on the microscope, and then with the introduction of FossilSketch, their frustration points get focused on the computer. But what I found interesting was their frustration with the microscope declined, so I still had students do things in class looking at the specimens. But $I$ think because they had seen the specimens in another way, they felt more comfortable looking down the microscope."
286
+
287
+ She also commented that students' conceptions of how much they could learn changed due to the introduction of FossilSketch.
288
+
289
+ "So I've also found that the way that they think about how much they know changed. So like when they were doing the traditional teaching, I think they felt like they knew everything they could know, like the things that they didn't know were just not accessible to them, like the materials weren't good enough. [...] And now they've kind of - they shifted a little bit. To now, they feel like they don't know... I guess bigger things? So like instead of them feeling like they don't really know what a foram is, they feel like they're now focused more on: 'I don't know how to apply them'. [...] So I think they still they still have this feeling that - students always feel like 'I don't know anything, I don't know everything yet, I have to study more'. They always kind of have this feeling. But now that feeling has been transferred to kind of higher level ideas which is actually really useful."
290
+
291
+ When asked about the effect of FossilSketch on her own workload, she noted that initially, just like any change to the curriculum, it required effort to develop the materials and figure out how to incorporate them into her specific use case; however, after that initial set-up effort, it was just as easy to incorporate into her classroom as the traditional lab assignments. She did note that FossilSketch made it easier for her to train the TA, as she could simply ask the TA to use FossilSketch. In that sense, she also felt that it lowered TA anxiety, as the TA was not required to know as much of the material and could (and did) point students with questions to FossilSketch in order to get answers.
292
+
293
+ Finally, when asked if she would want to utilize both FossilSketch and traditional approaches to teaching microfossils, she mentioned that FossilSketch had distinct advantages in specific scenarios that would lead her to use it exclusively, whereas in other scenarios she would want to rely more heavily on traditional approaches. If she were to teach microfossils in an online and/or remote course, she would use FossilSketch primarily. FossilSketch could also be used for students who need accommodations, such as those who cannot look into a stereoscope but can interact with a screen, or those who are unable to physically attend in-person labs. She noted that for students who are geology majors, she would want them to physically look at specimens under a stereoscope; however, for non-majors who will likely never look at specimens again, FossilSketch would offer them enough of the material without the frustration of looking through a stereoscope.
294
+
295
+ ## 6 CONCLUSION
296
+
297
+ FossilSketch is an intelligent tutoring system to support learning micropaleontology in undergraduate geoscience classrooms. The tool teaches students how to recognize Foraminifera and Ostracoda microfossils using sketch-based exercises and mini-games to practice identifying these specimens. We evaluated the effectiveness of FossilSketch in the classroom from the perspective of the instructors and students using qualitative and quantitative analysis. The results show that students respond better to FossilSketch and that the burden on the instructors is reduced, resulting in a better classroom experience for all parties.
298
+
299
+ ## REFERENCES
300
+
301
+ [1] L. Anthony and J. O. Wobbrock. A lightweight multistroke recognizer for user interface prototypes. In Proceedings of Graphics Interface 2010, pp. 245-252. ACM, 2010.
302
+
303
+ [2] L. Anthony and J. O. Wobbrock. $N-Protractor: A fast and accurate multistroke recognizer. In Proceedings of Graphics Interface 2012, pp. 117-120. ACM, 2012.
304
+
305
+ [3] Arizona State University. Virtual field trips. https://vft.asu.edu/, 2020. Last accessed: 2021-12-02.
306
+
307
+ [4] H. A. Armstrong and M. D. Brasier. Foraminifera. Microfossils, Second Edition, pp. 142-187, 2005.
308
+
309
+ [5] A. Bhat, G. K. Kasiviswanathan, C. Mathew, S. Polsley, E. Prout, D. Goldberg, and T. Hammond. An intelligent sketching interface for education using geographic information systems. In T. Hammond, A. Adler, and M. Prasad, eds., Frontiers in Pen and Touch: Impact of Pen and Touch Technology on Education, Human-Computer Interaction Series, chap. 11, pp. 147-163. Springer, Switzerland, 2017. https://doi.org/10.1007/978-3-319-64239-0_11.
310
+
311
+ [6] T. Bralower. Adapting an online course for a large student cohort. In GSA annual meeting. Seattle, WA, October 2017. https://gsa.confex.com/gsa/2017AM/meetingapp.cgi/Paper/298421.
312
+
313
+ [7] T. Bravo. Developing an online seismology course for Alaska. In GSA annual meeting. Seattle, WA, October 2017. https://gsa.confex.com/gsa/2017AM/meetingapp.cgi/Paper/308093.
314
+
315
+ [8] N. Bursztyn, A. Walker, B. Shelton, and J. Pederson. Increasing undergraduate interest to learn geoscience with gps-based augmented reality field trips on students' own smartphones. GSA Today, 27(5):4-11, 2017.
316
+
317
+ [9] L. Capotondi, C. Bergami, G. Orsini, M. Ravaioli, P. Colantoni, and S. Galeotti. Benthic foraminifera for environmental monitoring: a case study in the central Adriatic continental shelf. Environmental Science and Pollution Research, 22(8):6034-6049, Apr 2015. doi: 10.1007/s11356-014-3778-7.
318
+
319
+ [10] L. Carvalho, G. Fauth, S. B. Fauth, G. Krahl, A. Moreira, C. Fernandes, and A. Von Wangenheim. Automated microfossil identification and segmentation using a deep learning approach. Marine Micropaleontology, 158:101890, 2020.
320
+
321
+ [11] A. J. Cawood and C. E. Bond. erock: An open-access repository of virtual outcrops for geoscience education. GSA Today, 2019.
322
+
323
+ [12] S. Cheema and J. LaViola. Physicsbook: a sketch-based interface for animating physics diagrams. In Proceedings of the 2012 ACM international conference on Intelligent User Interfaces, pp. 51-60. ACM, Lisbon, Portugal, 2012.
324
+
325
+ [13] T. Cronin, L. Gemery, E. Brouwers, W. Briggs Jr, A. Wood, A. Stepanova, E. Schornikov, J. Farmer, and K. Smith. Modern arctic ostracode database. IGBP PAGES/WDCA contribution series number: 2010-081. ftp.ncdc.noaa.gov/pub/data/paleo/contributions_by_author/cronin2010/cronin2010.txt, 2010. Accessed: 2021-11-05.
326
+
327
+ [14] Flanders Marine Institute. World Foraminifera Database. http://www.marinespecies.org/foraminifera/. Accessed: 2021-08-06.
328
+
329
+ [15] Flanders Marine Institute. World Ostracoda Database. http://www.marinespecies.org/ostracoda/. Accessed: 2021-12-02.
330
+
331
+ [16] K. Forbus, K. Lockwood, M. Klenk, E. Tomai, and J. Usher. Open-domain sketch understanding: The nusketch approach. In AAAI Fall Symposium on Making Pen-based Interaction Intelligent and Natural, pp. 58-63. AAAI Press, Arlington, VA, 2004.
332
+
333
+ [17] K. Forbus, J. Usher, A. Lovett, K. Lockwood, and J. Wetzel. Cogsketch: Sketch understanding for cognitive science research and for education. Topics in Cognitive Science, 3(4):648-666, 2011.
334
+
335
+ [18] K. D. Forbus, M. Chang, M. McLure, and M. Usher. The cognitive science of sketch worksheets. Topics in Cognitive Science, 9(4):921-942, 2017.
336
+
337
+ [19] J. French, M. A. Segado, and P. Z. Ai. Sketching graphs in a calculus mooc: Preliminary results. In T. Hammond, A. Adler, and M. Prasad, eds., Frontiers in Pen and Touch: Impact of Pen and Touch Technology on Education, p. 93-102. Springer International Publishing, Cham, 2017. doi: 10.1007/978-3-319-64239-0_7
338
+
339
+ [20] P. Frenzel and I. Boomer. The use of ostracods from marginal marine, brackish waters as bioindicators of modern and quaternary environmental change. Palaeogeography, Palaeoclimatology, Palaeoecology, 225(1-4):68-92, 2005.
340
+
341
+ [21] T. Hammond. Dialectical creativity: Sketch-negate-create. In Studying Visual and Spatial Reasoning for Design Creativity, pp. 91-108. Springer, Dordrecht, England, 2015.
342
+
343
+ [22] T. Hammond and R. Davis. LADDER, a sketching language for user interface developers. Computers & Graphics, 29(4):518-532. Elsevier, Amsterdam, The Netherlands, 2005. doi: 10.1016/j.cag.2005.05.005.
344
+
345
+ [23] T. Hammond, S. P. A. Kumar, M. Runyon, J. Cherian, B. Williford, S. Keshavabhotla, S. Valentine, W. Li, and J. Linsey. It's not just about accuracy: Metrics that matter when modeling expert sketching ability. ACM Trans. Interact. Intell. Syst., 8(3), jul 2018. doi: 10.1145/3181673
346
+
347
+ [24] A. Holbourn, W. Kuhnt, M. Lyle, L. Schneider, O. Romero, and N. Andersen. Middle miocene climate cooling linked to intensification of eastern equatorial pacific upwelling. Geology, 42(1):19-22, 2014.
348
+
349
+ [25] A. Y. Hsiang, A. Brombacher, M. C. Rillo, M. J. Mleneck-Vautravers, S. Conn, S. Lordsmith, A. Jentzen, M. J. Henehan, B. Metcalfe, I. S. Fenton, et al. Endless Forams: >34,000 modern planktonic foraminiferal images for taxonomic training and automated species recognition using convolutional neural networks. Paleoceanography and Paleoclimatology, 34(7):1157-1177, 2019.
352
+
353
+ [26] R. W. Jones. Foraminifera and their Applications. Cambridge University Press, 2013.
354
+
355
+ [27] F. J. Jorissen, C. Fontanier, and E. Thomas. Chapter seven: Paleoceanographical proxies based on deep-sea benthic foraminiferal assemblage characteristics. Developments in Marine Geology, 1:263-325, 2007.
356
+
357
+ [28] K. Kaiho. Benthic foraminiferal dissolved-oxygen index and dissolved-oxygen levels in the modern ocean. Geology, 22(8):719-722, 1994.
358
+
359
+ [29] C. Lee, J. Jordan, T. F. Stahovich, and J. Herold. Newton's Pen II: an intelligent, sketch-based tutoring system and its sketch processing techniques. In Proceedings of the International Symposium on Sketch-Based Interfaces and Modeling, pp. 57-65. ACM, Annecy, France, 2012.
360
+
361
+ [30] W. Lee, R. de Silva, E. J. Peterson, R. C. Calfee, and T. F. Stahovich. Newton's pen: A pen-based tutoring system for statics. Computers & Graphics, 32(5):511-524, 2008.
362
+
363
+ [31] K. Milliken, J. Barufaldi, E. McBride, and S.-J. Choh. Design and assessment of an interactive digital tutorial for undergraduate-level sandstone petrology. Journal of Geoscience Education, 51(4):381-386, 2003.
364
+
365
+ [32] R. Mitra, T. Marchitto, Q. Ge, B. Zhong, B. Kanakiya, M. Cook, J. Fehrenbacher, J. Ortiz, A. Tripati, and E. Lobaton. Automated species-level identification of planktic foraminifera using convolutional neural networks, with comparison to human performance. Marine Micropaleontology, 147:16-24, 2019.
366
+
367
+ [33] P. A. Mueller and D. M. Oppenheimer. The pen is mightier than the keyboard: Advantages of longhand over laptop note taking. Psychological science, 25(6):1159-1168, 2014.
368
+
369
+ [34] J. W. Murray. Ecology and applications of benthic foraminifera. Cambridge University Press, 2006.
370
+
371
+ [35] K. Nakakoji, A. Tanaka, and D. Fallman. "sketching" nurturing creativity: commonalities in art, design, engineering and research. In CHI'06 extended abstracts on Human factors in computing systems, pp. 1715-1718. ACM, Montreal, Canada, 2006.
372
+
373
+ [36] T. Nelligan, S. Polsley, J. Ray, M. Helms, J. Linsey, and T. Hammond. Mechanix: a sketch-based educational interface. In Proceedings of the 20th International Conference on Intelligent User Interfaces Companion, pp. 53-56. ACM, Atlanta, Georgia, 2015.
374
+
375
+ [37] A. Noorafshan, L. Hoseini, M. Amini, M.-R. Dehghani, J. Kojuri, and L. Bazrafkan. Simultaneous anatomical sketching as learning by doing method of teaching human anatomy. Journal of education and health promotion, 3, 2014.
376
+
377
+ [38] B. J. O'Neill. Using Microfossils in Petroleum Exploration, chap. 17. The University of California, Berkeley, 2000. https://ucmp.berkeley.edu/fosrec/ONeill.html.
378
+
379
+ [39] M. A. O'Neill and M. Denos. Automating biostratigraphy in oil and gas exploration: Introducing geodaisy. Journal of Petroleum Science and Engineering, 149:851-859, 2017.
380
+
381
+ [40] M. Pache, A. Römer, U. Lindemann, and W. Hacker. Sketching behaviour and creativity in conceptual engineering design. In Proceedings of the International Conference on Engineering Design (ICED'01), pp. 243-252. Springer, Berlin, Germany, 2001.
382
+
383
+ [41] B. Paulson and T. Hammond. Paleosketch: accurate primitive sketch recognition and beautification. In Proceedings of the 13th International Conference on Intelligent User Interfaces, pp. 1-10. ACM, Gran Canaria, Spain, 2008.
384
+
385
+ [42] R. K. Poirier, T. M. Cronin, W. M. Briggs Jr, and R. Lockwood. Central arctic paleoceanography for the last 50 kyr based on ostracode faunal assemblages. Marine Micropaleontology, 88:65-76, 2012.
386
+
387
+ [43] F. Project. Foraminifera Gallery. http://www.foraminifera.eu/. Accessed: 2021-12-02.
388
+
389
+ [44] K. Quillin and S. Thomas. Drawing-to-learn: a framework for using drawings to promote model-based reasoning in biology. CBE-Life Sciences Education, 14(1):es2, 2015.
390
+
391
+ [45] D. Rubine. Specifying gestures by example. In Proceedings of the 18th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '91, pp. 329-337. ACM, New York, NY, USA, 1991.
394
+
395
+ [46] B. J. Tewksbury, C. A. Manduca, D. W. Mogk, R. H. Macdonald, and M. Bickford. Geoscience education for the anthropocene. Geological Society of America Special Papers, 501:189-201, 2013.
396
+
397
+ [47] S. Valentine, F. Vides, G. Lucchese, D. Turner, H.-H. A. Kim, W. Li, J. Linsey, and T. Hammond. Mechanix: A sketch-based tutoring system for statics courses. In Proceedings of the Twenty-Fourth Innovative Applications of Artificial Intelligence Conference (IAAI), pp. 2253-2260. AAAI, Toronto, Canada, July 22-26, 2012.
398
+
399
+ [48] R.-D. Vatavu, L. Anthony, and J. O. Wobbrock. Gestures as point clouds: a $P recognizer for user interface prototypes. In Proceedings of the 14th ACM international conference on Multimodal interaction, pp. 273-280, 2012.
400
+
401
+ [49] R.-D. Vatavu, L. Anthony, and J. O. Wobbrock. $Q: A super-quick, articulation-invariant stroke-gesture recognizer for low-resource devices. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services, pp. 1-12, 2018.
402
+
403
+ [50] I. M. Verstijnen, C. van Leeuwen, G. Goldschmidt, R. Hamel, and J. Hennessey. Sketching and creative discovery. Design studies, 19(4):519-546, 1998.
404
+
405
+ [51] M. B. Weaver, S. Ray, E. C. Hilton, D. Dorozhkin, K. Douglas, T. Hammond, and J. Linsey. Improving engineering sketching education through perspective techniques and an ai-based tutoring platform. International Journal of Engineering Education, 38(6):15, 2022.
406
+
407
+ [52] C. Widjaja and S. S. Sumali. Short-term memory comparison of students of faculty of medicine pelita harapan university batch 2015 between the handwriting and typing method. Medicinus, 7(4):108-111, 2020.
408
+
409
+ [53] B. Williford. Sketchtivity: Improving creativity by learning sketching with an intelligent tutoring system. In Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition, pp. 477-483. ACM, Singapore, 2017.
410
+
411
+ [54] J. O. Wobbrock, A. D. Wilson, and Y. Li. Gestures without libraries, toolkits or training: a $1 recognizer for user interface prototypes. In Proceedings of the 20th annual ACM symposium on User interface software and technology, pp. 159-168. ACM, 2007.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/AME0sErWj0j/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,289 @@
1
+ § FOSSILSKETCH: A NOVEL INTERACTIVE WEB INTERFACE FOR TEACHING UNIVERSITY-LEVEL MICROPALEONTOLOGY
2
+
3
+ Category: Research
4
+
5
+ § ABSTRACT
6
+
7
+ Although the demand for geoscientists is projected to grow and the current population of experts is aging, few students are trained in micropaleontology. Applications of micropaleontology in solving geologic problems are diverse, and include such areas of research as estimating sea level fluctuations, understanding the causes of past climate upheavals, and finding economically important resources like oil and gas. To aid in teaching micropaleontology in undergraduate classrooms, we developed FossilSketch, a web-based interactive learning tool for the basics of micropaleontology. FossilSketch teaches microfossil identification for Foraminifera and Ostracoda through automatically assessed sketch-based exercises and other practice activities. Results from deploying this system in an undergraduate geology class indicate that FossilSketch benefits both students and instructors: students find FossilSketch more engaging and less stressful than traditional methods, and instructors' course-preparation workload is reduced.
8
+
9
+ Index Terms: H.5.2 [User Interfaces]: User Interfaces-Graphical user interfaces (GUI); H.5.m [Information Interfaces and Presentation]: Miscellaneous
10
+
11
+ § 1 INTRODUCTION
12
+
13
+ Micropaleontology is a critical tool for determining the ages of sedimentary rocks for both industrial and scientific applications [26]. Microfossil species are sensitive to specific environmental parameters and are often used to reconstruct past changes in ocean temperature, coastal sea-level, and seafloor oxygenation [34]. Further, microfossils are used in modern, real-time, environmental monitoring because they respond quickly to environmental change [9]. Additionally, micropaleontology can be used in oil exploration to locate reservoirs [38].
14
+
15
+ Despite the importance of micropaleontology for geoscience research and industry, most geoscience students are not exposed to this topic. Micropaleontology is rarely taught at the undergraduate level because of the number of contact hours necessary and the amount of instructor feedback required to train students at the necessary level of detail. Thus, although the field of geology has broadened over the last several decades, micropaleontology is being dropped from the curriculum and students' training in the field has correspondingly declined [4, 46]. This trend becomes problematic as experts in micropaleontology are aging and fewer students are being trained in microfossil identification techniques [39].
16
+
17
+ To enable and enhance the training of undergraduates in the basics of micropaleontology in remote, hybrid, and in-class conditions, we developed FossilSketch. FossilSketch, depicted in Figure 1, is an interactive, intelligent digital tool that introduces students to micropaleontology through educational videos, mini-games, sketch-based identification exercises, and assessments focused on applications of microfossils in the geosciences. Sketch recognition algorithms are used to automatically evaluate sketches and provide feedback to help students internalize various morphological features and identify microfossils from two common microfossil groups, Foraminifera and Ostracoda. FossilSketch reduces the burden on the instructors by providing feedback for these activities and games to help students learn and practice micropaleontology skills. This paper outlines the design of FossilSketch as well as its impact from being deployed in an undergraduate geology classroom.
18
+
19
+ < g r a p h i c s >
20
+
21
+ Figure 1: A participant using the FossilSketch educational web app.
22
+
23
+ § 2 RELATED WORKS
24
+
25
+ § 2.1 GEOSCIENCE EDUCATIONAL TOOLS
26
+
27
+ Geosciences have been rapidly adopting online and remote-based educational tools over the last five years, spanning various online resources, pedagogical practices, and course curricula: high-resolution digital imaging for mapping and documenting geological outcrops, 3D virtual simulations, digitization of fossil collections, and augmented reality field trip games for smartphones and tablets (e.g., [3, 6-8, 11]).
28
+
29
+ Successful implementations of software in geoscience education include sketching software, virtual microscopes, and field experience simulations [8, 18, 31]. For example, CogSketch is a sketching-based application with a series of introductory geoscience worksheets on key geoscience concepts [18] that aids students in solving discipline-specific spatial problems while providing instructors with insights into student thinking and learning.
30
+
31
+ As for micropaleontology, researchers note a lack of human experts and a decline in micropaleontology training [10, 25, 32]. That said, most software development has been aimed at automated identification of microfossils, with the most recent approaches focusing on machine learning and on using 3D models for planktic and benthic foraminifera identification [10, 25, 32]. Several large microfossil databases have been built [13-15, 43]. However, these online resources are designed for advanced users and are difficult to use for entry-level specialists and students without prior instruction on microfossils.
32
+
33
+ Until recently, there were no applications supporting active learning in micropaleontology. FossilSketch is the first application that supports active learning in undergraduate micropaleontology [references redacted for anonymization]. To summarize, there is a clear need and growing interest in developing automated AI tools for geoscience education and microfossil identification. To address this need, we designed FossilSketch, a novel, universally accessible, and academically rigorous educational tool for undergraduate geoscience education.
34
+
35
+ § 2.2 DIGITAL SKETCH RECOGNITION IN THE CLASSROOM
36
+
37
+ Sketching activities in the classroom have pedagogically been linked to enhanced student creativity and learning [33, 35, 40, 50, 52]. Researchers find that sketching benefits learning in a wide range of disciplines, from human anatomy and biology to engineering, geography, and math [5, 18, 19, 37, 44]. Studies have confirmed that information retention and learning outcomes are significantly improved when engaging in drawing and writing activities vs. using a keyboard as the primary input modality [33]. To that point, sketch-based learning tools have been linked to higher retention of information and improved skill compared to students who do not learn with sketch-based activities [21, 53].
38
+
39
+ Early gesture recognition systems developed by Rubine [45] have led to improved recognition systems, including template-matching algorithms from the "Dollar" family of recognizers [1, 2, 48, 49, 54] that produced lightweight recognition systems easily added to existing software. The "Dollar" recognizers perform classification tasks by using different methods of calculating distance from user-generated input compared against several samples of trained data. Despite these recognizers being used for classification rather than grading sketch accuracy, we use this work as a basis for our recognition system due to the synergy in design. Both feature-based classification techniques and template matching techniques were later expanded into more robust systems for scaffolded recognition via systems like PaleoSketch [41] and LADDER [22], the second of which is notable for its integration of domain-specific shapes to better describe relationships between sketch properties to assist in recognition. More recent works like nuSketch [16] and CogSketch [17] integrate sketch recognition algorithms into educational tools to assist with the learning experience, with measurable success.
40
+
41
+ Mechanix [36, 47], Newton's Pen [30] and Newton's Pen II [29], PhysicsBook [12], and SketchTivity [23, 51] are systems specifically written to leverage the educational advantage of drawing and sketching in the core interactions of their tools. Indeed, these systems serve as the primary conceptual basis from which FossilSketch is designed. We aimed to adapt the educational techniques presented by these tools to the domain of micropaleontology in the classroom. This led to a variety of changes and design considerations in the teaching approach outlined in the next section.
42
+
43
+ § 3 DESIGN
44
+
45
+ § 3.1 DESIGN CONSIDERATIONS
46
+
47
+ FossilSketch is a web-based educational tool for teaching students techniques for identifying microfossils. FossilSketch focuses on Foraminifera and Ostracoda due to their utility and accessibility in undergraduate lab settings. Foraminifera and Ostracoda are two of the most commonly used groups of microfossils in industrial, environmental, and scientific applications. The morphology of species in both groups is closely related to the environments in which they live [20, 27, 42], and the two groups are often used in species-specific geochemical studies [24]. However, accurate species identification is required for using this micropaleontological tool effectively. For additional context, Foraminifera are amoeboid protists with shells made of calcium carbonate or agglutinated sediment grains that are often abundant in marine environments [4], and Ostracoda are micro-crustaceans with a bivalved calcareous carapace that are found in all aquatic environments, from freshwater lakes to the deep sea [4]. These are also some of the larger microfossils, which allows students to view them with standard stereoscopes.
48
+
49
+ To use industry-standard packages and tools, the website is built using the Next.js framework and a MySQL database. Educational materials for FossilSketch were developed to supplement various geoscience courses in the College of Arts & Sciences at a large R1 university. Traditionally, undergraduate students learn about micropaleontology through lectures, diagrams, specimens viewed through a stereoscope, and hand-sized models as part of upper-level paleontology courses. FossilSketch educational materials include the following: 1) educational videos; 2) instructional mini-games; 3) microfossil identification exercises; and 4) microfossil assemblage reconstruction exercises. All four types of activities consist of content specifically created for FossilSketch, based on real-life scientific study cases, and tailored to support the educational exercises in traditional and FossilSketch-based courses.
50
+
51
+ Exercises were developed based on the courses' learning objectives, the microfossil collections available, and the expertise of [co-author names redacted for review]. FossilSketch can be used in courses of different levels, from lower-level courses for non-geology majors to upper-level courses for geology majors; thus, the difficulty and number of activities included in a class vary depending on the teaching goals and the activities assigned to students. Modules and microfossils can be added or removed depending on the class or activity in which FossilSketch is deployed. The self-contained nature of the exercises and the flexibility of the landing page interface offer the versatility to rearrange the website experience depending on the course learning objectives.
52
+
53
+ § 3.2 EDUCATIONAL VIDEOS
54
+
55
+ Educational videos were created specifically for FossilSketch to provide introductory information that helps contextualize the concepts covered in the rest of FossilSketch's activity types [links redacted for anonymization]. When users click on these modules, an overlay with an embedded YouTube link is displayed. Students are free to control playback and captions with the standard embedded YouTube video controls, and the overlay can be dismissed at any time by clicking outside of the video area. No progress data is recorded for this type of activity.
56
+
57
+ FossilSketch is intended to augment instructor lectures, meaning the videos are not intended to serve as a replacement for lecture material, as is usually the case with typical instructional videos in an online learning interface. The FossilSketch system uses instructional videos to provide the information necessary for students to engage with the rest of the modules if they have not yet received instructor lectures, while at the same time emphasizing the concepts most directly relevant to the activities if they have attended in-depth lectures in the classroom.
58
+
59
+ § 3.3 INSTRUCTIONAL MINI-GAMES
60
+
61
+ FossilSketch integrates various kinds of interactive instructional tools. In order to improve student comprehension of microfossil identification, we broke identification tasks into mini-games that students could repeat to develop mastery. Each mini-game consists of one or more types of interactions intended to highlight the visual-morphology aspect of learning about microfossil identification. We currently have three matching games and one orientation game.
62
+
63
+ § 3.3.1 MATCHING GAMES
64
+
65
+ Matching games require the participants to match morphological features, such as the outline shape for Ostracoda, or the morphotype and type of chamber arrangement for Foraminifera. At the beginning of the game, the students are presented with a reference image that lists each morphotype along with a sketched example, and students are able to return to this reference image, when needed, by clicking on the zoomed-out image in the bottom right corner of the screen. When the game starts, the screen displays a small number of draggable "discs" or rectangular "cards" with actual microfossil photomicrographs that the user can move into slots with sketched categories for each feature used in the game. At the moment, three different mini-games use this kind of interaction: Ostracoda lateral outline identification, Foraminifera chamber arrangement, and Foraminifera morphotype identification.
66
+
67
+ < g r a p h i c s >
68
+
71
+ Figure 2: FossilSketch mini-games
72
+
73
+ All matching games include three rounds, with each round contributing to a final star score. The Foraminifera chamber arrangement mini-game randomly pulls images of Foraminifera from the database for matching to the corresponding chamber arrangement types, with each round of the game having four cards to match. In the morphotype mini-game, the number of draggable items and slots increases from 4 in the first round to 8 in the third round to increase difficulty. If the answer is incorrect, FossilSketch provides a hint by indicating which of the cards were matched incorrectly, and the user can try again to submit a correct answer. Students receive a star rating from one to three based on how many rounds they got correct on their first attempt, as sketched after this paragraph.
74
+
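+ The scoring rule reduces to counting first-attempt successes; a minimal sketch follows, where the function name is our own and the handling of a game with zero first-try rounds is an assumption.
+
+ ```typescript
+ // One star per round answered correctly on the first try; how a game with
+ // zero first-try rounds maps into the 1-3 range is an assumption here.
+ function starRating(firstAttemptCorrect: boolean[]): number {
+   const correct = firstAttemptCorrect.filter(Boolean).length;
+   return Math.max(1, Math.min(3, correct));
+ }
+
+ console.log(starRating([true, false, true])); // 2
+ ```
+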
75
+ § 3.3.2 ORIENTATION GAME
76
+
77
+ The orientation game integrates a rotation interaction to help students gain an understanding of how to correctly orient the ostracod valve for identification. An ostracod valve has four sides: the dorsal, ventral, posterior, and anterior margins. This game starts with a general description of each of these margins to help students gain an intuition for how to identify each side of an ostracod. The user is tasked with rotating an ostracod into position with the dorsal side up and all of its sides correctly labeled. To simplify the interaction, students rotate in one direction, 90 degrees at a time, by clicking or tapping once on the ostracod displayed in the center of the screen. When the student believes that the ostracod is oriented correctly, they submit their answer by selecting the "Finished" button at the bottom center of the screen.
78
+
79
+ As in the matching games, the orientation game is divided into three rounds. In this case, each round consists of one ostracod valve that needs to be rotated into the correct orientation. An answer is marked correct if the valve is rotated correctly on the first attempt. If the submitted answer was incorrect, FossilSketch provides a hint on how to orient the valve correctly, and students are encouraged to use the knowledge gained from the hint to correct their wrong answer. The star rating is based on the first submitted attempt for each round.

Figure 3: Menu of the morphotype ID exercises. Students pick from any of the unidentified morphotypes marked with a "?", and afterwards are shown their performance on a 3-star rating system.

§ 3.4 IDENTIFICATION EXERCISES

In micropaleontology, microfossils are picked from sediment samples, and the resulting variety of species represents an assemblage characteristic of the sample that may point to its environmental setting or geologic age. A micropaleontologist identifies the species of microfossils in this assemblage based on their morphology, that is, their characteristic features. FossilSketch offers a scaffolded learning experience that guides students through the steps needed to identify microfossils and their morphological characteristics.

Students are first presented with the menu depicted in Figure 3, where they can choose to identify a specimen of Ostracoda or Foraminifera to genus level, or a morphotype of Foraminifera. The Foraminifera identification steps to genus level, shown in Figure 4, are the following: 1) sketch the outline of the foraminifer image on the left; 2) sketch the outline of the last chamber on the image on the left; 3) select the type of shell this foraminifer has; 4) choose the overall shape of the organism from a menu; 5) choose the shape of the chambers; 6) choose their number; 7) choose the type of chamber arrangement from the menu; 8) select the aperture location from the menu; 9) select the aperture shape from a menu; and 10) identify a genus based on the selected features. The Ostracoda genus identification steps, shown in Figure 5, include: 1) sketch the maximum length of the valve; 2) sketch the maximum height of the valve; 3) identify right vs. left valve; 4) sketch the outline of the ostracod valve; 5) choose the type of outline from the menu; 6) measure the approximate size of the valve and choose the size range from the menu; 7) choose the types of ornamentation and select any additional features when present; and 8) identify an ostracod genus based on the selected features.

The types of interactions within each exercise are described below:

§ 3.4.1 SKETCHING INTERACTIONS

Sketching (steps 1-2 for Foraminifera, and steps 1-2 and 4 for Ostracoda) helps students retain and understand the various shapes and outlines they observe in different microfossils. It is the primary method of interaction, after which the project is named. Sketching interactions integrate functionality from the paper.js library to deliver flexible drawing interactions. Although the system is intended to be used with styli and touch to most naturally resemble a sketching activity, it is also possible to draw with a mouse or trackpad. Drawing interactions are usually integrated as the first steps of both kinds of identification exercises, as the overall shape of the sample is critical in identifying the microfossil.

The FossilSketch system checks for correctness using a template matching algorithm, outlined in Algorithm 2. The template recognizer coded specifically for FossilSketch uses a modified Hausdorff distance metric to determine how closely a sketch matches the key for each microfossil. Before recognition, both the template and the input sketch are resampled to a lower sampling rate with roughly equidistant points, as outlined in Algorithm 1. The formula for calculating the interspace distance is given in Eq. 1, where $c = 256$ is a constant empirically derived to adjust the distance between the points for optimal calculation of the distance metric. The algorithm then iterates through each point in the input sketch, compares it with the closest point on the template sketch, and calculates the Euclidean distance between the two. The total distance is accumulated across all compared points, and the cumulative sum is the overall "distance" between a template and the student input (see Figure 6). If the average deviation of the points is greater than the pixel width of the canvas divided by a constant, the algorithm concludes that the input sketch is too different from the template sketch. This constant was empirically determined after internal testing to match the desired student experience; students are meant to provide a relatively accurate, but not perfect, recreation of the template.

The template sketches are provided by [co-author names redacted for review] and coded directly into each foraminifer or ostracod image. Every foraminifer has a database entry containing template sketch data for the outline of its left view (see Figure 4 step 1) and its last chamber (Figure 4 step 2). For every ostracod in the database, there is template sketch data for the outline, maximum length, and maximum height.

$$
S = \frac{\sqrt{(x_m - x_n)^2 + (y_m - y_n)^2}}{c}, \quad c = 256 \tag{1}
$$

Algorithm 1 Resampling Technique

Require: Point list path, distance S
Ensure: Re-sampled point list out

  D ← 0
  for i in path do
    BetweenDist ← sqrt((x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2)
    D ← D + BetweenDist
    if D > S then
      D ← BetweenDist
      out ← new point (x_i, y_i)
    end if
  end for

Algorithm 2 Compare Sketches

Require: Student Spath, template Tpath
Ensure: Boolean result (True if the sketch deviates too much from the template)

  totalDeviation ← 0
  for i in Spath do
    closestDistance ← INF
    closestIndex ← 0
    for j in Tpath do
      tempDist ← distance between Spath_i and Tpath_j
      if tempDist < closestDistance then
        closestDistance ← tempDist
        closestIndex ← j
      end if
    end for
    totalDeviation ← totalDeviation + closestDistance
  end for
  avgDeviation ← totalDeviation / length of Spath
  cwidth ← pixel width of canvas
  if avgDeviation > cwidth / 70 then
    result ← True
  else
    result ← False
  end if
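
For concreteness, the following is a minimal Python sketch of Algorithms 1 and 2, assuming strokes are lists of (x, y) tuples; the function and variable names are ours, not FossilSketch's (which is implemented in JavaScript on top of paper.js).

```python
import math

def resample(path, spacing):
    """Algorithm 1: walk the stroke and keep a point whenever the
    accumulated arc length since the last kept point exceeds `spacing`."""
    out = [path[0]]
    d = 0.0
    for i in range(len(path) - 1):
        (x0, y0), (x1, y1) = path[i], path[i + 1]
        between = math.hypot(x1 - x0, y1 - y0)
        d += between
        if d > spacing:
            d = between          # reset as in the pseudocode above
            out.append((x0, y0))
    return out

def deviates_too_much(spath, tpath, canvas_width, divisor=70):
    """Algorithm 2: average closest-point distance from the (resampled)
    student stroke to the template; True means the sketch is rejected.
    The divisor 70 is the empirical constant from the pseudocode."""
    total = sum(min(math.hypot(sx - tx, sy - ty) for tx, ty in tpath)
                for sx, sy in spath)
    return total / len(spath) > canvas_width / divisor
```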

§ 3.4.2 IDENTIFICATION OF FEATURES FROM A MENU

Identification of features (steps 3-5 for Foraminifera, and steps 3 and 5-6 for Ostracoda) is presented to students as a horizontal multiple-choice menu along the bottom of the screen. During each of these steps, the student is asked to identify one of several characteristic features of the microfossil. For instance, the student might be asked "what is the overall shape of the organism?" and the possible answers might be "vase-like", "convex", "low-conical", "spherical", and "arch", among others. Each option is accompanied by a sample sketched outline of the shape, but it is important to note that these are sketched examples and not photorealistic depictions of the choices. The student is tasked with remembering the particular physical properties of each characteristic feature, as well as matching the pictures with the closest choice from the menu; of these, one is the correct answer. In this part of the exercise, the student does not receive immediate feedback on their selections; all of the answers are summarized for the student to use when making the final identification from the database of Foraminifera and Ostracoda genera.

Figure 4: Foraminifera Identification Steps (from left to right, top to bottom): 1) Sketch the Outline, 2) Sketch the Last Chamber, 3) Select the Shell Type, 4) Select the Overall Shape, 5) Select the Chamber Shape, 6) Select the Number of Chambers, 7) Select the Chamber Arrangement, 8) Select the Aperture Location, 9) Select the Aperture Shape, 10) Identify the Genus

Figure 5: Ostracod Identification Steps (from left to right, top to bottom): 1) Sketch the Max Length, 2) Sketch the Max Height, 3) Identify Right vs. Left Valve, 4) Sketch the Outline, 5) Select the Valve Shape, 6) Select the Approximate Size, 7) Select the Ornamentation, 8) Identify the Genus

§ 3.4.3 POINTING INTERACTION

Pointing interactions (step 5 for Foraminifera morphotype ID) are a simplified form of sketching interactions that require students to click once in a general area of interest; FossilSketch then checks whether the identified location is correct. Specifically, this interaction is used to identify the general location of the aperture of a given foraminifer. The student is asked to click once in the region where they believe the aperture is. Each foraminifer in the FossilSketch database contains data on a rectangular region that marks the general area of its aperture. When the student clicks "Submit" after identifying the aperture area, FossilSketch checks whether the location of the click falls within the predefined rectangular area; if it does, the answer is marked as correct. The location of the aperture is only used for identifying a foraminifer's morphotype.
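
The correctness check reduces to a point-in-rectangle test. A minimal sketch, assuming the region is stored as its top-left corner plus width and height (the field names are illustrative, not FossilSketch's actual schema):

```python
def click_in_region(click_x, click_y, region):
    """Return True if the click falls inside the stored aperture region.
    `region` is an assumed dict like {"x": ..., "y": ..., "w": ..., "h": ...}."""
    return (region["x"] <= click_x <= region["x"] + region["w"] and
            region["y"] <= click_y <= region["y"] + region["h"])
```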

§ 3.4.4 SUMMARY SCREEN

The summary screen (step 10 for Foraminifera and step 8 for Ostracoda) is the last step of each identification exercise, asking the student to draw on their observations and make the final selection of the genus or morphotype. Each morphotype or genus has a list of characteristic features, and each feature the student correctly marked during the identification steps is shown with a green check mark. The list of morphotypes or genera on the summary screen is ranked by the number of properties matching the student's answers. If the student's answers are correct, the choice is easy, since the correct item has the most check marks and is listed first. Additionally, a picture of each morphotype or genus is included, letting students double-check whether their best-ranked choice is the most accurate. This design allows students to develop self-assessment skills by seeing whether their choices match up with any given morphotype or genus. Students are able to revisit any of the previous steps at any time, and this final choice is a good motivation to do so if they notice that their prior choices did not yield a definitive conclusion. The screen also lets students see properties that might be common between some morphotypes or genera, but each foraminifer and ostracod specimen has only one correct final answer.
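
The ranking itself can be a simple count of matched features. A minimal sketch, assuming each candidate stores its characteristic features as a set (the feature names below are hypothetical, not the actual database entries):

```python
def rank_candidates(candidates, student_answers):
    """Sort genera/morphotypes by how many characteristic features match
    the student's selections, most matches first (as on the summary screen)."""
    return sorted(candidates.items(),
                  key=lambda item: len(item[1] & student_answers),
                  reverse=True)

# Hypothetical example data for illustration only.
candidates = {
    "Genus A": {"elongate", "terminal aperture", "uniserial"},
    "Genus B": {"globular", "umbilical aperture", "trochospiral"},
}
print(rank_candidates(candidates, {"elongate", "uniserial"}))  # Genus A first
```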

Figure 6: To evaluate answers, FossilSketch resamples and overlays both the student input and the instructor-provided sketch, and a total distance metric is calculated by summing the Euclidean distance between sampled points.

§ 3.5 ASSEMBLAGE EXERCISE

One of the goals of this interface is to demonstrate to students the various applications of microfossils in the geosciences. Once students gain mastery of microfossil identification through the mini-games and identification exercises, they proceed to the final type of exercise and assessment, where they apply their knowledge to reconstruct environments from an assemblage of different microfossils. In this exercise, students view microfossil assemblages with approximately 20 foraminifer or ostracod individuals and identify the foraminiferal morphotypes or Ostracoda genera present. These assemblages imitate an actual microfossil "slide" containing an assemblage of Foraminifera or Ostracoda, as seen under a microscope. Students are asked to identify how many specimens of each foraminiferal morphotype or ostracod genus are present in the slide. Before starting the exercise, students can view a screen summarizing the information on foraminiferal morphotypes or ostracod genera and how they can be used to interpret environmental properties, such as the oxygenation or salinity of the water. The exercise includes three rounds and a summary. In each round, the student identifies the different genera or morphotypes and selects the number of each from the menu on the right side of the screen. Students are expected to draw on their knowledge from the previous exercises to quickly identify the morphotypes or genera they see in these assemblages. For the ostracod assemblages, the menu includes both genera that are present and genera that are not present in the assemblage. For the foraminiferal morphotypes, the menu includes two morphotypes and an "Other" category. To answer correctly, the student must provide a correct count for all categories in an assemblage, i.e., for both of the morphotypes or genera and the "Other" category.
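
In other words, a round is graded as an exact match over all category counts. A minimal sketch with hypothetical counts:

```python
def assemblage_correct(student_counts, answer_key):
    """A round is correct only if every category count, including
    "Other", matches the key exactly."""
    return all(student_counts.get(cat, 0) == n for cat, n in answer_key.items())

# Hypothetical counts for illustration only.
key = {"cylindrical-tapered": 8, "flat-tapered": 7, "Other": 5}
print(assemblage_correct({"cylindrical-tapered": 8, "flat-tapered": 7, "Other": 5}, key))  # True
```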

Both assemblage exercises conclude with a summary page where the student is asked to make an overall conclusion about the environment based on the morphotypes and genera present in the assemblages. For instance, the Foraminifera morphotype assemblage exercise uses assemblages to determine bottom-water oxygenation. It has been shown that environments where cylindrical-tapered and flat-tapered morphotypes are found in abundance usually have low oxygenation [28]. The students are asked to rank each assemblage by relative oxygenation level, which they can do by considering the relative abundance of cylindrical-tapered and flat-tapered morphotypes found in each of the three assemblages. Similarly, for Ostracoda genera, students count the number of individuals of each genus and determine the bottom-water salinity indicated by each of the assemblages. If a student makes a mistake, FossilSketch provides feedback by showing which specimens correspond to which genera and morphotypes, so the student can correct their response. These exercises show how microfossil research is applied and assess the microfossil identification skills learned and honed across all exercises of the FossilSketch system.

§ 4 EVALUATION

To test the efficacy of FossilSketch as a means of teaching micropaleontology, we conducted a case-control experiment in a "Paleontology and Geobiology" course over two different semesters. As the control, students were taught micropaleontology without FossilSketch in the Spring 2020 semester. As the case, students were taught micropaleontology using FossilSketch in the Spring 2023 semester. We describe the experience of the students from each of these semesters in more detail in the following subsections. As a side note, FossilSketch was used in other semesters between our control and case groups; however, the tech stack for FossilSketch was completely overhauled prior to its deployment in Spring 2023.

§ 4.1 SPRING 2020

During the Spring 2020 semester, students participated in three-hour laboratory sessions consisting of several specimen-based laboratory activities. Students used 3D physical models and labeled SEM images to study the main morphological features of various Foraminifera and Ostracoda, respectively. After completing these activities, students were asked to select a microfossil, provide a labeled sketch of the specimen, identify its morphological features, and ultimately identify its genus. Students were encouraged to work in teams and were allowed to ask the teaching assistant or professor any questions they had.

§ 4.2 SPRING 2023

During the Spring 2023 semester, students were asked to use FossilSketch along with the in-person specimen-based laboratory activities. Specifically, students were asked to watch the educational videos, play each of the four mini-games, and identify at least three different Ostracoda and three different Foraminifera. After completing these activities, students were asked to select a microfossil, provide a labeled sketch of the specimen, identify its morphological features, and ultimately identify its genus.

§ 4.3 PARTICIPANTS

A total of 86 students, two TAs, and one instructor (who taught both courses) consented and took part in the study, of which 51 students represent the control group and 35 represent the test group. The instructor is an author on this paper. Before data collection and using the FossilSketch software, participants were given a quick overview of the project and signed consent forms (IRB2019-1218M, expiration date 02/05/2026).

§ 5 RESULTS

In both semesters we conducted surveys and focus groups with the students. We also conducted semi-structured interviews with the graduate TAs and the professor to get insights into their experience with FossilSketch. We discuss their feedback in the following subsections.

§ 5.1 STUDENT FEEDBACK

After using FossilSketch, students completed an engagement survey where they could give feedback about their experience, what they found effective, and what they found difficult. This survey contained open-ended questions regarding their expectations for the course, how they felt about the micropaleontology activities, and what strategies they employed to complete the coursework. To determine the impact of FossilSketch on student engagement and enjoyment, we conducted a deeper analysis of the responses to the question "Did you enjoy the micropaleontology activities in this class? Which ones? And what about them were enjoyable?". We coded the answers to this question as positive, neutral, or negative in tone, as students used this question either to describe things they enjoyed or to complain about things they did not. In the Spring 2020 semester, there were 25 answers to this question: 11 positive, 8 neutral, and 6 negative. In the Spring 2023 semester, there were 22 answers: 18 positive, 2 neutral, and 2 negative. A $\chi^2$ analysis showed that the distributions are statistically significantly different with $p < 0.05$. As the two main changes were the increase in positive responses and the decrease in neutral responses, we hypothesize that FossilSketch won over students who had less initial buy-in for learning about microfossils. There were students who were notably passionate and critical about learning the material in both groups, which can be expected in any course. Many students were being exposed to microfossils for the first time, so they had little sense of the utility of learning this material. With the traditional methods, some of these students left the unit lukewarm, saying that they did not hate the material but also did not enjoy it. By contrast, most students who used FossilSketch named specific features they liked the most, and several also described the traditional lab activities that FossilSketch augments. In short, FossilSketch was more effective in engaging students to learn about micropaleontology than traditional methods alone.
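
The test can be reproduced from the response counts reported above; a minimal sketch using scipy (the exact statistic depends on the test variant, which the paper does not specify):

```python
from scipy.stats import chi2_contingency

# Rows: Spring 2020, Spring 2023; columns: positive, neutral, negative.
counts = [[11, 8, 6],
          [18, 2, 2]]
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # p < 0.05
```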

Students also demonstrated engagement with FossilSketch through their usage patterns. Several students completed the genus identification exercises for additional practice, with a small number of students completing six times more exercises than required for the lab assignments. The majority of students also indicated that the genus identification exercise was their favorite activity in FossilSketch, because they enjoyed sketching and following step-by-step instructions. Ostracoda exercises were notably more popular, with half of the students completing extra identifications (students were required to complete 3 Foraminifera and 3 Ostracoda genus identifications), likely because they are easier to complete due to having fewer steps. A similar pattern arises in the mini-game play statistics: half of the students played the matching games additional times. The most difficult exercise, the assemblage exercise, was only occasionally played additional times, but this result is expected given its difficulty.

§ 5.2 TEACHING ASSISTANTS' FEEDBACK

We conducted semi-structured interviews with the teaching assistants (TAs) from the Spring 2020 and Spring 2023 courses to understand how FossilSketch impacted their experience.

§ 5.2.1 SPRING 2020

Overall, the TA was quite negative about the experience of teaching microfossils. The TA had to learn the material beforehand from the instructor in order to properly proctor the lab session: the instructor explained the answers to the lab questions and what to look for in the specimens to identify them, so that the TA could answer questions during the lab. This preparation is necessary because recognizing the different microfossils is challenging without experience. To that point, the TA noted that the students found the topic difficult to grasp:

"Challenging, some students were very confused. Some students were okay, but some found it really hard to understand, as compared to other groups [macrofossils]. Microfossils were definitely more difficult for them."

She went on to note that, given the difficulty students have in learning about microfossils, more time needs to be spent on teaching the subject. Learning the different species requires gaining familiarity with their unique features and attributes, which involves getting exposure to samples and practicing identifying them. Furthermore, fully understanding and committing these concepts to memory can require significant creativity.

"You have to be creative talking to students, like coming up with some non-traditional ways to remember morphology features, like: Uvigerina looks like a banana bunch, just imagine that. I used a lot of imagination when I was trying to grasp that."

§ 5.2.2 SPRING 2023

Overall, the TA was positive about her experience with FossilSketch being used as part of the lab assignments. She felt that students benefited from its use and that it sped up the process of learning about microfossils.

"I did have one student tell me that this was the least confusing lab out of all of them. I thought that was pretty amazing. So I think it is very good for a kind of helping me kind of an abstract idea into actually something tangible for people to understand."

Regarding her experience as a TA, she noted that using FossilSketch lightened her workload, as students asked fewer questions overall and she could rely on FossilSketch as a tool for answering some of the questions that did arise. FossilSketch provided a database of visual aids to look up as well as a medium for walking through the identification process.

"I think it made my work easier. People kind of just went off on their own, and they kind of worked through it on their own. [...] All I did was I put up the key that was in the corner of one of the mini-games. I just went up there and said like, "Look at this." So then they could actually figure it out from there. So that was really helpful."

She mentioned that the students did come to her with some bugs and issues with the software, but these did not detract meaningfully from the students' overall experience using FossilSketch. She noted that, as a graduate student herself, she could see herself using FossilSketch as a reference, and she felt that leaning into this idea of FossilSketch as a reference could make the website more broadly useful. For instance, she suggested adding a glossary of terms with images and examples for students to conveniently reference the basics.

§ 5.3 INSTRUCTOR FEEDBACK

We conducted a semi-structured interview with the professor who taught both the Spring 2020 and Spring 2023 courses to understand how FossilSketch impacted her workload and her teaching.

§ 5.3.1 SPRING 2020

When asked about the attitudes of students towards microfossils in this class, she noted that students were generally quite excited to learn about microfossils; however, there were several sticking points within the class. Students would become quite frustrated when looking at samples through a stereoscope, as they were being asked to view and analyze tiny objects that are inherently difficult to see and parse. Furthermore, students complained about having to sketch the microfossils, finding the task quite tedious.

"So even with a stereoscope, they're often relatively difficult to see, and so students get very frustrated because we're asking them to notice things and see things, and they're not able to zoom in enough. [...] They also can't turn it over and manipulate it, and that also is a frustration because there's certain anatomical parts of it that you could see best if you could turn it. [...] So I think students find themselves very frustrated and as that level of frustration rises, their ability to learn goes down, right?"

When asked if there was a difference in experience between ostracods and forams, she expressed that, because she is an expert in Foraminifera and is less personally interested in Ostracoda, she felt that this translated to how well students were learning about the two categories of microfossils. Not only did she teach the two categories differently, going into more detail on forams than ostracods, but she also expressed that she was likely better at making students comfortable with the former given her own comfort with the subject.

"I think students feel the same way about each of them [ostracods and foraminifera] because they're tiny, mysterious things, but I'm probably better at making them comfortable with forams just because of my position."

§ 5.3.2 SPRING 2023

To integrate FossilSketch into her classroom, the instructor replaced part of the lab assignments and the paper sketching assignments with FossilSketch exercises and games. FossilSketch was generally scheduled to be completed at the beginning of the lab session, although some students would complete the exercises before the lab in preparation. Students were excited to use a computer-based tool.

When asked how students responded to FossilSketch, she noted that students appreciated a number of its aspects. Students enjoyed being able to go back and do something over again, reviewing microfossils as many times as they needed before dealing with the physical specimens. They also appreciated being able to do their assignments and review the material from anywhere and at their own pace, rather than having to complete the tasks under the pressure and constraints of a physical lab and a given time limit. She also explicitly noted that while students would frequently complain about physically sketching microfossils, they did not complain about sketching the fossils using FossilSketch.

She noted that while students still complained about the assignments, their complaints shifted.

"So I think the difference is where their frustration points are. Before, all of their frustration points were focused on the microscope, and then with the introduction of FossilSketch, their frustration points get focused on the computer. But what I found interesting was their frustration with the microscope declined, so I still had students do things in class looking at the specimens. But I think because they had seen the specimens in another way, they felt more comfortable looking down the microscope."

She also commented that students' conceptions of how much they could learn changed due to the introduction of FossilSketch.

"So I've also found that the way that they think about how much they know changed. So like when they were doing the traditional teaching, I think they felt like they knew everything they could know, like the things that they didn't know were just not accessible to them, like the materials weren't good enough. [...] And now they've kind of - they shifted a little bit. To now, they feel like they don't know... I guess bigger things? So like instead of them feeling like they don't really know what a foram is, they feel like they're now focused more on: 'I don't know how to apply them'. [...] So I think they still have this feeling that - students always feel like 'I don't know anything, I don't know everything yet, I have to study more'. They always kind of have this feeling. But now that feeling has been transferred to kind of higher level ideas which is actually really useful."

When asked about the effect of FossilSketch on her own workload, she noted that initially, like any change to the curriculum, it required effort to develop the materials and figure out how to incorporate them into her specific use case; after that initial set-up effort, however, it was just as easy to incorporate into her classroom as the traditional lab assignments. She did note that FossilSketch made it easier for her to train the TA, as she could simply ask the TA to use FossilSketch. In that sense, she also felt that it lowered TA anxiety: the TA was not required to know as much of the material, as they could (and did) point students with questions to FossilSketch to get answers.

Finally, when asked whether she would want to utilize both FossilSketch and traditional approaches to teaching microfossils, she mentioned that FossilSketch had distinct advantages in specific scenarios that would lead her to use it exclusively, whereas in other scenarios she would want to rely more heavily on traditional approaches. If she were to teach microfossils in an online and/or remote course, she would use FossilSketch primarily. FossilSketch could also be used for students who need accommodations, such as those who cannot look into a stereoscope but can interact with a screen, or those who are unable to physically attend in-person labs. She noted that for students who are geology majors, she would want them to physically look at specimens under a stereoscope; however, for non-majors who will likely never look at specimens again, FossilSketch would offer them enough of the material without the frustration of looking through a stereoscope.

§ 6 CONCLUSION

FossilSketch is an intelligent tutoring system that supports learning micropaleontology in undergraduate geoscience classrooms. The tool teaches students how to recognize Foraminifera and Ostracoda microfossils using sketch-based exercises and mini-games to practice identifying these specimens. We evaluated the effectiveness of FossilSketch in the classroom from the perspective of instructors and students using qualitative and quantitative analysis. The results show that students respond better to FossilSketch and that the burden on instructors is reduced, resulting in a better classroom experience for all parties.

papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/AUa_CiMnZ9/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,273 @@

# FaceZip: Automatic Texture Compression for Facial Blendshapes

Category: Research

![01963e00-33cf-703c-93c5-e771b5219dc0_0_276_327_1372_451_0.jpg](images/01963e00-33cf-703c-93c5-e771b5219dc0_0_276_327_1372_451_0.jpg)

Figure 1: We present a method to compress the information of the texture that dynamically changes with the underlying blendshape mesh. Compared to the naïve method of storing all the textures on the blendshape mesh (left), our approach (right) significantly reduces the size with only a small loss of quality.

## Abstract

Recently, numerous cinematic and interactive entertainment production companies have adopted advanced capture systems for acquiring faithful facial geometries and their corresponding textures. However, animating these captured models in a controllable way for real-time applications is difficult. While blendshapes are typically used for parameterizing facial geometries, dynamically changing the texture of the geometry is challenging. Since texture data is significantly larger than the vertex coordinates of the meshes, storing the textures of all the blendshape meshes is impractical. We present a method to compress the texture data in a way compatible with blendshapes for real-time applications, such as video games. Our method takes advantage of the locality of the differences between facial textures by blending a few textures with spatially varying weights. Our method achieved more accurate reconstructions of the original textures compared to the baseline of principal component analysis.

Index Terms: Computing methodologies—Computer graphics—Image compression; Computing methodologies—Computer graphics—Texturing

## 1 INTRODUCTION

Recently, the demand for realistic digital human models has significantly increased in various applications such as cinema, interactive entertainment, and the metaverse. Photogrammetry is typically used to automatically capture the 3D geometries and materials of actors. However, a wide range of skilled artists in modeling, sculpting, texture painting, rigging, and animation is still necessary to bring life to the captured model. This difficulty largely originates from the parameterization of the acquired model: representing the surface material and deformation in an efficient and controllable way. The overall facial deformation is typically parameterized using the blendshape model, and the detailed surface appearance is represented in 2D textures. However, the coupling of the two - dynamically changing the texture according to the blendshape parameters - has been challenging. It has been difficult to represent details such as wrinkles, dimples, and furrows that are dynamically created with various expressions.

The blendshape is a popular facial animation technique because it can create complicated deformations through a simple linear combination of semantically meaningful (typically 50 to 100) blendshape meshes. Theoretically, the texture on the blendshape can also be represented using a linear combination of the textures on the blendshape meshes. However, storing all the textures of the blendshape meshes is not practical, as the memory requirement of the textures is significantly larger than that of the vertex coordinates of the blendshape meshes. Especially in real-time applications such as games, current hardware requires keeping the number of texture accesses at 10 or fewer.

A typical technique used in the gaming industry is to store a texture of a neutral face and several textures that collect the wrinkles of large expressions (e.g., [26]). These textures are blended non-uniformly using weight maps, i.e., spatially varying weights. The advantage of this approach is that the artist has control over all textures, allowing for authoring exaggerated facial expressions. On the other hand, manually creating such weight maps and wrinkle textures is time-consuming and requires a lot of skill.

We present a technique to automatically generate the compressed texture for the blendshape model (see Figure 1). Our algorithm leverages the locality of the wrinkles by dividing the entire texture into many fragments. These fragments are seamlessly stitched back into several textures while avoiding blur, using our optimized selection of the combination of fragments. Furthermore, we extend our model to the blendshapes generated by the example-based blendshape technique [13], which is a popular way to generate the blendshape meshes of Ekman's Facial Action Coding System (FACS) [6] from fewer example meshes.

We demonstrate our approach by comparison against the baseline of principal component analysis (PCA). The contributions of our proposed method include

- Automatic compression of textures for the application of facial blendshapes.

- The extension of the texture generation to blendshapes generated by the example-based blendshape technique [13].

## 2 RELATED WORK

The generation of facial models has been studied for many years; we refer to the surveys [18, 29] for a comprehensive review. This paper focuses on texture compression for the application of blendshapes in a real-time environment.

![01963e00-33cf-703c-93c5-e771b5219dc0_1_158_159_1478_366_0.jpg](images/01963e00-33cf-703c-93c5-e771b5219dc0_1_158_159_1478_366_0.jpg)

Figure 2: Overview of our method. For each texture, we first compute the difference from the neutral texture and apply a mask to decompose the texture into localized fragments. Then, the fragments from the same mask are split into four clusters. The fragments from different masks are reassembled into difference textures with a combination that reduces reconstruction error. In the run-time computation, the weight maps are blended with the blendshape weights. For the blendshapes generated by example-based rigging, we augment the weight maps.

Parametric Facial Deformation Skinning [11] is one of the simplest methods to animate faces by placing fictitious facial bones under the skin. Because it is difficult to faithfully reproduce facial deformation with bones, blendshape deformation [20] is often used instead for high-quality animation. The blendshape typically deforms the vertices of the meshes with a linear combination of the differences from the neutral mesh [12]. Neumann et al. [19] propose a method that extracts sparse and localized deformation modes from an animated mesh sequence, so that the extracted dimensions often have interpretable meaning. These parameterizations work well for the vertices of meshes but are often difficult to apply to textures because the amount of data is huge.

Expression Capture High-quality facial capture setups are increasingly common in the industry. These systems typically use the photometric stereo technique with polarizers to obtain albedo, specular, normal, and roughness textures from vertical and parallel polarization images [8]. Riviere et al. [22] avoid the sequential light flashing in estimating these textures by applying inverse rendering to cross- and parallel-polarized images. Zhang et al. [28] modified the Light Stage [4] for high-speed cameras and developed a mechanism to directly capture the animation sequence itself at video rate instead of discrete expressions. While expression capture requires multiple separate workflows, such as mesh reconstruction, fitting to a base mesh, and computation of each texture, Liu et al. [15] proposed a single end-to-end neural network acquisition framework.

FACS Blendshape Generation Productions typically use the meshes of FACS poses to create the blendshapes for facial animation. As FACS rigs are typically made up of 50 to over 100 independent meshes, there is a demand to build all FACS blendshape meshes from a limited number of captures. Li et al. [13] present a retargeting technique to synthesize blendshape meshes of FACS poses from a small number of expression captures. However, this work focuses on the facial geometry, not the texture. Li et al. [14] proposed a neural network to generate FACS expressions and their corresponding textures from a single neutral facial scan. However, the resulting textures are too expensive to be directly blended in real time.

Facial Material Representation Facial animation has been studied for many years, but only a few studies have focused on the compression of facial textures. The 3D morphable model (3DMM) represents a detailed 3D model with lower-dimensional parameters; we refer to the recent survey on morphable facial models by Egger et al. [5]. These parametric models can also decompose animation data and change facial expressions by changing the parameters. The pioneering work by Blanz et al. [3] parameterizes both the vertex coordinates and the RGB texture values using principal component analysis (PCA).

Recent studies using neural networks are still too expensive to evaluate in real time for high-resolution material generation. Machine learning approaches use convolutional neural networks (CNNs) [21, 24] to directly output meshes and textures, but they all need a large amount of training data and cannot be used for real-time purposes. Lombardi et al. [16] represented the facial data as a set of neural radiance field models instead of a mesh and textures. The performance of learning methods can be very poor if the real data differs greatly from the training data in age, skin color, etc.

Garrido et al. [7] and Shi et al. [25] represented small surface detail with shape-from-shading techniques, which generate highly detailed surface geometry dynamically according to the expressions. However, these approaches require extremely high computational complexity and are not suitable for real-time applications. Huang et al. [9] and Ma et al. [17] generate the detail of different expressions with detail maps while keeping the diffuse texture unchanged. However, the diffuse map typically changes as wrinkles appear depending on the expression.

## 3 METHODS

Texture Compression Let us assume we have $N$ RGB textures ${\mathcal{I}}_{n}$ where $n \in \{1, \ldots, N\}$ and one RGB "neutral texture" ${\mathcal{I}}_{0}$. Our algorithm computes four RGB "difference textures" ${\mathcal{D}}_{i}$ where $i \in \{1, \ldots, 4\}$ and weight maps ${\mathcal{W}}_{in} \in \mathbb{R}$. With these computed values, we efficiently approximate the $n$-th original texture ${\mathcal{I}}_{n}$ as

$$
{\mathcal{I}}_{n} \simeq {\overline{\mathcal{I}}}_{n} = {\mathcal{I}}_{0} + \sum_{i=1}^{4} {\mathcal{D}}_{i} \odot {\mathcal{W}}_{in} \tag{1}
$$

where the $\odot$ symbol stands for the Hadamard product, i.e., the pixel-wise product. Figure 3 visually explains the reconstruction formulation in (1). Note that we have four source textures because the weight maps can be efficiently stored in the RGBA texture space. The weight $\mathcal{W}$ is smoothly defined, thus it can be stored in a down-sampled image without significant loss of quality. In this paper, the down-sampling ratio is eight, so the size of a weight map is 64 times smaller than the original texture. With input texture size $W \times H$, the original textures require $N \times W \times H$ space, while the compressed model requires $(5 + N/64) \times W \times H$, resulting in a significantly smaller memory footprint, especially when $N$ is large.
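
As a back-of-the-envelope check of this estimate (a sketch assuming 1 byte per channel and ignoring packing details, so the numbers are approximate):

```python
# Raw storage: N RGB textures; compressed: neutral + 4 difference textures
# (RGB) plus 4 weight maps per blendshape at 1/8 resolution in each axis.
W = H = 1024
N = 52
raw = N * W * H * 3
compressed = 5 * W * H * 3 + N * 4 * (W // 8) * (H // 8)
print(raw / 1e6, compressed / 1e6, raw / compressed)
# ~163.6 MB vs ~19.1 MB, roughly an 8x reduction (cf. Table 1)
```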

This paper presents a method to compute the decomposition of the input textures in (1) for the application of facial blendshapes. Specifically, we generate textures for example-based facial rigging [13], where the input textures are given for a small number of example shapes. When the weights of the blendshape change, and thus the expression changes, our method outputs the texture with the corresponding fine detail, such as wrinkles.

![01963e00-33cf-703c-93c5-e771b5219dc0_2_156_150_712_559_0.jpg](images/01963e00-33cf-703c-93c5-e771b5219dc0_2_156_150_712_559_0.jpg)

Figure 3: Our texture compression approach. For each blendshape mesh, we have four corresponding weight maps. These weight maps are applied to the four difference textures. The summation of all the weighted difference textures and the neutral texture reconstructs the texture of the blendshape mesh.

We present an overview of our method in Figure 2. We first decompose the textures into fragments. Then, all the fragments are clustered into four groups. Finally, we find an optimal arrangement to store the fragments in the source textures and generate weight maps for every blendshape mesh. Section 3.1 explains this precomputation and Section 3.2 explains the following run-time computation.

### 3.1 Compression of Textures

The deformation of the entire face has rich variety, but if we focus on a specific location, the deformation can be approximated with several modes. For example, a forehead exhibits horizontal furrows when the character raises the eyebrows and vertical furrows when the character frowns. To take advantage of such locality, we first divide the input texture ${\mathcal{I}}_{n}$ into small fragments. A fragment is simply computed by applying the mask ${\mathcal{M}}_{k}$ to the input texture:

$$
{\widehat{\mathcal{D}}}_{nk} = \left( {\mathcal{I}}_{n} - {\mathcal{I}}_{0} \right) \odot {\mathcal{M}}_{k}, \tag{2}
$$

where $k \in \{1, \ldots, \#\text{mask}\}$ is the index of the mask. Note that we apply the mask to the difference from the neutral texture $\left( {\mathcal{I}}_{n} - {\mathcal{I}}_{0} \right)$ here.

The mask takes values in $\left\lbrack 0, 1 \right\rbrack$, where the value outside the fragment is zero. To make the seams less visible, the masks need to change their values smoothly in 3D space. Moreover, we define the masks such that they add up to one, $\sum_{k} {\mathcal{M}}_{k} = 1$.

Mask Computation We compute such smooth masks by solving the biharmonic equation (see Figure 4), inspired by the computation of rigging weights in [10]. For the input mesh of the head, we first manually extract the set of triangles that corresponds to the face. Then we randomly sample the vertices of the extracted triangles. To sample uniformly over the 3D mesh, we use Poisson disk sampling with the dart-throwing algorithm, rejecting samples within a 3 cm radius. The number of sampled vertices is the number of masks, #mask. In total, we sampled 53 vertices.

Let $\phi \in \mathbb{R}$ be the values defined on the vertices of the extracted face triangles. We solve the biharmonic equation by minimizing $\parallel \Delta \phi \parallel^{2}$ with the fixed boundary condition where $\phi = 1$ at one sampled vertex and $\phi = 0$ at the other sampled vertices. We use the combinatorial Laplacian on the 3D mesh to robustly compute the minimization using an iterative solver for a sparse linear system. Finally, the $k$-th mask is computed by normalizing the solution in the UV space: ${\mathcal{M}}_{k} = {\phi}_{k} / \sum_{k^{\prime}=1}^{\#\text{mask}} {\phi}_{k^{\prime}}$.
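
A minimal sketch of this solve with scipy, assuming a prebuilt combinatorial mesh Laplacian `L` and using a direct sparse solve for brevity (the paper uses an iterative solver):

```python
import numpy as np
import scipy.sparse.linalg as spla

def biharmonic_mask(L, samples, k):
    """Minimize ||L phi||^2 subject to phi = 1 at samples[k] and phi = 0
    at the other sampled vertices (Dirichlet constraints)."""
    n = L.shape[0]
    Q = (L.T @ L).tocsr()                    # bi-Laplacian energy matrix
    fixed = np.asarray(samples)
    free = np.setdiff1d(np.arange(n), fixed)
    phi = np.zeros(n)
    phi[samples[k]] = 1.0
    # Condense the quadratic form onto the free vertices and solve.
    rhs = -Q[free][:, fixed] @ phi[fixed]
    phi[free] = spla.spsolve(Q[free][:, free], rhs)
    return phi

# The final masks are the per-vertex solutions normalized so that the
# #mask values sum to one at every vertex (then rasterized into UV space).
```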

![01963e00-33cf-703c-93c5-e771b5219dc0_2_926_167_719_285_0.jpg](images/01963e00-33cf-703c-93c5-e771b5219dc0_2_926_167_719_285_0.jpg)

Figure 4: Our mask computation approach. Given an input mesh (a), we first manually select a region of interest (b). Then, vertices are uniformly sampled over the selected region (c). Finally, the biharmonic equation is solved on the mesh while fixing the value at one of the sampled vertices to one and the others to zero (d).

Clustering The fragments of the input textures have several dominant modes. We extract such modes using the K-means clustering method with four clusters. Since K-means clustering minimizes the variance inside each cluster, it selects the four most representative fragments among all fragments with the same mask. For each mask $k$, we record the four cluster centers ${\widehat{\mathcal{D}}}_{ik}^{\prime}$ where $i \in \{1, \ldots, 4\}$.
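
A minimal sketch of this step with scikit-learn (which the paper's precomputation uses); the array shapes are our assumption:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_fragments(fragments, n_clusters=4):
    """Cluster the per-mask difference fragments and return the cluster
    centers as the four representative fragments, plus the labels that
    assign each blendshape's fragment to a center (the alpha_ink of Eq. 6).
    `fragments` is assumed to have shape (N, H, W, 3)."""
    X = fragments.reshape(len(fragments), -1)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    centers = km.cluster_centers_.reshape((n_clusters,) + fragments.shape[1:])
    return centers, km.labels_
```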

Difference Texture Generation We now have four fragments for every mask, and we stitch these fragments back into four full textures. Naïvely adding up the fragments with the same texture index $i$ would blur the texture around the borders of the fragments. Hence, we optimize the combination of the fragments for each texture such that adjacent fragments match. Let $\sigma(i, k)$ be a permutation of index $i$ for each mask $k$. We define the cost of a certain permutation index $\sigma$ as

$$
C(\sigma) = \sum_{\{k_1, k_2\}=1}^{\#\text{mask}} \sum_{i=1}^{4} {\begin{Vmatrix} {\widehat{\mathcal{D}}}_{ik_1k_2}^{\prime\prime}(\sigma) - {\widehat{\mathcal{D}}}_{ik_2k_1}^{\prime\prime}(\sigma) \end{Vmatrix}}^{2}, \tag{3}
$$

$$
\text{where } {\widehat{\mathcal{D}}}_{ik_1k_2}^{\prime\prime}(\sigma) = {\widehat{\mathcal{D}}}_{\sigma(i,k_1)k_1}^{\prime} \odot {\mathcal{M}}_{k_2}. \tag{4}
$$

Note that in (4), we apply the two masks ${\mathcal{M}}_{k_1} \odot {\mathcal{M}}_{k_2}$ to the difference texture (see the definition of the fragment in (2)). The Hadamard product of the two masks takes a non-zero positive value only around the intersection of the masks $k_1$ and $k_2$.

The number of possible permutation indexes $\sigma$ is finite, but it is too large for an exhaustive search (i.e., $(4!)^{\#\text{mask}}$). Thus, we present a method that iteratively reduces the cost to find an approximate minimizer. In each iteration, we visit the masks one by one. For each mask, we evaluate all $4!$ possible index permutations for that mask and update the index to the minimizer while the indexes of the other masks are fixed. By iterating this procedure 10 times, we reach convergence (see Figure 7-left). We repeat this procedure 100 times with random initializations and record the permutation with the smallest cost.
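
A minimal sketch of this coordinate-descent search, assuming a `pair_cost` callback that evaluates the seam term of Eq. (3) restricted to two masks (the callback and names are ours, not the paper's implementation):

```python
import itertools
import random

PERMS = list(itertools.permutations(range(4)))  # all 4! = 24 orderings

def optimize_sigma(pair_cost, n_masks, n_sweeps=10):
    """Greedy coordinate descent: visit each mask in turn and pick the
    permutation that minimizes its seam cost against all other masks."""
    sigma = [random.choice(PERMS) for _ in range(n_masks)]
    for _ in range(n_sweeps):
        for k in range(n_masks):
            sigma[k] = min(PERMS, key=lambda p: sum(
                pair_cost(k, p, k2, sigma[k2])
                for k2 in range(n_masks) if k2 != k))
    return sigma

# As in the paper, run with ~100 random initializations and keep the
# sigma with the smallest total cost C(sigma).
```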

Finally, we compute the four difference textures and the weights as

$$
{\mathcal{D}}_{i} = \sum_{k=1}^{\#\text{mask}} {\widehat{\mathcal{D}}}_{\sigma(i,k)k}^{\prime} \tag{5}
$$

$$
{\mathcal{W}}_{in} = \sum_{k=1}^{\#\text{mask}} {\alpha}_{ink} {\mathcal{M}}_{k} \tag{6}
$$

where ${\alpha}_{ink}$ takes the value 1 if the fragment ${\widehat{\mathcal{D}}}_{nk}$ belongs to the ${\sigma}^{-1}(i, k)$-th cluster and the value 0 otherwise.

### 3.2 Texture Generation for Blendshapes

In this section, we describe texture generation for blendshapes using our texture compression technique. The blendshape is often used to make a controllable face. Suppose we have a list of vertices ${V}_{0}$ for the mesh of the neutral face and vertices ${V}_{n}$ for the example face meshes; the blendshape computes the vertices from the parameters ${\beta}_{n} \in \left\lbrack 0, 1 \right\rbrack$ where $n \in \{1, \ldots, N\}$ as

$$
V(\mathbf{\beta}) = {V}_{0} + \sum_{n=1}^{N} {\beta}_{n} \left( {V}_{n} - {V}_{0} \right) \tag{7}
$$

Note that the blendshape formulation in (7) is specifically called the delta blendshape [12] (see Figure 5-left).

![01963e00-33cf-703c-93c5-e771b5219dc0_3_156_970_710_348_0.jpg](images/01963e00-33cf-703c-93c5-e771b5219dc0_3_156_970_710_348_0.jpg)

Figure 5: We blend difference textures with the same weights as the blendshape weights.

With the texture ${\mathcal{I}}_{0}$ that corresponds to the neutral mesh and the textures ${\mathcal{I}}_{n}$ for the other example meshes, our compressed texture model computes the texture for the list of blendshape parameters $\mathbf{\beta}$ as

$$
\mathcal{I}(\mathbf{\beta}) = {\mathcal{I}}_{0} + \sum_{i=1}^{4} {\mathcal{D}}_{i} \odot \left( \sum_{n=1}^{N} {\beta}_{n} {\mathcal{W}}_{in} \right). \tag{8}
$$

Note that in (8), we blend the texture with the ratio $\mathbf{\beta}$ in a manner similar to the blending of the vertex positions in (7) (see Figure 5). By blending the weight maps $\mathcal{W}$, which are actually stored at a lower resolution, we can reduce the amount of computation.
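
A minimal numpy sketch of the run-time synthesis in (8), making the order of operations explicit: blend the low-resolution weight maps first, then upsample and apply one Hadamard product per difference texture (`upscale` is an assumed bilinear-resize helper):

```python
import numpy as np

def synthesize_texture(I0, D, W_low, beta, upscale):
    """Eq. (8). I0: (H, W, 3) neutral texture; D: (4, H, W, 3) difference
    textures; W_low: (4, N, h, w) low-res weight maps; beta: (N,)."""
    out = I0.astype(np.float32).copy()
    for i in range(4):
        w = np.tensordot(beta, W_low[i], axes=(0, 0))   # blended (h, w) map
        out += D[i] * upscale(w)[..., None]             # one product per texture
    return out
```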

Texture for Example-based Facial Rigging We extend our texture compression for blendshapes in (8) to the blendshapes produced by example-based facial rigging [13]. Example-based facial rigging generates blendshape meshes with new vertex positions ${V}_{m}^{*}$ where $m \in \{1, \ldots, M\}$ from the training meshes ${V}_{n}$ where $n \in \{1, \ldots, N\}$. We aim to synthesize the texture for the generated blendshapes ${V}_{m}^{*}$ when the textures ${\mathcal{I}}_{n}$ are given for the training meshes ${V}_{n}$. We first compute the weight maps ${\mathcal{W}}_{in}$ and the difference textures ${\mathcal{D}}_{i}$ for the example meshes.

We refer to the original paper [13] for the details of example-based facial rigging. Its basic idea is to optimize the generated blendshape meshes ${V}_{m}^{*}$ such that the training meshes ${V}_{n}$ can be reconstructed as

$$
{V}_{n} \simeq {V}_{0} + \sum_{m=1}^{M} {\bar{\beta}}_{nm} \left( {V}_{m}^{*} - {V}_{0} \right), \tag{9}
$$

where ${\bar{\beta}}_{nm} \in \mathbb{R}$ are the blending parameters optimized by the example-based facial rigging algorithm.

We synthesize the textures for the generated blendshapes such that they reproduce the original compressed textures for the training meshes. This can be done by choosing the weight maps for the generated blendshapes ${\mathcal{W}}_{im}^{*}$ to satisfy

$$
{\mathcal{W}}_{in} = \sum_{m=1}^{M} {\bar{\beta}}_{nm} {\mathcal{W}}_{im}^{*}. \tag{10}
$$

Unfortunately, equation (10) alone cannot specify the weight maps ${\mathcal{W}}_{im}^{*}$, since the equations are underdetermined (i.e., $N < M$). Our observation is that the weight maps should be sparse to avoid blending many weights. Hence, we minimize the regularizer ${\begin{Vmatrix} \sum_{m=1}^{M} {\mathcal{W}}_{im}^{*} \end{Vmatrix}}^{2}$ to make the weight maps as small as possible while satisfying the constraint (10). This results in the weight maps

$$
{\mathcal{W}}_{im}^{*} = \sum_{n=1}^{N} {B}_{mn}^{+} {\mathcal{W}}_{in} \tag{11}
$$

where ${B}^{+} = {B}^{T}{\left( B{B}^{T} \right)}^{-1}$ is the pseudo-inverse of the matrix $B = {\left\lbrack {\bar{\beta}}_{nm} \right\rbrack}_{N \times M}$. Finally, the texture for the example-based facial rigging is synthesized as

$$
\mathcal{I}({\mathbf{\beta}}^{*}) = {\mathcal{I}}_{0} + \sum_{i=1}^{4} {\mathcal{D}}_{i} \odot \left( \sum_{m=1}^{M} {\beta}_{m}^{*} {\mathcal{W}}_{im}^{*} \right), \tag{12}
$$

where ${\mathbf{\beta}}^{*}$ is the set of coefficients for the example-based blendshape.
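
Equation (11) is a single pseudo-inverse applied across all weight-map pixels; a minimal numpy sketch (the array shapes are our assumption):

```python
import numpy as np

def transfer_weight_maps(W_train, B):
    """Eq. (11): map weight maps from the N training expressions onto the
    M generated blendshape meshes. W_train: (4, N, h, w); B: (N, M)."""
    B_pinv = np.linalg.pinv(B)  # (M, N); equals B^T (B B^T)^-1 when B has full row rank
    return np.einsum('mn,inhw->imhw', B_pinv, W_train)  # (4, M, h, w)
```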

## 4 RESULTS

Evaluation Data We evaluate our algorithm using high-resolution multi-view photos of 20 expressions purchased from the website [1]. For each expression, we reconstruct the 3D mesh using the multi-view stereo software Metashape [2]. Since these models are not represented by a consistent mesh, we fit a base mesh to them using the commercial software Wrap [23]. The resulting mesh has roughly 52k triangles.

In the supplementary video, we demonstrate our real-time dynamically changing texture for the blendshape implemented in Unity. All the weight maps are combined into one big RGBA texture, and a simple shader program blends these textures on the GPU fully in parallel. Hence, the cost of texture synthesis in (8) is negligible. For the example-based facial rigging [13], we use the implementation in FaceScape [27] to generate a blendshape of 52 meshes that is compatible with Apple's ARKit.
176
+
177
+ Performance Table 1 lists the information on computation time for offline precomputation and run-time memory consumption These performance numbers are measured on a machine with Intel ${12600}\mathrm{\;K}$ and Windows 11 OS. The pre-computation is implemented in Python language. Specifically, we use Scikit-learn library for K-means clustering.
178
+
179
+ For the example-based blendshape of 52 meshes, our method uses a similar amount of memory to the PCA baseline. Note that our method requires slightly more memory because we need to store the weight maps. Compared to the naïve approach of storing all the textures for the blendshape meshes, our algorithm works with about 8 times less memory, which agrees with the estimate in Section 3.
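+
+ As a sanity check of this ratio, the memory model from Section 3 (one neutral texture, four difference textures, and 64x-smaller weight maps) can be evaluated directly; the snippet below is only a back-of-the-envelope estimate.
+
+ ```python
+ # Storing all N textures costs N*W*H; ours costs (5 + N/64)*W*H (Section 3).
+ N = 52
+ print(f"estimated ratio: {N / (5 + N / 64):.2f}x")  # ~8.9x; Table 1 measures 654.31/76.28 ~ 8.6x
+ ```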
180
+
183
+ | Resolution | Clustering (s) | Permutation (s) | Ours (MB) | PCA (MB) | Raw data (MB) |
+ |---|---|---|---|---|---|
+ | 512 × 512 | 113 | 989 | 4.77 | 3.93 | 40.89 |
+ | 1024 × 1024 | 347 | 3447 | 19.07 | 15.72 | 163.58 |
+ | 2048 × 2048 | 1714 | 14502 | 76.28 | 62.91 | 654.31 |
184
+
185
+ Table 1: Timing of offline computation and comparison of memory consumption for different methods, measured at different texture resolutions for the example-based blendshape (blending 53 meshes). "Clustering" is the time for K-means clustering of all the fragments; "Permutation" is the time to find a permutation with a small cost value.
186
+
187
+ Texture Quality To evaluate our texture compression performance, we compare Root Mean Square Error (RMSE) reconstruction errors against compression using PCA, which is often used for texture compression of blendshapes [3]. We illustrate the comparison in Figure 6. The error of PCA is very large in areas such as the eyebrows and lips, where there are many fine details. As an ablation study, we also compare against our compression without permutation optimization. We observe that our approach without permutation is generally better than the naïve PCA approach, and our full approach with weight maps and permutation optimization consistently produces the smallest errors of the three.
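+
+ For reference, the RMSE used in this comparison is the standard per-pixel root mean square error; a minimal helper (our own, with assumed array shapes) is:
+
+ ```python
+ import numpy as np
+
+ def rmse(reconstructed: np.ndarray, original: np.ndarray) -> float:
+     """Root mean square error over all pixels and color channels."""
+     return float(np.sqrt(np.mean((reconstructed - original) ** 2)))
+ ```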
188
+
189
+ ![01963e00-33cf-703c-93c5-e771b5219dc0_4_165_937_701_809_0.jpg](images/01963e00-33cf-703c-93c5-e771b5219dc0_4_165_937_701_809_0.jpg)
190
+
191
+ Figure 6: Reconstruction error of 19 blendshape textures computed by three different methods. Our method achieves the lowest error of the three in all the texture reconstructions.
192
+
193
+ Choice of the Number of Masks Figure 7 shows the statistics of errors with respect to different numbers of masks. We record the distribution of the RMSE reconstruction error for 1000 different random initializations. We observe that the global minimum decreases as #mask goes up. However, if #mask is too large, reaching the global minimum becomes harder. Generally, the global minimum can be reached within 20 tries when #mask is 5, but 1000 tries are not enough when #mask is 63. We choose #mask = 53 as it balances reconstruction accuracy against the difficulty of finding the overall minimum. While our permutation optimization reduces the error considerably, an efficient algorithm to find a near-optimal permutation remains future work.
194
+
195
+ ![01963e00-33cf-703c-93c5-e771b5219dc0_4_924_735_718_303_0.jpg](images/01963e00-33cf-703c-93c5-e771b5219dc0_4_924_735_718_303_0.jpg)
196
+
197
+ Figure 7: Left: Convergence of the cost function for permutation optimization for 100 different initializations. Right: Box-and-whisker plot of the sum of the reconstruction errors when we change the number of masks and run the permutation optimization with random initialization 1000 times.
198
+
199
+ Textures for Example-based Blendshapes In Figure 8, we compare the quality of the textures generated for the example-based blendshape (blending 52 meshes). As a baseline, we compute the texture for one of the example-based meshes by the weighted sum of the difference textures ${\bar{I}}_{m} = {I}_{0} + \mathop{\sum }\limits_{{n = 1}}^{N}{B}_{mn}^{ + }\left( {{I}_{n} - {I}_{0}}\right)$ . Here we choose the 6-th blendshape (i.e., $m = 6$ ) of the ARKit. Since each pixel is synthesized independently, the texture of the baseline model is full of noise. Moreover, unnatural wrinkles appear because the baseline model does not use smooth weight maps. Our method achieves better results with significantly smaller memory consumption, as shown in Table 1.
200
+
201
+ ## 5 CONCLUSION AND FUTURE WORK
202
+
203
+ Conclusion We present a method that automatically compresses blendshape textures into one neutral texture and four difference textures. The difference textures are then combined using spatially non-uniform weight maps, which are smooth and stored at a low resolution. In this way, our method significantly reduces memory consumption.
204
+
205
+ By combining the high-resolution difference textures and the low-resolution weight maps, our method provides better-localized details than global compression methods such as PCA. The smoothness of the weight maps also prevents artifacts from extreme values and noise when blending the weights in the blendshape model. Our method does not need any prior knowledge about the texture image, which is often required by data-driven methods, and is fully compatible with blendshapes for real-time applications.
206
+
207
+ ![01963e00-33cf-703c-93c5-e771b5219dc0_5_189_147_639_576_0.jpg](images/01963e00-33cf-703c-93c5-e771b5219dc0_5_189_147_639_576_0.jpg)
208
+
209
+ Figure 8: Reconstruction of the texture on one of the example-based blendshape meshes. The naïve approach blends the example textures, while our approach blends the weight maps. The naïve approach results in high-frequency noise and false wrinkles in white.
210
+
211
+ Future Work Currently, our work only compresses the albedo textures, ignoring other textures such as normal maps, specular maps, and roughness maps. Although extending our compression method to other textures is straightforward, there is potential to further compress the set of textures by leveraging the correlation between them.
212
+
213
+ We are also interested in applying our method to data other than facial textures. In particular, we are interested in efficiently representing garment wrinkles in texture space.
214
+
215
+ ## REFERENCES
216
+
217
+ [1] Triplegangers. https://triplegangers.com/. Accessed: March 30th, 2023.
218
+
219
+ [2] Agisoft. Metashape. https://www.agisoft.com/. Accessed: March 30th, 2023.
220
+
221
+ [3] V. Blanz and T. Vetter. A morphable model for the synthesis of 3D faces. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '99, p. 187-194. ACM Press/Addison-Wesley Publishing Co., USA, 1999.
222
+
223
+ [4] P. Debevec, T. Hawkins, C. Tchou, H.-P. Duiker, W. Sarokin, and M. Sagar. Acquiring the reflectance field of a human face. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '00, p. 145-156. ACM Press/Addison-Wesley Publishing Co., USA, 2000. doi: 10.1145/344779.344855
224
+
225
+ [5] B. Egger, W. A. P. Smith, A. Tewari, S. Wuhrer, M. Zollhoefer, T. Beeler, F. Bernard, T. Bolkart, A. Kortylewski, S. Romdhani, C. Theobalt, V. Blanz, and T. Vetter. 3D morphable face models-past, present, and future. ACM Trans. Graph., 39(5), jun 2020.
226
+
227
+ [6] P. Ekman and W. V. Friesen. Facial action coding system. Environmental Psychology & Nonverbal Behavior, 1978.
228
+
229
+ [7] P. Garrido, L. Valgaert, C. Wu, and C. Theobalt. Reconstructing detailed dynamic face geometry from monocular video. ACM Trans. Graph., 32(6), nov 2013. doi: 10.1145/2508363.2508380
230
+
231
+ [8] A. Ghosh, G. Fyffe, B. Tunwattanapong, J. Busch, X. Yu, and P. Debevec. Multiview face capture using polarized spherical gradient illumination. In Proceedings of the 2011 SIGGRAPH Asia Conference, SA '11. Association for Computing Machinery, New York, NY, USA, 2011.
232
+
233
+ [9] H. Huang, K. Yin, L. Zhao, Y. Qi, Y. Yu, and X. Tong. Detail-preserving controllable deformation from sparse examples. IEEE Transactions on Visualization and Computer Graphics, 18(8):1215-1227, 2012. doi: 10.1109/TVCG.2012.88
234
+
235
+ [10] A. Jacobson, I. Baran, J. Popović, and O. Sorkine. Bounded biharmonic weights for real-time deformation. ACM Trans. Graph., 30(4), jul 2011. doi: 10.1145/2010324.1964973
236
+
237
+ [11] A. Jacobson, Z. Deng, L. Kavan, and J. P. Lewis. Skinning: Real-time shape deformation (full text not available). In ACM SIGGRAPH 2014 Courses, SIGGRAPH '14. Association for Computing Machinery, New York, NY, USA, 2014.
238
+
239
+ [12] J. P. Lewis, K. Anjyo, T. Rhee, M. Zhang, F. Pighin, and Z. Deng. Practice and Theory of Blendshape Facial Models. In S. Lefebvre and M. Spagnuolo, eds., Eurographics 2014 - State of the Art Reports. The Eurographics Association, 2014. doi: 10.2312/egst.20141042
240
+
241
+ [13] H. Li, T. Weise, and M. Pauly. Example-based facial rigging. In ACM SIGGRAPH 2010 Papers, SIGGRAPH '10. Association for Computing Machinery, New York, NY, USA, 2010.
242
+
243
+ [14] J. Li, Z. Kuang, Y. Zhao, M. He, K. Bladin, and H. Li. Dynamic facial asset and rig generation from a single scan. ACM Trans. Graph., 39(6), nov 2020. doi: 10.1145/3414685.3417817
244
+
245
+ [15] S. Liu, Y. Cai, H. Chen, Y. Zhou, and Y. Zhao. Rapid face asset acquisition with recurrent feature alignment. ACM Trans. Graph., 41(6), nov 2022. doi: 10.1145/3550454.3555509
246
+
247
+ [16] S. Lombardi, T. Simon, G. Schwartz, M. Zollhoefer, Y. Sheikh, and J. Saragih. Mixture of volumetric primitives for efficient neural rendering. ACM Trans. Graph., 40(4), jul 2021.
248
+
249
+ [17] W.-C. Ma, A. Jones, J.-Y. Chiang, T. Hawkins, S. Frederiksen, P. Peers, M. Vukovic, M. Ouhyoung, and P. Debevec. Facial performance synthesis using deformation-driven polynomial displacement maps. In ACM SIGGRAPH Asia 2008 Papers, SIGGRAPH Asia '08. Association for Computing Machinery, New York, NY, USA, 2008. doi: 10.1145/1457515.1409074
250
+
251
+ [18] A. Morales, G. Piella, and F. M. Sukno. Survey on 3D face reconstruction from uncalibrated images. Computer Science Review, 40:100400, 2021. doi: 10.1016/j.cosrev.2021.100400
252
+
253
+ [19] T. Neumann, K. Varanasi, S. Wenger, M. Wacker, M. Magnor, and C. Theobalt. Sparse localized deformation components. ACM Trans. Graph., 32(6), nov 2013.
254
+
255
+ [20] F. I. Parke and K. Waters. Computer Facial Animation. A K Peters/CRC Press, hardcover ed., 9 2008.
256
+
257
+ [21] E. Richardson, M. Sela, and R. Kimmel. 3D face reconstruction by learning from synthetic data. 09 2016.
258
+
259
+ [22] J. Riviere, P. Gotardo, D. Bradley, A. Ghosh, and T. Beeler. Single-shot high-quality facial geometry and skin appearance capture. ACM Trans. Graph., 39(4), aug 2020.
260
+
261
+ [23] Russian3DScanner. Wrap. https://www.russian3dscanner.com/. Accessed: March 30th, 2023.
262
+
263
+ [24] M. Sela, E. Richardson, and R. Kimmel. Unrestricted facial geometry reconstruction using image-to-image translation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 1585-1594, 10 2017. doi: 10.1109/ICCV.2017.175
264
+
265
+ [25] F. Shi, H.-T. Wu, X. Tong, and J. Chai. Automatic acquisition of high-fidelity facial performances using monocular videos. ACM Trans. Graph., 33(6), nov 2014. doi: 10.1145/2661229.2661290
266
+
267
+ [26] A. Spring. FACS rigging & texture blending, 2020. https://adamspring.co.uk/2020/05/25/facs-rigging-texture-blending-digital-humans/ [Accessed: March 30th, 2023].
268
+
269
+ [27] H. Yang, H. Zhu, Y. Wang, M. Huang, Q. Shen, R. Yang, and X. Cao. FaceScape: A large-scale high quality 3D face dataset and detailed riggable 3D face prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
270
+
271
+ [28] L. Zhang, C. Zeng, Q. Zhang, H. Lin, R. Cao, W. Yang, L. Xu, and J. Yu. Video-driven neural physically-based facial asset for production, 2022.
272
+
273
+ [29] M. Zollhöfer, J. Thies, P. Garrido, D. Bradley, T. Beeler, P. Pérez, M. Stamminger, M. Nießner, and C. Theobalt. State of the art on monocular 3D face reconstruction, tracking, and applications. Computer Graphics Forum, 37(2):523-550, 2018. doi: 10.1111/cgf.13382
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/AUa_CiMnZ9/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,229 @@
1
+ FaceZip: Automatic Texture Compression for Facial Blendshapes
2
+
3
+ Category: Research
4
+
5
+ < g r a p h i c s >
6
+
7
+ Figure 1: We present a method to compress the information of the texture that dynamically changes with the underlying blendshape mesh. Compared to the naïve method of storing all the textures on the blendshape mesh (left), our approach (right) significantly reduces the size with only a small loss of quality.
8
+
9
+ § ABSTRACT
10
+
11
+ Recently, numerous cinematic and interactive entertainment production companies have adopted advanced capture systems for acquiring faithful facial geometries and their corresponding textures. However, animating these captured models in a controllable way for real-time applications is difficult. While blendshapes are typically used for parameterizing facial geometries, dynamically changing the texture of the geometry is challenging. Since texture data is significantly larger than the vertex coordinates of the meshes, storing the textures of all the blendshape meshes is impractical. We present a method to compress the texture data in a way compatible with blendshapes for real-time applications such as video games. Our method takes advantage of the locality of the differences between facial textures by blending a few textures with spatially varying weights. Our method achieves more accurate reconstructions of the original textures than a principal component analysis (PCA) baseline.
12
+
13
+ Index Terms: Computing methodologies-Computer graphics-Image Compression; Computing methodologies-Computer graphics—Texturing
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ Recently, the demand has significantly increased for realistic digital human models in various applications such as cinema, interactive entertainment, and the metaverse. Photogrammetry is typically used to automatically capture the 3D geometries and materials of actors. However, a wide range of skilled artists in modeling, sculpting, texture painting, rigging, and animation is still necessary to bring life to the captured model. This difficulty largely originates from the parameterization of the acquired model to represent the surface material and deformation in an efficient and controllable way. The overall facial deformation is typically parameterized using the blendshape model, and the detailed surface appearance is represented in 2D textures. However, coupling the two - dynamically changing the texture according to the blendshape parameters - has been challenging. It has been difficult to represent details such as wrinkles, dimples, and furrows that are dynamically created with various expressions.
18
+
19
+ The blendshape is a popular facial animation technique because it can create complicated deformations through a simple linear combination of semantically meaningful (typically 50 to 100) blendshape meshes. Theoretically, the texture on the blendshape can also be represented using a linear combination of the textures on the blendshape meshes. However, storing all the textures of the blendshape meshes is not practical, as the memory requirement of the textures is significantly larger than that of the vertex coordinates of the blendshape meshes. Especially in real-time applications such as games, current hardware requires keeping the number of texture accesses to 10 or fewer.
20
+
21
+ A typical technique used in the gaming industry is to store a texture of a neutral face and several textures that collect the wrinkles of large expressions (e.g., [26]). These textures are blended non-uniformly using weight maps, i.e., spatially varying weights. The advantage of this approach is that the artist has control over all textures, allowing for authoring exaggerated facial expressions. On the other hand, it is time-consuming and requires a lot of skill to manually create such weight maps and wrinkle textures.
22
+
23
+ We present a technique to automatically generate the compressed texture for the blendshape model (see Figure 1). Our algorithm leverages the locality of the wrinkles by dividing the entire texture into many fragments. These fragments are seamlessly stitched back into several textures while avoiding blur using our optimized selection of the combination of the fragments. Furthermore, we extend our model to the blendshapes generated by the example-based facial rigging technique [13], which is a popular way to generate the blendshape meshes of Ekman's Facial Action Coding System (FACS) [6] from fewer example meshes.
24
+
25
+ We demonstrate our approach by comparison against a principal component analysis (PCA) baseline. The contributions of our proposed method include
26
+
27
+ * Automatic compression of textures for the application of facial blendshape.
28
+
29
+ * The extension of the texture generation to blendshapes generated by the example-based rigging technique [13].
30
+
31
+ § 2 RELATED WORK
32
+
33
+ The generation of facial models has been studied for many years. We refer to the surveys [18, 29] for a comprehensive review. This paper focuses on texture compression for the application of blendshapes in a real-time environment.
34
+
35
+ < g r a p h i c s >
36
+
37
+ Figure 2: Overview of our method. For each texture, we first compute the difference from the neutral texture and apply a mask to decompose the texture into localized fragments. Then, the fragments from the same mask are split into four clusters. The fragments from different masks are reassembled into difference textures with a combination that reduces reconstruction error. In the run-time computation, the weight maps are blended with the blendshape weights. For the blendshape generated by the example-based rigging, we augment the weight map.
38
+
39
+ Parametric Facial Deformation Skinning [11] is one of the simplest methods to animate faces by placing fictitious facial bones under the skin. Because it is difficult to faithfully reproduce facial deformation with bones, blendshape deformation [20] is often used instead for high-quality animation. The blendshape typically deforms the vertices of the meshes with a linear combination of the differences from the neutral mesh [12]. Neumann et al. [19] propose a method that extracts sparse and localized deformation modes from an animated mesh sequence so that the extracted dimensions often have interpretable meaning. These parameterizations work well for mesh vertices but are difficult to apply to textures because the amount of data is huge.
40
+
41
+ Expression Capture High-quality facial capture setups are increasingly common in the industry. These systems typically use the photometric stereo technique with polarizers to obtain albedo, specular, normal, and roughness textures from vertical and parallel polarization images [8]. Riviere et al. [22] avoid the sequential light flashing in estimating these textures by applying inverse rendering to cross- and parallel-polarized images. Zhang et al. [28] modified the Light Stage [4] for high-speed cameras and developed a mechanism to directly capture the animation sequence itself at video rate instead of discrete expressions. While expression capture usually requires multiple separate workflows such as mesh reconstruction, fitting to a base mesh, and computation of each texture, Liu et al. [15] proposed a single end-to-end neural network acquisition framework.
42
+
43
+ FACS Blendshape Generation Productions typically use the meshes of FACS poses to create the blendshapes for facial animation. As FACS is typically made up of 50 to over 100 independent meshes, there is a demand to build all FACS blendshape meshes from a limited number of captures. Li et al. [13] present a retargeting technique to synthesize blendshape meshes of FACS poses from a small number of expression captures. However, this work focuses on the facial geometry, not the texture. Li et al. [14] proposed a neural network to generate FACS expressions and their corresponding textures from a single neutral facial scan. However, the resulting textures are too expensive to be directly blended in real time.
44
+
45
+ Facial Material Representation Facial animation has been studied for many years, but only a few studies have focused on the compression of facial textures. The 3D morphable model (3DMM) represents a detailed 3D model with low-dimensional parameters. We refer to the recent survey on morphable facial models by Egger et al. [5]. These parametric models can also decompose animation data and change the facial expression by changing the parameters. The pioneering work by Blanz and Vetter [3] parameterizes both the vertex coordinates and the RGB texture values using principal component analysis (PCA).
46
+
47
+ Recent studies using neural networks are still too expensive to evaluate in real time for high-resolution material generation. Machine learning approaches use convolutional neural networks (CNNs) [21, 24] to directly output meshes and textures, but they all need a large amount of training data and cannot be used for real-time purposes. Lombardi et al. [16] represented the facial data as a set of neural radiance field models instead of a mesh and textures. The performance of a learning method can be very poor if the real data is greatly different from the training data in age, skin color, etc.
48
+
49
+ Garrido et al. [7] and Shi et al. [25] represented small surface details by shape-from-shading techniques, which generate highly detailed surface geometry dynamically according to the expression. However, these approaches have extremely high computational complexity and are not suitable for real-time applications. Huang et al. [9] and Ma et al. [17] generate the details of different expressions with detail maps while keeping the diffuse texture unchanged. However, the diffuse map typically changes as wrinkles appear depending on the expression.
50
+
51
+ § 3 METHODS
52
+
53
+ Texture Compression Let us assume we have $N$ RGB textures ${\mathcal{I}}_{n}$ where $n \in \{ 1,\ldots ,N\}$ and one RGB "neutral texture" ${\mathcal{I}}_{0}$ . Our algorithm computes four RGB "difference textures" ${\mathcal{D}}_{i}$ where $i \in \{ 1,\ldots ,4\}$ and weight maps ${\mathcal{W}}_{in} \in \mathbb{R}$ . With these computed values, we efficiently approximate the $n$ -th original texture ${\mathcal{I}}_{n}$ as
54
+
55
+ $$
56
+ {\mathcal{I}}_{n} \simeq {\overline{\mathcal{I}}}_{n} = {\mathcal{I}}_{0} + \mathop{\sum }\limits_{{i = 1}}^{4}{\mathcal{D}}_{i} \odot {\mathcal{W}}_{in} \tag{1}
57
+ $$
58
+
59
+ where the $\odot$ symbol stands for the Hadamard product, i.e., the pixel-wise product. Figure 3 visually explains the reconstruction formulation in (1). Note that we have four source textures because the weight maps can be efficiently stored in the RGBA texture space. The weight $\mathcal{W}$ is smoothly defined, so it can be stored in a down-sampled image without significant loss of quality. In this paper, the down-sampling ratio is eight, so the size of the weight maps is 64 times smaller than the original. With input texture size $W \times H$ , the original textures require $N \times W \times H$ space, while the compressed model requires $\left( {5 + N/{64}}\right) \times W \times H$ , resulting in a significantly smaller memory footprint, especially when $N$ is large.
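+
+ To make the storage layout of (1) concrete, here is a minimal NumPy sketch of the decompression step; the shapes, the 8x down-sampling factor, and the nearest-neighbor upsampling are illustrative assumptions.
+
+ ```python
+ import numpy as np
+
+ H, W, N = 2048, 2048, 52
+ I0 = np.random.rand(H, W, 3)                   # neutral texture I_0
+ D = np.random.rand(4, H, W, 3)                 # difference textures D_i
+ W_maps = np.random.rand(4, N, H // 8, W // 8)  # weight maps, stored 8x downsampled
+
+ def reconstruct(n):
+     """Approximate the n-th original texture via eq. (1)."""
+     up = np.kron(W_maps[:, n], np.ones((8, 8)))  # upsample the four weight maps
+     return I0 + sum(D[i] * up[i][..., None] for i in range(4))
+
+ print(reconstruct(0).shape)  # (2048, 2048, 3)
+ ```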
60
+
61
+ This paper presents a method to compute the decomposition of the input textures in (1) for the application of facial blendshapes. Specifically, we generate textures for example-based facial rigging [13], where the input textures are given for a small number of example shapes. When the weights of the blendshape change and thus the expression changes, our method outputs the texture with the corresponding fine detail such as wrinkles.
62
+
63
+ < g r a p h i c s >
64
+
65
+ Figure 3: Our texture compression approach. For each blendshape mesh, we have four corresponding weight maps. These weight maps are applied to the four difference textures. The summation of all the weighted difference textures and the neutral texture reconstructs the texture of the blendshape mesh.
66
+
67
+ We present an overview of our method in Figure 2. We first decompose the textures into fragments. Then, all the fragments are clustered into four groups. Finally, we find an optimal arrangement to store the fragments in the source textures and generate weight maps for every blendshape mesh. Section 3.1 explains this precomputation and Section 3.2 explains the following run-time computation.
68
+
69
+ § 3.1 COMPRESSION OF TEXTURES
70
+
71
+ The deformation of the entire face has rich variety, but if we focus on a specific location, the deformation can be approximated with several modes. For example, a forehead exhibits horizontal furrows when the character raises the eyebrows and vertical furrows when the character frowns. To take advantage of such locality, we first divide the input texture ${\mathcal{I}}_{n}$ into small fragments. Each fragment can be computed simply by applying the mask ${\mathcal{M}}_{k}$ to the input texture
72
+
73
+ $$
74
+ {\widehat{\mathcal{D}}}_{nk} = \left( {{I}_{n} - {I}_{0}}\right) \odot {\mathcal{M}}_{k}, \tag{2}
75
+ $$
76
+
77
+ where $k \in \{ 1,\ldots ,\#$ mask $\}$ is the index of the mask. Note that we apply the mask to the difference from the neutral texture $\left( {{I}_{n} - {I}_{0}}\right)$ here.
78
+
79
+ The mask takes values in $\left\lbrack {0,1}\right\rbrack$ , where the value outside the fragment is zero. To make the seams less visible, the masks need to change their values smoothly in 3D space. Moreover, we define the masks such that they add up to one, i.e., $\mathop{\sum }\limits_{k}{\mathcal{M}}_{k} = 1$ .
80
+
81
+ Mask Computation We compute such smooth masks by solving the biharmonic equation (see Figure 4) inspired by the computation of the rigging weights in [10]. For the input mesh of the head, we first manually extract the set of triangles that corresponds to the face. Then we randomly sample the vertices of the extracted triangles. To sample uniformly over the 3D mesh, we use the Poisson disk sampling with the dart-throwing algorithm. Here, we reject samples within a $3\mathrm{\;{cm}}$ radius. The number of the sampled vertices is the number of the masks #mask. In total, we sampled 53 vertices.
82
+
83
+ Let $\phi \in \mathbb{R}$ be the values defined on the vertices of the extracted face triangles. We solve the biharmonic equation by minimizing $\parallel {\Delta \phi }{\parallel }^{2}$ with the fixed boundary condition $\phi = 1$ at one sampled vertex and $\phi = 0$ at the other sampled vertices. We use the combinatorial Laplacian on the 3D mesh to robustly compute the minimization using an iterative solver for a sparse linear system. Finally, the $k$ -th mask is computed by normalizing the solution in the UV space: ${\mathcal{M}}_{k} = {\phi }_{k}/\mathop{\sum }\limits_{{{k}^{\prime } = 1}}^{{\# \text{ mask }}}{\phi }_{{k}^{\prime }}$ .
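+
+ A sketch of this solve is shown below, assuming the mesh is given as a faces array and the sampled vertices as an index list; it uses the combinatorial (graph) Laplacian as in the text, but eliminates the fixed vertices and calls a direct sparse solver instead of an iterative one for brevity.
+
+ ```python
+ import numpy as np
+ import scipy.sparse as sp
+ import scipy.sparse.linalg as spla
+
+ def combinatorial_laplacian(faces, n_vertices):
+     """Graph Laplacian L = degree - adjacency, built from the mesh edges."""
+     e = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
+     A = sp.coo_matrix((np.ones(len(e)), (e[:, 0], e[:, 1])), shape=(n_vertices, n_vertices))
+     A = ((A + A.T) > 0).astype(float)
+     return sp.csr_matrix(sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A)
+
+ def biharmonic_masks(faces, n_vertices, samples):
+     """phi_k minimizes ||L phi||^2 with phi = 1 at samples[k] and 0 at the other samples."""
+     L = combinatorial_laplacian(faces, n_vertices)
+     Q = (L @ L).tocsc()                           # bilaplacian
+     free = np.setdiff1d(np.arange(n_vertices), samples)
+     phis = []
+     for s in samples:
+         bc = np.zeros(n_vertices)
+         bc[s] = 1.0
+         rhs = -Q[free][:, samples] @ bc[samples]  # move the fixed values to the right side
+         phi = bc.copy()
+         phi[free] = spla.spsolve(Q[free][:, free], rhs)
+         phis.append(phi)
+     phis = np.maximum(np.array(phis), 0.0)        # clamp negative overshoot
+     return phis / (phis.sum(axis=0, keepdims=True) + 1e-12)  # masks sum to one
+ ```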
84
+
85
+ < g r a p h i c s >
86
+
87
+ Figure 4: Our mask computation approach. Given an input mesh (a), we first manually select a region of interest (b). Then, vertices are uniformly sampled over the selected region (c). Finally, the biharmonic equation is solved on the mesh while fixing the value at one of the sampled vertices to one and the others to zero (d).
88
+
89
+ Clustering The fragments of the input textures have several dominant modes. We extract these modes using the K-means clustering method with four clusters. Since K-means clustering minimizes the variance inside each cluster, we can select the four most representative fragments among all fragments sharing the same mask. For each mask $k$ , we record the four cluster centers ${\widehat{\mathcal{D}}}_{ik}^{\prime }$ where $i \in \{ 1,\ldots ,4\}$ .
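+
+ A minimal sketch of this step with scikit-learn (the library used for the precomputation, see Section 4) follows; flattening each fragment to a feature vector and keeping the four cluster centers as representatives is our reading of the text.
+
+ ```python
+ import numpy as np
+ from sklearn.cluster import KMeans
+
+ def representative_fragments(fragments):
+     """fragments: (N, H, W, 3) masked difference textures for one mask k.
+     Returns the four cluster centers D'_ik and the cluster label of each
+     input fragment (the labels feed the weights alpha_ink in eq. (6))."""
+     n, h, w, c = fragments.shape
+     km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(fragments.reshape(n, -1))
+     return km.cluster_centers_.reshape(4, h, w, c), km.labels_
+ ```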
90
+
91
+ Difference Texture Generation Now that we have four fragments for every mask, we stitch these fragments back into four full textures. Naïvely adding up the fragments with the same texture index $i$ would blur the texture around the borders of the fragments. Hence, we optimize the combination of the fragments for each texture such that adjacent fragments match. Let $\sigma \left( {i,k}\right)$ be a permutation of index $i$ for each mask $k$ . We define the cost of a permutation $\sigma$ as
92
+
93
+ $$
94
+ C\left( \sigma \right) = \mathop{\sum }\limits_{{\left\{ {{k}_{1},{k}_{2}}\right\} = 1}}^{{\# \text{ mask }}}\mathop{\sum }\limits_{{i = 1}}^{4}{\begin{Vmatrix}{\widehat{\mathcal{D}}}_{i{k}_{1}{k}_{2}}^{\prime \prime }\left( \sigma \right) - {\widehat{\mathcal{D}}}_{i{k}_{2}{k}_{1}}^{\prime \prime }\left( \sigma \right) \end{Vmatrix}}^{2}, \tag{3}
95
+ $$
96
+
97
+ $$
98
+ \text{ where }{\widehat{\mathcal{D}}}_{i{k}_{1}{k}_{2}}^{\prime \prime }\left( \sigma \right) = {\widehat{\mathcal{D}}}_{\sigma \left( {i,{k}_{1}}\right) {k}_{1}}^{\prime } \odot {\mathcal{M}}_{{k}_{2}}\text{ . } \tag{4}
99
+ $$
100
+
101
+ Note that in (4), we apply the two masks ${\mathcal{M}}_{{k}_{1}} \odot {\mathcal{M}}_{{k}_{2}}$ to the difference texture (see the definition of the fragment in (2)). The Hadamard product of the two masks takes a non-zero positive value only around the intersection of the masks ${k}_{1}$ and ${k}_{2}$ .
102
+
103
+ The number of possible permutation indexes $\sigma$ is finite, but it is too large to search exhaustively (i.e., ${\left( 4!\right) }^{\# \text{ mask }}$ possibilities). Thus, we present a method to iteratively reduce the cost and find an approximate minimizer. In each iteration, we visit the masks one by one. For each mask, we evaluate all $4! = {24}$ possible index permutations for that mask and update the index to the minimizer while the indexes of the other masks are fixed. By iterating this procedure 10 times, we reach convergence (see Figure 7-left). We repeat this procedure 100 times from random initializations and record the permutation with the smallest cost.
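+
+ This iterative search can be sketched as coordinate descent over the masks, as below. We assume a precomputed, symmetric pairwise seam-cost table built from (3)-(4); the table layout and helper names are ours.
+
+ ```python
+ import itertools
+ import random
+
+ PERMS = list(itertools.permutations(range(4)))  # the 4! = 24 candidates per mask
+
+ def optimize_permutation(cost, n_masks, n_sweeps=10, seed=0):
+     """cost[k1][k2][a][b]: precomputed seam error between cluster a of mask k1
+     and cluster b of mask k2 (eqs. (3)-(4)), assumed symmetric.
+     sigma[k][i] says which cluster of mask k goes into difference texture i."""
+     rng = random.Random(seed)
+     sigma = [list(rng.choice(PERMS)) for _ in range(n_masks)]
+     for _ in range(n_sweeps):     # ~10 sweeps reach convergence (Figure 7, left)
+         for k in range(n_masks):  # visit the masks one by one
+             sigma[k] = list(min(PERMS, key=lambda p: sum(
+                 cost[k][k2][p[i]][sigma[k2][i]]
+                 for k2 in range(n_masks) if k2 != k for i in range(4))))
+     total = sum(cost[k1][k2][sigma[k1][i]][sigma[k2][i]]
+                 for k1 in range(n_masks) for k2 in range(k1 + 1, n_masks)
+                 for i in range(4))
+     return sigma, total
+ ```
+
+ Keeping the best result over 100 random restarts mirrors the procedure described above.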
104
+
105
+ Finally, we compute the four difference textures and the weights as
108
+
109
+ $$
110
+ {\mathcal{D}}_{i} = \mathop{\sum }\limits_{{k = 1}}^{{\# \text{ mask }}}{\widehat{\mathcal{D}}}_{\sigma \left( {i,k}\right) k}^{\prime } \tag{5}
111
+ $$
112
+
113
+ $$
114
+ {\mathcal{W}}_{in} = \mathop{\sum }\limits_{{k = 1}}^{{\# \text{ mask }}}{\alpha }_{ink}{\mathcal{M}}_{k} \tag{6}
115
+ $$
116
+
117
+ where ${\alpha }_{ink}$ takes the value 1 if the fragment ${\widehat{\mathcal{D}}}_{nk}$ belongs to the ${\sigma }^{-1}\left( {i,k}\right)$ -th cluster, and 0 otherwise.
118
+
119
+ § 3.2 TEXTURE GENERATION FOR BLENDSHAPES
120
+
121
+ In this section, we describe texture generation for blendshapes using our texture compression technique. The blendshape is often used to make a controllable face. Suppose we have the vertices of the neutral face mesh ${V}_{0}$ and the vertices of the example face meshes ${V}_{n}$ ; the blendshape computes the vertices from the parameters ${\beta }_{n} \in \left\lbrack {0,1}\right\rbrack$ where $n \in \{ 1,\ldots ,N\}$ as
122
+
123
+ $$
124
+ V\left( \mathbf{\beta }\right) = {V}_{0} + \mathop{\sum }\limits_{{n = 1}}^{N}{\beta }_{n}\left( {{V}_{n} - {V}_{0}}\right) \tag{7}
125
+ $$
126
+
127
+ Note that the blendshape formulation in (7) is specifically called delta blendshape [12] (see Figure 5-left).
128
+
129
+ < g r a p h i c s >
130
+
131
+ Figure 5: We blend difference textures with the same weight as the blendshape weight.
132
+
133
+ With the texture ${\mathcal{I}}_{0}$ corresponding to the neutral mesh and the textures ${\mathcal{I}}_{n}$ of the other example meshes, our compressed texture model computes the texture for the list of blendshape parameters $\mathbf{\beta }$ as
134
+
135
+ $$
136
+ \mathcal{I}\left( \mathbf{\beta }\right) = {\mathcal{I}}_{0} + \mathop{\sum }\limits_{{i = 1}}^{4}{\mathcal{D}}_{i} \odot \left( {\mathop{\sum }\limits_{{n = 1}}^{N}{\beta }_{n}{\mathcal{W}}_{in}}\right) . \tag{8}
137
+ $$
138
+
139
+ Note that in (8), we blend the texture with the ratio $\beta$ in a similar manner as the blending of the vertex positions in (7) (see Figure 5). By blending the weight maps $\mathcal{W}$ , which are stored at a lower resolution, we reduce the amount of computation.
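+
+ A NumPy mock-up of the per-frame step in (8) clarifies why it is cheap: only the low-resolution weight maps are blended per frame, followed by a single upsampling and four Hadamard multiply-adds. The shapes and the upsampling method are assumptions; on the GPU, the four maps are packed into one RGBA texture and this runs in a shader (see Section 4).
+
+ ```python
+ import numpy as np
+
+ def blend_texture(beta, I0, D, W_maps, scale=8):
+     """Eq. (8): beta (N,), I0 (H, W, 3), D (4, H, W, 3), W_maps (4, N, H/8, W/8)."""
+     w = np.einsum('n,inhw->ihw', beta, W_maps)  # blend the small weight maps first
+     w = np.kron(w, np.ones((1, scale, scale)))  # one upsampling to full resolution
+     return I0 + sum(D[i] * w[i][..., None] for i in range(4))
+ ```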
140
+
141
+ Texture for Example-based Facial Rigging We extend our texture compression for blendshapes in (8) to the blendshapes generated by example-based facial rigging [13]. Example-based facial rigging generates blendshape meshes with new vertex positions ${V}_{m}^{ * }$ where $m \in \{ 1,\ldots ,M\}$ from the training meshes ${V}_{n}$ where $n \in \{ 1,\ldots ,N\}$ . We aim to synthesize the texture for the generated blendshape ${V}_{m}^{ * }$ when the texture ${\mathcal{I}}_{n}$ is given for the training meshes ${V}_{n}$ . We first compute the weight maps ${\mathcal{W}}_{in}$ and the difference textures ${\mathcal{D}}_{i}$ for the example meshes.
142
+
143
+ We refer to the original paper [13] for the details of example-based facial rigging. The basic idea of example-based facial rigging is to optimize the generated blendshape meshes ${V}_{m}^{ * }$ such that the training meshes ${V}_{n}$ can be reconstructed as
144
+
145
+ $$
146
+ {V}_{n} \simeq {V}_{0} + \mathop{\sum }\limits_{{m = 1}}^{M}{\bar{\beta }}_{nm}\left( {{V}_{m}^{ * } - {V}_{0}}\right) , \tag{9}
147
+ $$
148
+
149
+ where ${\bar{\beta }}_{nm} \in \mathbb{R}$ are the blending parameters optimized in the example-based facial rigging algorithm.
150
+
151
+ We synthesize the textures for the generated blendshapes such that they reproduce the original compressed textures of the training meshes. This can be done by choosing the weight maps ${\mathcal{W}}_{im}^{ * }$ for the generated blendshapes to satisfy
152
+
153
+ $$
154
+ {\mathcal{W}}_{in} = \mathop{\sum }\limits_{{m = 1}}^{M}{\bar{\beta }}_{nm}{\mathcal{W}}_{im}^{ * }. \tag{10}
155
+ $$
156
+
157
+ Unfortunately, equation (10) alone cannot specify the weight maps ${\mathcal{W}}_{im}^{ * }$ , since the equations are underdetermined (i.e., $N < M$ ). Our observation is that the weight maps should be sparse to avoid blending many weights. Hence, we minimize the regularizer $\mathop{\sum }\limits_{{m = 1}}^{M}{\begin{Vmatrix}{\mathcal{W}}_{im}^{ * }\end{Vmatrix}}^{2}$ to make the weight maps as small as possible while satisfying the constraint (10). This minimum-norm solution yields the weight map
158
+
159
+ $$
160
+ {\mathcal{W}}_{im}^{ * } = \mathop{\sum }\limits_{{n = 1}}^{N}{B}_{mn}^{ + }{\mathcal{W}}_{in} \tag{11}
161
+ $$
162
+
163
+ where ${B}^{ + } = {B}^{T}{\left( B{B}^{T}\right) }^{-1}$ is the pseudo-inverse of the matrix $B = {\left\lbrack {\bar{\beta }}_{nm}\right\rbrack }_{N \times M}$ . Finally, the texture for the example-based facial rigging is synthesized as
164
+
165
+ $$
166
+ \mathcal{I}\left( {\mathbf{\beta }}^{ * }\right) = {\mathcal{I}}_{0} + \mathop{\sum }\limits_{{i = 1}}^{4}{\mathcal{D}}_{i} \odot \left( {\mathop{\sum }\limits_{{m = 1}}^{M}{\beta }_{m}^{ * }{\mathcal{W}}_{im}^{ * }}\right) , \tag{12}
167
+ $$
168
+
169
+ where ${\mathbf{\beta }}^{ * }$ is the set of coefficients for the example-based blendshapes.
170
+
171
+ § 4 RESULTS
172
+
173
+ Evaluation Data We evaluate our algorithm using high-resolution multi-view photos of 20 expressions purchased from the website [1]. For each expression, we reconstruct the 3D mesh using the multi-view stereo software Metashape [2]. Since these models do not share a consistent mesh topology, we fit a base mesh to them using the commercial software Wrap [23]. The resulting mesh has roughly ${52}\mathrm{k}$ triangles.
174
+
175
+ In the supplementary video, we demonstrate our real-time, dynamically changing texture for the blendshape implemented in Unity. All the weight maps are combined into one large RGBA texture, and a simple shader program blends these textures fully in parallel on the GPU. Hence, the cost of texture synthesis in (8) is negligible. For the example-based facial rigging [13], we use the implementation in FaceScape [27] to generate a blendshape of 52 meshes compatible with Apple's ARKit.
176
+
177
+ Performance Table 1 lists the computation time for offline precomputation and the run-time memory consumption. These performance numbers are measured on a machine with an Intel 12600K CPU running Windows 11. The precomputation is implemented in Python; specifically, we use the scikit-learn library for K-means clustering.
178
+
179
+ For the example-based blendshape of 52 meshes, our method uses a similar amount of memory to the PCA baseline. Note that our method requires slightly more memory because we need to store the weight maps. Compared to the naïve approach of storing all the textures for the blendshape meshes, our algorithm works with about 8 times less memory, which agrees with the estimate in Section 3.
180
+
183
+ Resolution | Clustering (s) | Permutation (s) | Ours (MB) | PCA (MB) | Raw data (MB)
+ 512 × 512 | 113 | 989 | 4.77 | 3.93 | 40.89
+ 1024 × 1024 | 347 | 3447 | 19.07 | 15.72 | 163.58
+ 2048 × 2048 | 1714 | 14502 | 76.28 | 62.91 | 654.31
200
+
201
+ Table 1: Timing of offline computation and comparison of memory consumption for different methods, measured at different texture resolutions for the example-based blendshape (blending 53 meshes). "Clustering" is the time for K-means clustering of all the fragments; "Permutation" is the time to find a permutation with a small cost value.
202
+
203
+ Texture Quality To evaluate our texture compression performance, we compare Root Mean Square Error (RMSE) reconstruction errors against compression using PCA, which is often used for texture compression of blendshapes [3]. We illustrate the comparison in Figure 6. The error of PCA is very large in areas such as the eyebrows and lips, where there are many fine details. As an ablation study, we also compare against our compression without permutation optimization. We observe that our approach without permutation is generally better than the naïve PCA approach, and our full approach with weight maps and permutation optimization consistently produces the smallest errors of the three.
204
+
205
+ < g r a p h i c s >
206
+
207
+ Figure 6: Reconstruction error of 19 blendshape textures computed by three different methods. Our method achieves the lowest error of the three in all the texture reconstructions.
208
+
209
+ Choice of the Number of Masks Figure 7 shows the statistics of errors with respect to different numbers of masks. We record the distribution of the RMSE reconstruction error for 1000 different random initializations. We observe that the global minimum decreases as #mask goes up. However, if #mask is too large, reaching the global minimum becomes harder. Generally, the global minimum can be reached within 20 tries when #mask is 5, but 1000 tries are not enough when #mask is 63. We choose #mask = 53 as it balances reconstruction accuracy against the difficulty of finding the overall minimum. While our permutation optimization reduces the error considerably, an efficient algorithm to find a near-optimal permutation remains future work.
210
+
211
+ < g r a p h i c s >
212
+
213
+ Figure 7: Left: Convergence of the cost function for permutation optimization for 100 different initializations. Right: Box-and-whisker plot of the sum of the reconstruction errors when we change the number of masks and run the permutation optimization with random initialization 1000 times.
214
+
215
+ Textures for Example-based Blendshapes In Figure 8, we compare the quality of the textures generated for the example-based blendshape (blending 52 meshes). As a baseline, we compute the texture for one of the example-based meshes by the weighted sum of the difference textures ${\bar{I}}_{m} = {I}_{0} + \mathop{\sum }\limits_{{n = 1}}^{N}{B}_{mn}^{ + }\left( {{I}_{n} - {I}_{0}}\right)$ . Here we choose the 6-th blendshape (i.e., $m = 6$ ) of the ARKit. Since each pixel is synthesized independently, the texture of the baseline model is full of noise. Moreover, unnatural wrinkles appear because the baseline model does not use smooth weight maps. Our method achieves better results with significantly smaller memory consumption, as shown in Table 1.
216
+
217
+ § 5 CONCLUSION AND FUTURE WORK
218
+
219
+ Conclusion We present a method that automatically compresses blendshape textures into one neutral texture and four difference textures. The difference textures are then combined using spatially non-uniform weight maps, which are smooth and stored at a low resolution. In this way, our method significantly reduces memory consumption.
220
+
221
+ By combining the high-resolution difference textures and the low-resolution weight maps, our method provides better-localized details than global compression methods such as PCA. The smoothness of the weight maps also prevents artifacts from extreme values and noise when blending the weights in the blendshape model. Our method does not need any prior knowledge about the texture image, which is often required by data-driven methods, and is fully compatible with blendshapes for real-time applications.
222
+
223
+ < g r a p h i c s >
224
+
225
+ Figure 8: Reconstruction of the texture on one of the example-based blendshape meshes. The naïve approach blends the example textures, while our approach blends the weight maps. The naïve approach results in high-frequency noise and false wrinkles in white.
226
+
227
+ Future Work Currently, our work only compresses the albedo textures, ignoring other textures such as normal maps, specular maps, and roughness maps. Although extending our compression method to other textures is straightforward, there is potential to further compress the set of textures by leveraging the correlation between them.
228
+
229
+ We are also interested in applying our method to data other than facial textures. In particular, we are interested in efficiently representing garment wrinkles in texture space.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/CrkHdts-KT/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,327 @@
1
+ # OpenTeleView: An Open 3D Teleconferencing Research Platform
2
+
3
+ Category: Research
4
+
5
+ ## Abstract
6
+
7
+ Recent demonstrations of 3D telepresence provide a glimpse into a future where 2D video communication is replaced with photo-realistic virtual avatars rendered on 3D displays. However, existing technology demonstrations typically run on expensive dedicated devices that require the calibration of multiple cameras by experts, and the underlying reconstruction, compression, transmission, and rendering methods remain proprietary. We describe our open platform for real-time end-to-end 3D teleconferencing using commodity hardware coupled with a modular software structure for inserting advanced computer vision algorithms, supporting research and development. We demonstrate the utility of our modular end-to-end approach by integrating state-of-the-art modules and improving them based on an analysis of current bottlenecks, targeting low-latency processing. We include a baseline implementation supporting real-time 3D teleconferencing that provides a new benchmark for the evaluation of current and future algorithms. We demonstrate the practicality of our approach with a baseline 3D teleconferencing system running at 25 frames per second with 172 ms latency on consumer GPUs that applies to a single RGB camera input and various 3D display technologies. Our 3D teleconferencing platform is open source, which paves the way for computer vision, computer graphics, and HCI research to continue innovating together to make 3D teleconferencing the telecommunication standard.
8
+
9
+ ## 1 INTRODUCTION
10
+
11
+ With the dramatically accelerated shift to online meetings caused by the COVID-19 pandemic, there has been a resurgence in the need for new teleconferencing technology that creates a more real, in-person experience. One major challenge is to make teleconferencing convey a feeling of presence, including eye contact and situational awareness of each person's real-world space, such that pointing and gestures are coordinated. Hence, more research effort is going into teleconferencing that allows the user to appear in 3D and maintain direct eye contact with multiple speakers to enhance the overall communication experience and improve information transmission efficiency [25]. Virtual Reality (VR) and Augmented Reality (AR) are the two main trends for creating 3D experiences in recent years. These trends use three different types of hardware: headsets (HMDs) that connect to a PC, 2D semi-transparent displays like Google Glass, and standalone 3D display devices. These displays support view-dependent rendering such as that used in Fish Tank Virtual Reality (FTVR), which creates an effective method to support presence with stereo and motion parallax depth cues. However, these systems require rendering a person's likeness from different viewpoints, which is not available without some mechanism to capture and transmit the users' 3D characteristics. A number of proprietary systems have been proposed to achieve this goal, e.g., the Google Starline project [26], Microsoft Holoportation [35], and [36], but each is either a closed system or relies on large-scale proprietary or prohibitively expensive hardware. Likewise, they are unavailable for researchers to perform perceptual evaluations to determine how well they achieve a sense of presence. Furthermore, the complex infrastructure needed to test proposed new research algorithms supporting different aspects of the 3D teleconferencing pipeline is not readily accessible; thus, research results are typically reported in isolation without the opportunity to stress test them within the ecosystem of an end-to-end system. Our contribution fills this missing piece.
12
+
13
+ We describe the OpenTeleView (actual name hidden for review) platform, an end-to-end platform that supports researchers in contributing to different parts of the pipeline in a 3D teleconferencing system. Within the platform, each component's performance can be tested within a perceptually suitable 3D teleconferencing system for benchmarking and optimization. We provide an end-to-end system that uses off-the-shelf (OTS) components along with our own adaptations of existing algorithmic approaches in the literature to demonstrate: a) an accessible, low-cost, replicable end-to-end 3D teleconferencing system with the latest advances in research included as a benchmark; b) interface descriptions that provide connections for research as well as the needed scaffolding to enable end-to-end functional and perceptual performance testing; c) a modular interface for researchers to connect to common development platforms like PyTorch and Unity; and d) a high-resolution offline recording at 60 fps with novel-view ground truth to establish a public benchmark for 3D teleconferencing quality. Figure 1 shows an example of a user talking while her 3D image is shown on the receiver's view-dependent display.
14
+
15
+ ![01963e0a-72f6-7d90-be25-5f08193d72d3_0_933_354_706_416_0.jpg](images/01963e0a-72f6-7d90-be25-5f08193d72d3_0_933_354_706_416_0.jpg)
16
+
17
+ Figure 1: OpenTeleView modular end-to-end 3D teleconferencing in action. The image captured by the Sender-side camera (left) is encoded into a neural 3D model. Its parameters are sent to the Receiver side, where a photo-realistic view-dependent rendering is shown on the Receiver's 3D display (right). Being modular, research results on different encoders can be substituted for analysis and comparison in real-world 3D teleconferencing experiences.
18
+
19
+ We provide results from experiments with the baseline implementation and variations to demonstrate how the platform can be used to help identify and optimize different types of algorithmic bottlenecks. Our implementation has an end-to-end latency of 172 ms with a sustained frame rate of 25 frames per second (FPS) on average, providing an excellent reference point for innovative algorithms to be tested against. Besides serving as an algorithmic research platform, the technical performance is suitable for qualitative perceptual testing, allowing different modules to be compared with each other in real-world user testing.
20
+
21
+ ## 2 RELATED WORK
22
+
23
+ Research in teleconferencing has moved from 2D video to 3D. While significant research has gone into developing algorithms to make these systems feasible, we focus on the systems as a whole.
24
+
25
+ ### 2.1 Talking Head Models
26
+
27
+ Parametric head models [2, 28] are widely used in face generation [16, 42] and reenactment [46-48]. These parametric models consume a low-dimensional vector that drives the avatar to control the subject. Following this line of work, we leverage the parametric model FLAME [28] in our baseline implementation and surround it with communication and rendering modules.
28
+
29
+ ### 2.2 Neural Rendering
30
+
31
+ Different from traditional rendering methods [23], neural rendering does not necessarily need an explicit mesh and texture. It can be achieved with implicit neural representations [32] or Generative Adversarial Networks [24]. However, these works usually focus on image quality for novel view synthesis [33, 34] and object editing [7, 14, 43], both of which rely on very deep neural networks that only run at low frame rates. We utilize a parametric mesh model with the deferred neural rendering method [44, 45], aiming at high-resolution, high-fidelity face synthesis at high frame rates, and extend it to work alongside the other modules to form a complete teleconferencing system.
32
+
33
+ ### 2.3 3D Teleconferencing
34
+
35
+ Gibbs et al. design a room-scale system which uses a single camera, a view tracking system, and an IR emitter to render perspectively correct mono or stereo images on a wall-sized display [17]. Following that, [22] leverage a fast-rotating convex mirror as a 3D display along with a high-speed projector to display a 3D image of a user. [29, 54] design a fully GPU-accelerated data processing and rendering pipeline and use a set of Microsoft Kinect color-plus-depth cameras to allow head-tracked stereo views to be rendered for a parallax barrier autostereoscopic display. [9, 36] design a room-scale telepresence setup which uses an array of color and depth cameras and displays in two locations to synthesize images of users in both rooms with correct eye gaze. [25] use a single Microsoft Kinect depth camera and an RGB camera to render users from novel views without the need for a large camera array. This rendering is then shown on a 3D display over a 3D background. [53] use an array of IR cameras and lasers, RGB cameras, and Microsoft Kinect depth cameras to develop a system for three-person teleconferencing with proper eye gaze. Another line of work uses avatars or figures [6] as surrogates, which circumvents the challenge of rendering a virtual avatar. More recently, [27] developed an end-to-end system which utilizes an array of cameras (IR, RGB, and tracking) and an autostereoscopic display, among other contributions, to enable face-to-face teleconferencing better than 2D alternatives. [30] uses a depth camera, and its 'inpainting' only supports moderate view changes. [52] and [31] are 2D and not capable of novel view synthesis. [50] could replace our FLAME-based encoder-decoder, but it is not open source and its runtime is not stated. However, all of the recent live systems are proprietary, and there is no publicly available offline benchmark.
36
+
37
+ ## 3 END-TO-END PIPELINE
38
+
39
+ The challenge of 3D teleconferencing is finding compatible modules and connecting them to efficiently infer, transmit, and render a realistic 3D head model so that convincing 3D motion parallax and stereo depth cues are maintained, as if the Sender appears at the Receiver's location [57]. Figure 2 illustrates the main components of our OpenTeleView platform: the Sender/Receiver hardware configuration, the Encoder and Decoder, Persistent Data Storage (PDS), and the communication module. The diagram shows the data flow from a Sender to a Receiver, which would be duplicated for a bi-directional system, though the two sides may have different camera and display configurations. The heart of the research for end-to-end 3D teleconferencing is the matched Encoder and Decoder pair for the encoding/compression of the input video signal and the subsequent decoding and view-dependent rendering.
40
+
41
+ ![01963e0a-72f6-7d90-be25-5f08193d72d3_1_953_159_672_420_0.jpg](images/01963e0a-72f6-7d90-be25-5f08193d72d3_1_953_159_672_420_0.jpg)
42
+
43
+ Figure 2: The main components needed by our OpenTeleView platform to define a 3D teleconferencing system are: 1. Sender/Receiver hardware, 2. Encoder, 3. Decoder, and 4. Persistent Data Storage needs. Our platform provides network scaffolding and communication interfaces, including optional access to the Receiver's tracked position by the Encoder and Decoder, to support a range of end-to-end 3D teleconferencing research for performance testing, analysis and comparison.
44
+
45
+ We structured the OpenTeleView system to capture the main components that are necessary for an end-to-end 3D teleconferencing system and designed it to be modular, with the expectation that researchers will be able to add their own hardware assumptions with associated encoding and decoding approaches to strike different tradeoffs between quality and resources, e.g., for real-world perception testing as well as measurements of efficiency and quality of service. We provide sufficient scaffolding to accommodate a range of hardware assumptions, such as different display types, camera inputs, and tracking technologies for rendering, while providing software interfaces that support encoders and decoders doing frame-by-frame processing but also have access to persistent memory, accessed at start-up when a connection is made between Sender and Receiver to exchange pre-trained models.
46
+
47
+ The communication infrastructure provides interfaces for inter-process communication, allowing modules to run on different computers as well as to be written in different languages appropriate for the research.
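+
+ As an illustration of what such a language-agnostic module boundary can look like, the sketch below streams length-prefixed JSON frames over a TCP socket. This is a hypothetical minimal example, not the platform's actual wire protocol.
+
+ ```python
+ import json, socket, struct
+
+ def send_frame(sock: socket.socket, params: dict) -> None:
+     """Length-prefixed JSON, parseable from any language."""
+     payload = json.dumps(params).encode("utf-8")
+     sock.sendall(struct.pack("!I", len(payload)) + payload)
+
+ def recv_frame(sock: socket.socket) -> dict:
+     header = b""
+     while len(header) < 4:
+         header += sock.recv(4 - len(header))
+     n = struct.unpack("!I", header)[0]
+     payload = b""
+     while len(payload) < n:
+         payload += sock.recv(n - len(payload))
+     return json.loads(payload.decode("utf-8"))
+ ```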
48
+
49
+ We provide a baseline implementation with the OpenTeleView platform using a pre-trained head model and a neural renderer trained on Sender video data collected offline. Figure 3 shows the different components, each explained in detail in the subsequent sections. The Encoder generates a small set of 3D head parameters of the Sender that is sent to the Decoder. The head parameters capture enough 3D content so that the Decoder can recreate the head of the user, along with a neural renderer trained on the Sender's data that provides a photo-realistic, view-dependent render to appear on the Receiver's display. The neural renderer can continue rendering different viewpoints as the Receiver moves around their display as needed. To represent the Sender's 3D head parameters, we use the FLAME [28] model because it is low-dimensional and more expressive than other representations, e.g., the FaceWarehouse model [4] and the Basel Face Model [37]. It is easy to fit to data and commonly used by many algorithms (e.g., RingNet [41], DECA [13], CoMA [38]).
50
+
51
+ FLAME's head representation includes geometry parameters for shape ($\beta$), expression ($\psi$), and pose ($\theta$). Since FLAME does not have an appearance model, like previous methods [13], we adapt the Basel Face Model [37] to be compatible with FLAME to give albedo parameters $\alpha \in R^{50}$. Together, the Encoder (see Figure 3.3) computes these head parameters for every frame of the Sender and transmits them along with the camera matrix $c$ and lighting parameters $l$ to the Decoder. The Decoder (see Figure 3.4) then uses them to reconstruct the 3D head model of the Sender. The neural renderer then maps the 3D head model to a photo-realistic version of the Sender, rendered from the viewpoint of the Receiver.
52
+
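+ To make the transmitted state concrete, the following is a minimal sketch of the per-frame parameter bundle; the class and field names are hypothetical, and the split of the geometry coefficients (e.g., 100 shape + 50 expression + 6 pose) follows common FLAME usage rather than anything specified here.
+
+ ```python
+ from dataclasses import dataclass
+ import numpy as np
+
+ @dataclass
+ class HeadParams:
+     """Per-frame 3D head state sent from Encoder to Decoder (hypothetical)."""
+     shape: np.ndarray       # beta, FLAME identity coefficients
+     expression: np.ndarray  # psi, FLAME expression coefficients
+     pose: np.ndarray        # theta, head/jaw pose
+     albedo: np.ndarray      # alpha in R^50, adapted Basel albedo coefficients
+     camera: np.ndarray      # c = (s, t), orthographic scale and 2D translation
+     lighting: np.ndarray    # l, lighting coefficients
+ ```
+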
53
+ ![01963e0a-72f6-7d90-be25-5f08193d72d3_2_191_147_1413_811_0.jpg](images/01963e0a-72f6-7d90-be25-5f08193d72d3_2_191_147_1413_811_0.jpg)
54
+
55
+ Figure 3: Baseline 3D Teleconferencing Architecture: Encoder and Decoder use a compact (2.5 kB/frame) 3D head model that represents the Sender's head using shape ($\beta$), expression ($\psi$), pose ($\theta$), albedo ($\alpha$), camera matrix ($c$), and lighting parameters ($l$). These are computed every frame by the Encoder from a single RGB image and transmitted to the Decoder. Using the Receiver's viewpoint, the neural renderer renders a view-dependent photo-realistic image of the Sender on the Receiver's 3D display.
56
+
57
+ 3D head models and their rendering are an active research area for 3D teleconferencing; thus, our OpenTeleView platform makes it easy to analyse different approaches relative to each other in a real-world end-to-end 3D teleconferencing scenario.
58
+
59
+ ### 3.1 Sender Hardware
60
+
61
+ Our example implementation uses a single RGB camera (Logitech C920 Webcam HD Pro, 30 FPS, 1080p) and one computer with a GPU (NVIDIA GeForce RTX 3080) on the Sender side. The Sender side camera gives an RGB image per frame to the Encoder to perform face detection and head parameter extraction with neural networks executed on the GPU.
62
+
63
+ ### 3.2 Head Model and Persistent Data Storage
64
+
65
+ Our OpenTeleView platform provides a Persistent Data Storage (PDS) model for data created by processes that are not run synchronously with the frame-by-frame streaming, such as a personalized head model. The PDS can, however, be accessed synchronously if desired, with the corresponding potential impact on performance.
66
+
67
+ Figure 2 illustrates one of the main use cases we envision: the Encoder is generic, trained once on a large dataset, with its parameters stored in the PDS and loaded at installation time; the Decoder is personalized (to the Sender), trained on the Sender side or in an external cloud, stored in the PDS, and its network weights (354.4 MB total size) are transmitted when a connection is made.
68
+
69
+ #### 3.2.1 Head Model Predictor Training
70
+
71
+ In our illustration, the Encoder is generic as it is trained on a public dataset covering a range of people rather than on a specific user. To show the modularity of our platform, we use either the self-supervised AutoLink [20] method or DECA [13], a pre-trained model, as the generic 3D head model predictor. DECA is trained on over 21k subjects and 2 million images from three publicly available datasets: VGGFACE2 [5], BUPT-Balancedface [49], and VoxCeleb2 [8]. The DECA model is learned in an analysis-by-synthesis way: input a 2D image $I$, encode the image to a latent code, decode this to synthesize a 2D image $I_r$, and minimize the difference between the synthesized image and the input.
72
+
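+ A minimal PyTorch sketch of one analysis-by-synthesis step as described above; `encoder` and `synthesize` are placeholder callables, and the L1 photometric loss stands in for DECA's full set of losses.
+
+ ```python
+ import torch
+
+ def analysis_by_synthesis_step(encoder, synthesize, image, optimizer):
+     """Image -> latent code -> re-rendered image I_r, minimizing the
+     difference between the synthesized image and the input."""
+     code = encoder(image)            # latent head parameters
+     rendered = synthesize(code)      # differentiable re-rendering, I_r
+     loss = torch.nn.functional.l1_loss(rendered, image)
+     optimizer.zero_grad()
+     loss.backward()
+     optimizer.step()
+     return loss.item()
+ ```
+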
73
+ #### 3.2.2 Neural Renderer Training
74
+
75
+ The Decoder is personalized, and we experiment with the architectures in [20] and [51]. The former uses a UNet neural network and the latter uses a more complex deferred renderer [45] with a caching mechanism to improve speed and runtime [51] (see Section 3.4). Both are trained using a short RGB video (approximately 5 min) of the Sender. Videos are shot with a single fixed camera with the subject talking casually while performing small head motions, at a resolution of 1920 × 1080 and 60 FPS. The previously introduced Encoder models are used to obtain the driving motion from the Sender's talking head video; specifically, AutoLink [20] extracts 2D keypoints and DECA [13] extracts the 3D head parameters. These head parameters are passed as inputs to the respective neural renderer to reconstruct the encoded RGB image. Once training on this autoencoder objective is complete, the parameters of the neural renderer and the FLAME head shape parameters are stored in the PDS.
76
+
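+ A sketch of the personalized training loop, assuming a frozen pre-trained encoder and a trainable renderer (all names hypothetical); the final save mirrors the step of storing the renderer weights and FLAME shape parameters in the PDS.
+
+ ```python
+ import torch
+
+ def train_renderer(frames, frozen_encoder, renderer, opt, epochs=10):
+     """Fit the Sender-specific neural renderer on a short talking-head video;
+     only the renderer is optimized to reconstruct each frame."""
+     for _ in range(epochs):
+         for frame in frames:                    # frame: (3, H, W) in [0, 1]
+             with torch.no_grad():
+                 params = frozen_encoder(frame.unsqueeze(0))
+             recon = renderer(params)            # reconstructed RGB frame
+             loss = torch.nn.functional.l1_loss(recon, frame.unsqueeze(0))
+             opt.zero_grad()
+             loss.backward()
+             opt.step()
+
+ # Afterwards, persist to the PDS (path and keys are placeholders):
+ # torch.save({"renderer": renderer.state_dict(), "flame_shape": beta}, "pds.pt")
+ ```
+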
77
+ ### 3.3 Encoder-Sender
78
+
79
+ The Encoder performs a two-step process for each Sender frame to compute the head parameters: 1. finding the face of the Sender in the image, and 2. using a pre-trained head model to compute the head parameters from the cropped image of the Sender's face.
80
+
81
+ #### 3.3.1 Step 1: 2D Face Tracking
82
+
83
+ We extend a common approach that finds a face bounding box around the Sender's face in the input image from a set of 68 2D face keypoints [18].
84
+
85
+ Previous methods [10, 11, 13, 15, 39, 47] run face detection, such as FAN [3], on every single frame, which is time-consuming and computationally heavy, leading to increased latency as 2D detection has to run before 3D reconstruction.
86
+
87
+ Instead, to achieve high-FPS and low-latency head reconstruction on videos, we utilize the high temporal coherence of video data and propose to reuse the 2D face keypoints extracted from our reconstructed 3D head model of the previous frame to draw the face bounding box of the current frame. As this can lead to misalignment for fast motions, we further approximate the movement of the keypoints using a velocity estimate from the past two frames to extrapolate the position of the current bounding box. A full face detection is performed when the bounding box displacement exceeds a threshold. This approach is robust to mispredictions and significantly reduces the time needed to detect and crop the face.
90
+
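+ A minimal sketch of this constant-velocity extrapolation; the threshold value and function names are assumptions rather than the platform's actual implementation.
+
+ ```python
+ import numpy as np
+
+ REDETECT_THRESHOLD = 40.0  # pixels; assumed value, tuned per resolution
+
+ def predict_bbox(kps_prev2, kps_prev1, full_detect):
+     """Extrapolate the current face bounding box from the 2D keypoints
+     projected from the previous two frames' reconstructed head models.
+     kps_*: (68, 2) keypoint arrays; full_detect: fallback face detector."""
+     velocity = kps_prev1 - kps_prev2       # per-keypoint motion estimate
+     kps_pred = kps_prev1 + velocity        # constant-velocity extrapolation
+     if np.linalg.norm(velocity.mean(axis=0)) > REDETECT_THRESHOLD:
+         return full_detect()               # fast motion: rerun full detection
+     x0, y0 = kps_pred.min(axis=0)
+     x1, y1 = kps_pred.max(axis=0)
+     return (x0, y0, x1, y1)
+ ```
+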
91
+ To obtain these keypoints, the 3D head model's mesh vertices are projected into the image as $v = s\Pi(M_i) + t$, where $M_i \in R^3$ is a vertex in $M$, $\Pi \in R^{2 \times 3}$ is the orthographic 3D-2D projection matrix, and $s \in R$ and $t \in R^2$ denote isotropic scale and 2D translation, respectively. The parameters $s$ and $t$ are summarized as an orthographic camera model $c$.
92
+
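+ This orthographic projection amounts to dropping the depth coordinate, then scaling and translating; a minimal numpy sketch:
+
+ ```python
+ import numpy as np
+
+ def project_vertices(M, s, t):
+     """v = s * Pi(M_i) + t for all vertices. M: (V, 3) mesh vertices;
+     s: isotropic scale; t: (2,) image-plane translation."""
+     Pi = np.array([[1.0, 0.0, 0.0],
+                    [0.0, 1.0, 0.0]])   # orthographic 3D-to-2D projection
+     return s * (M @ Pi.T) + t          # (V, 2) image coordinates
+ ```
+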
93
+ #### 3.3.2 Step 2: Extracting Head Parameters
94
+
95
+ With the cropped Sender face as input, a Head Parameter Extractor estimates fine-grained keypoint locations using a ResNet50 [19] followed by a fully connected layer to produce a latent code $e$ which, depending on the model used, contains either 2D keypoint locations $\mathbf{p} \in R^{32}$ and edge weights $\mathbf{w} \in R^{64}$, or FLAME parameters consisting of geometry $(\beta, \psi, \theta) \in R^{156}$, albedo coefficients $\alpha \in R^{50}$, camera matrix $c$, and lighting parameters $l$. This amounts to at most 2.5 KBytes/frame for encoding the 3D head model of a Sender's image. As only the time-varying pose information needs to be sent every frame, the information sent for the 3D reconstruction is substantially less than what would be needed to send a whole 3D model of the Sender, greatly reducing network transmission time.
96
+
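+ As a rough payload estimate, assuming float32 values, a 3-value camera model $c = (s, t)$, and 27 lighting coefficients (the lighting size is our assumption, not stated above):
+
+ ```python
+ import numpy as np
+
+ geometry, albedo, camera, lighting = 156, 50, 3, 27   # coefficient counts
+ n_floats = geometry + albedo + camera + lighting
+ print(n_floats * np.dtype(np.float32).itemsize)       # 944 bytes/frame
+ ```
+
+ This lands well under the 2.5 KBytes/frame bound, leaving room for serialization overhead.
+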
97
+ ### 3.4 Decoder-Receiver
98
+
99
+ The Decoder is responsible for using the Receiver’s position $p$ and parameters $e$ predicted by the Encoder to reconstruct a view-dependent RGB image of the Sender. For the simpler 2D case, the decoder is a single network. Below we explain the 3D version that includes additional, view-dependent rendering steps.
100
+
101
+ There are two main steps in the process. First, the latent code $e$ is used to reconstruct the 3D head mesh of the Sender. Second, we use the personalized neural renderer to take the coarse 3D head mesh, rotate it to the Receiver's position, and generate a photo-realistic, view-dependent image of the Sender to appear on the view-dependent display.
102
+
103
+ #### 3.4.1 3D Neural Head Renderer
104
+
105
+ In the 3D setting, given the estimated FLAME parameters from the Encoder, the Decoder reconstructs the FLAME 3D head mesh using linear blend skinning (LBS) on parameters $e$. To ensure that the head is consistently centered in the Receiver's display, we rotate the mesh to the viewpoint $p$ and subtract the midpoint of the vertices on each ear from all vertices on the mesh. One of our preliminary baselines uses the coarse albedo parameters to texture and render the mesh. However, simple texture mapping is not photorealistic. Hence, we apply deferred neural rendering and first render the 3D mesh with UV coordinates as the texture. This UV map rendering then conditions the subsequent neural renderer along with a subset of the $e$ parameters. Lastly, because the FLAME parameters are predicted from a single image, we apply a small, one-sided box filter to the pose ($\theta$) and shape ($\beta$) parameters during online system evaluation.
106
+
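+ A minimal sketch of such a one-sided (causal) box filter; the window size is an assumed value:
+
+ ```python
+ from collections import deque
+ import numpy as np
+
+ class OneSidedBoxFilter:
+     """Causal moving average over the last k frames (no lookahead), applied
+     per parameter vector to suppress frame-to-frame jitter."""
+     def __init__(self, k=4):
+         self.window = deque(maxlen=k)
+
+     def __call__(self, params):
+         self.window.append(np.asarray(params, dtype=np.float32))
+         return np.mean(self.window, axis=0)
+ ```
+
+ One filter instance each would be kept for the pose ($\theta$) and shape ($\beta$) streams; being one-sided, the filter adds smoothing without waiting on future frames.
+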
107
+ #### 3.4.2 Cached 3D Neural Renderer
108
+
109
+ To accommodate the low latency required for 3D teleconferencing, we use an optimized version [51] of the deferred neural renderer [45]. It is composed of two neural networks: a deep caching network that turns personalized neural textures into frame-specific neural feature maps, and a lightweight warping network that warps the feature maps cached from the previous frame.
110
+
111
+ The larger caching network can therefore be run sparingly, allowing us to reduce latency while minimally decreasing the visual quality of the generated image. On a multi-GPU machine, this method parallelizes and also increases the rendering frame rate. Note that, because this neural renderer is grounded with a 3D mesh, we are able to rotate the mesh (and thereby the UV map) to perform viewpoint-dependent rendering at inference time, based on the Receiver's tracking data. Multiple viewpoints can be rendered for different display configurations, such as right/left perspectives for stereo.
112
+
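+ A sketch of the resulting per-frame control flow, with an assumed refresh cadence and placeholder network callables:
+
+ ```python
+ def render_frame(frame_idx, params, view, caching_net, warp_net, cache,
+                  refresh_every=4):
+     """Run the heavy caching network only every `refresh_every` frames
+     (assumed cadence); otherwise warp the cached feature maps to the new
+     head parameters and Receiver viewpoint with the lightweight network."""
+     if cache is None or frame_idx % refresh_every == 0:
+         cache = caching_net(params, view)   # slow path: refresh neural features
+     image = warp_net(cache, params, view)   # fast path: per-frame warp
+     return image, cache
+ ```
+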
113
+ ### 3.5 Receiver Hardware
114
+
115
+ For our proof-of-concept implementation, the Receiver side hardware includes a spherical view-dependent display [55], a computer with a GPU, and a tracking system. In our current implementation, we explore the modularity of the platform by running the Decoder and Display processes on separate computers to illustrate that the display may be a self-contained system or that the Decoder may be run using cloud services. However, they can also be run on a single computer. In Section 4.2, we analyze the timings of the different system components; separating them allows us to consider this particular scenario.
116
+
117
+ #### 3.5.1 Spherical View-dependent Display + Computer
118
+
119
+ We use a large spherical view-dependent display [55], also known as a fish-tank virtual reality (FTVR) display. It uses a 24-inch plexiglass spherical screen with a mosaic of 4 registered mini projectors projecting through an 18-inch diameter hole at the bottom. This particular display is well suited for showing a view-dependent rendering of a Sender's head because the spherical shape allows the Receiver to walk around the display and there are no seams. The mosaic of projectors provides a high-resolution, bright image. It has also been shown to be the most effective type of display for representing size and shape constancy, which are important for human faces [57]. Lastly, the sphere is large enough that a 1:1 aspect ratio is possible for human heads, allowing for investigating whether the size of a 3D rendering of a speaker plays a role in the perceived quality of presence. The display supports both view-dependent and stereo depth cues. If such a display is not available, our system also supports rendering to a flat screen.
120
+
121
+ #### 3.5.2 Tracking System
122
+
123
+ The tracking system provides the Receiver's position and viewing angle to the view-dependent display to achieve view-dependent rendering. The quality of view-dependent rendering is sensitive to errors in viewpoint tracking, since tracking error contributes significantly to the angular error of pixels seen by the eye on a spherical view-dependent display [12, 56]. For our proof-of-concept implementation, we use OptiTrack (NaturalPoint Inc., Corvallis, OR) Prime-41 cameras to capture the Receiver's position and orientation. This system uses retroreflective markers mounted on the Receiver's shutter glasses. The current tracking system has less than 0.2 mm of measurement error, and the real-time streaming application connected with Unity has less than 10 ms latency. The tracker data is used both by the Decoder and the Display Renderer (see 3.5.3). The Decoder uses the Receiver's position and orientation to render perspective-dependent images for display.
124
+
125
+ #### 3.5.3 Receiver Display Render
126
+
127
+ The rendering pipeline for the spherical display [12] is implemented in Unity (Unity Technologies, San Francisco, CA). It features a two-pass rendering approach: 1. render the image from the Receiver's perspective, and 2. render the pixels on the output display. This separation enables the neural renderer to be trained display-agnostic for planar frontal views while mapping to the desired display at runtime. For the spherical display, the second pass involves a mapping between 2D projector pixels and 3D surface positions on a non-planar surface. This warping transformation is achieved by sampling the 2D image texture in a shader program and using the multiple-projector calibration matrix [55]. The same rendering pipeline also supports several different display modes, including mosaic display on the FTVR sphere, flatscreen display, and a virtual display where you can freely move around the rendered objects in a virtual scene; thus, it is versatile for researchers experimenting with different view-dependent display types.
128
+
129
+ We build on top of the two-pass rendering to further integrate the neural rendering into the pipeline by adding a rotating plane in the scene that always stays vertical with its normal facing the user. The neural renderer only requires the Receiver's position, and the requested view is thereby always upright and rendered onto a virtual planar image plane without distortion. The user always sees the view-corrected image based on their tracked position and the display geometry. When they move around, this image and its orientation are updated in real time, showing different aspects of the reconstructed talking head through neural rendering. This technique creates a sense of viewing a 3D object while only rendering flat 2D images.
130
+
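+ A sketch of the underlying orientation computation (a yaw-only rotation about the vertical axis so the plane stays upright), not the actual Unity script:
+
+ ```python
+ import numpy as np
+
+ def billboard_yaw(viewer_pos, plane_pos):
+     """Yaw angle (degrees, about +Y) that points the plane's normal at the
+     tracked Receiver while keeping the plane vertical."""
+     d = np.asarray(viewer_pos, float) - np.asarray(plane_pos, float)
+     return np.degrees(np.arctan2(d[0], d[2]))
+ ```
+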
131
+ ### 3.6 Tele-Communication Network
132
+
133
+ The goal of the telecommunication network is to provide flexibility for where the computational resources reside for each of the modules, while at the same time providing an end-to-end infrastructure that mimics real-world conditions to support stress testing the different modules used for 3D teleconferencing. Thus, we use a WebRTC backbone for communication with a ZMQ wrapper for each of the components in the platform. These are described next.
134
+
135
+ #### 3.6.1 Internet backbone
136
+
137
+ We use the WebRTC protocol [40], via the libdatachannel [1] implementation, to negotiate a direct peer-to-peer connection between the Encoder-Sender and the Decoder-Receiver over the internet. A WebRTC UDP-configured data channel [21] facilitates the real-time transfer of 3D head parameters between the Sender and the Receiver. The 3D head parameters corresponding to a single frame are serialized using Protocol Buffers in order to be transmissible over the data channel. The UDP data channels are also used for data transfer between components and Persistent Data Storage that is not local, as well as for sending the Tracker data to the Decoder. All the data channels are wrapped with a ZeroMQ [58] wrapper to provide a common interface for all the interprocess communication, including support for different languages.
138
+
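+ A minimal pyzmq sketch of such a wrapped interface; the PUB/SUB pattern and endpoint are illustrative assumptions, and in the platform the underlying transport is the WebRTC data channel rather than raw TCP:
+
+ ```python
+ import zmq
+
+ ctx = zmq.Context()
+
+ # Encoder side: publish the serialized head parameters for each frame.
+ pub = ctx.socket(zmq.PUB)
+ pub.bind("tcp://*:5555")
+
+ def send_frame(payload: bytes):          # payload: protobuf-serialized params
+     pub.send(payload)
+
+ # Decoder side (possibly another machine, or another language binding):
+ sub = ctx.socket(zmq.SUB)
+ sub.connect("tcp://localhost:5555")
+ sub.setsockopt(zmq.SUBSCRIBE, b"")       # subscribe to all messages
+ # frame_bytes = sub.recv()
+ ```
+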
139
+ As the communication channels use UDP/IP with a ZeroMQ wrapper for all the communication interfaces, all the components of the end-to-end system can run on different machines as needed. Likewise, the interfaces between components have definitions for different language support, enabling researchers to have flexibility in using C++, Python, or other languages to implement specific algorithms. For example, in our current proof-of-concept implementation, the networking is C++, the Encoder is implemented in Python/PyTorch, and the Decoder is implemented in Python/PyTorch.
140
+
141
+ ![01963e0a-72f6-7d90-be25-5f08193d72d3_4_934_151_704_456_0.jpg](images/01963e0a-72f6-7d90-be25-5f08193d72d3_4_934_151_704_456_0.jpg)
142
+
143
+ Figure 4: View-dependent rendering examples at different viewpoints: the first row shows the Receiver's perspective and the second row shows the different positions of the Receiver from a fixed camera location. (a) Viewpoint left of the origin, seeing the right side of the Sender's face; (b) viewpoint at the origin, seeing the front of the Sender's face; (c) viewpoint right of the origin, seeing the left side of the Sender's face.
144
+
145
+ Wrapping the communication channels supports sending data structures seamlessly between different processes in different languages, freeing researchers to focus on their preferred tools while the infrastructure takes care of the scaffolding needed to get the end-to-end system working for analysis. Using this approach also ensures that components running on the same machine exchange data locally.
146
+
147
+ ## 4 MODULE EVALUATION IN OpenTeleView
148
+
149
+ We present results from analysis of each of our baseline modules when operating independently and as part of the end-to-end system. The intent is to illustrate that the performance analytics available within the end-to-end platform are effective in uncovering inter-dependencies between components within the overall system and help to determine where performance bottlenecks are coming from to guide algorithm development. The experiments reported here demonstrate the utility of testing modules in the OpenTeleView framework to address limitations otherwise unseen in isolated modules. We also show that our baseline end-to-end 3D teleconferencing implementation, along with the variations used for illustrating the effects of changes to different modules, provides a good baseline for comparing future encoders, decoders, cameras, and displays.
150
+
151
+ ### 4.1 Module Evaluation Dataset
152
+
153
+ To illustrate evaluating the performance of our individual modules, we recorded a 1920×1080 resolution, 60 FPS, stereo-view talking head dataset (main and side views) of one woman test subject. The second view is recorded to evaluate the Decoder's viewpoint-dependent rendering capabilities. The dataset includes 5 sub-sequences used for training, validation, and testing the Decoder, a sequence of fast-moving head motions for the Encoder, and a sequence for calibrating the cameras. We will make this dataset publicly available so others can evaluate their modules on the same data.
154
+
155
+ ### 4.2 Baseline System Latency
156
+
157
+ Figure 5 shows the end-to-end live transmission system pipeline with the FPS and latency of each corresponding component. The FPS results are generated by measuring the run time of each individual component. The theoretical latency is computed directly by taking the reciprocal of the FPS. For comparison, we also estimated the perceptual latency by computing the time difference between the same movement of a real human and the rendered image on the display. To measure this, we use another high-speed camera to capture, in the same frame, the eye blink motion of a Sender talking and their image on the view-dependent display, and calculate the time difference between the blink motions. Using this approach, we also take into account the OS- and camera-dependent delays to get an estimate of the overall system latency that would occur in a real system. With our particular camera hardware and OS, the perceptual latency is approximately 280 ms; thus, non-encoder/decoder related elements contribute around 100 ms of latency. The additional latency in the perceptual measurement comes from the time between eye blink and the next frame capture (half a frame delay on average), the asynchronous queue, and the minimal smoothing applied to the estimated head parameters to mitigate jitter. Note that, due to the cached neural renderer, the perceptual novel-view-synthesis latency is much lower, at 35 ms, which facilitates a faithful VR experience even if the whole system communication is slower.
158
+
159
+ <table><tr><td>Pipeline Component</td><td>Camera</td><td>Encoder</td><td>ZMQ</td><td>Internet</td><td>ZMQ</td><td>Decoder</td><td>ZMQ</td><td>Display</td><td>Total</td></tr><tr><td>FPS</td><td>90</td><td>25</td><td>>100</td><td>>100</td><td>>100</td><td>54</td><td>>100</td><td>>100</td><td>25</td></tr><tr><td>Latency (ms)</td><td>21</td><td>40</td><td>3</td><td>2</td><td>3</td><td>90</td><td>3</td><td>10</td><td>172</td></tr></table>
160
+
161
+ Figure 5: System Latency Breakdown: The blue-coded parts are major system components, green-coded parts are inter-transmission ZMQ, and the red parts are total FPS and latency. The total end-to-end latency with our computer hardware configuration is 172 ms at 25 FPS. Additional latency due to the camera interface to the OpenTeleView components depends upon operating system drivers and is not included in these figures.
162
+
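+ The figures are self-consistent: summing the per-component latencies reproduces the 172 ms total, and the 25 FPS system rate corresponds to the Encoder's 40 ms stage being the throughput bottleneck.
+
+ ```python
+ # Per-component latencies from Figure 5, in milliseconds.
+ stages = {"Camera": 21, "Encoder": 40, "ZMQ_out": 3, "Internet": 2,
+           "ZMQ_in": 3, "Decoder": 90, "ZMQ_disp": 3, "Display": 10}
+ print(sum(stages.values()))       # 172 ms end-to-end
+ print(1000 / stages["Encoder"])   # 25.0 FPS: slowest pipelined stage
+ ```
+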
163
+ ### 4.3 Velocity-based 2D Face Tracking
164
+
165
+ To evaluate the speed of the Encoder with our velocity-based 2D face tracking method, we independently test the Encoder using our recorded video of a subject moving their head quickly. The 1920 × 1080, 60 FPS video contains 995 frames in total. Using our velocity method, the Encoder reruns the full face detection 58 times; without the velocity method, the Encoder reruns full face detection 676 times. Thus, our simple velocity method, which predicts the next frame's face location, achieves a significant reduction in the number of times we have to rerun the time-costly face detection algorithm. From the perspective of the OpenTeleView platform affordances for this module, the timing information and the ability to switch between recorded and live video feeds within the whole framework provide useful analytics to target timing bottlenecks and facilitate improving each module. In this case, we compared three different approaches that trade off face detection accuracy against the computational load affecting latency.
166
+
167
+ ### 4.4 Decoder Reconstruction Quality
168
+
169
+ To evaluate the quality of our displayed image, we independently test our Decoder's neural renderer on the withheld test sequence of our talking head dataset (main view). We are able to achieve a peak signal-to-noise ratio (PSNR) of 27.5 on the image from which the 3D head parameters have been estimated. Furthermore, we also test our model's ability to perform view-dependent rendering by evaluating it on the second (side) view. This is done by taking the estimated 3D head parameters from the frontal recording and rotating those corresponding to the head pose based on the rotation matrix between the main and side cameras. This is an especially difficult setting, for which our model was able to reconstruct the entire sequence with an average PSNR of 25.7. Note that, when running our model on the parameters estimated from the side view itself, we are only able to achieve a PSNR of 26.7, showing that this side view is in general more difficult to reconstruct. Qualitative results and error maps can be seen in Figure 6.
170
+
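+ For reference, the PSNR used above follows the standard definition (shown here for images normalized to [0, 1]):
+
+ ```python
+ import numpy as np
+
+ def psnr(img, ref, max_val=1.0):
+     """Peak signal-to-noise ratio between a reconstruction and its reference."""
+     mse = np.mean((np.asarray(img, np.float64) - np.asarray(ref, np.float64)) ** 2)
+     return 10.0 * np.log10(max_val ** 2 / mse)
+ ```
+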
171
+ ![01963e0a-72f6-7d90-be25-5f08193d72d3_5_934_149_705_570_0.jpg](images/01963e0a-72f6-7d90-be25-5f08193d72d3_5_934_149_705_570_0.jpg)
172
+
173
+ Figure 6: Comparison of our Decoder against the ground truth image for both main and side view examples. Examples where we perform novel-view synthesis (NVS) on the parameters from the main view are also shown.
174
+
175
+ ### 4.5 OpenTeleView Modularity
176
+
177
+ To demonstrate the modularity of the proposed platform, we also experiment with the 2D encoder and decoder introduced in [20]. In this setting, we transfer the 2D keypoint locations $\mathbf{p}$ and their edge weights $\mathbf{w}$ obtained from the encoder. These are first rasterized into a coarse mesh, which is then lifted to a full image using a UNet. The latency for the encoder and decoder is 4 ms and 45 ms, respectively. Figure 7 shows example images using this approach. It demonstrates that the platform can support entirely different encoder and decoder networks and corresponding parameterizations (2D vs. 3D), without having to change the network communication or other parts of the framework.
178
+
179
+ ### 4.6 OpenTeleView Integration
180
+
181
+ While each of our models is tested and developed in isolation using recorded videos, when integrated, upstream delays in capturing and processing, such as variable camera frame rates, lead to the degradation of downstream performance.
182
+
183
+ With variable frame rate camera input, we observed that the velocity-based head tracking and the warping-based neural renderer must compensate for increased differences between incoming frames. Further, the jitter associated with the incoming frames is not consistent; thus, a neural renderer may see variable time differences between frames, further challenging research that uses this approach. We illustrate how both modules are affected by changes in the overall system framerate by subsampling frames in recorded videos and measuring performance versus the input framerate. These results are shown in Figure 8.
184
+
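+ The subsampling itself is straightforward (a sketch; the function name is hypothetical): keeping every k-th frame of a 60 FPS recording simulates a 60/k FPS camera.
+
+ ```python
+ def subsample(frames, keep_every=2):
+     """Simulate a lower camera frame rate, e.g., 60 -> 30 FPS for keep_every=2."""
+     return frames[::keep_every]
+ ```
+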
185
+ The ability of our OpenTeleView end-to-end platform to integrate different components easily enables isolating each component's performance within real-world scenarios. In our example implementation, we illustrate that by switching in different encoder and decoder solutions with both live and recorded video feeds and performing careful performance analysis, the strengths and weaknesses of each component are identified along with the inter-dependencies amongst the components. Thus, our OpenTeleView platform fills a significant gap in assessing different computer vision approaches to the encoder and decoder methods, which are usually assessed only in isolation on pre-recorded datasets. Hence, our contribution enables apples-to-apples comparisons of different algorithms for 3D teleconferencing.
186
+
187
+ ![01963e0a-72f6-7d90-be25-5f08193d72d3_6_159_154_700_457_0.jpg](images/01963e0a-72f6-7d90-be25-5f08193d72d3_6_159_154_700_457_0.jpg)
188
+
189
+ Figure 7: Example of modularity: we substituted a 2D AutoLink method [20] that conditions on 2D keypoints instead of a 3D mesh. The encoder/decoder interface makes this a simple operation so that researchers can swap in different approaches to compare performance in real-world-like end-to-end teleconferencing.
190
+
191
+ ![01963e0a-72f6-7d90-be25-5f08193d72d3_6_197_871_628_772_0.jpg](images/01963e0a-72f6-7d90-be25-5f08193d72d3_6_197_871_628_772_0.jpg)
192
+
193
+ Figure 8: Reduced input video/image frame rates negatively impact the performance of both the velocity-based head tracking and the neural rendering.
194
+
195
+ ## 5 LIMITATIONS AND FUTURE WORK
196
+
197
+ The focus of this paper is on the OpenTeleView platform rather than the specifics of the baseline encoder/decoder pair we implemented. In that context, even though our platform has most of the major modules implemented for end-to-end 3D teleconferencing, there are some components which we leave for future work. These include: additional analytics such as temporal and spatial jitter measurements; additional baseline use cases such as multi-camera and mobile displays; symmetric communication and multicast abilities; embedded, synchronized audio support rather than out-of-band audio; and parameterized input control so that the input video stream characteristics can be easily adjusted to simulate different real-world camera input statistics.
198
+
199
+ ## 6 CONCLUSION
200
+
201
+ We created the OpenTeleView platform along with two baseline implementations that illustrate how end-to-end 3D teleconferencing can work and how future research on separate modules can be analyzed. The baseline 3D implementation provides a medium-fidelity teleconferencing experience using modifications of existing techniques available in the literature. The second method uses a much simpler 2D representation to illustrate the modularity and flexibility of the encoder/decoder to support a range of approaches researchers may investigate. The platform is intended to use off-the-shelf components for computation, camera, and display hardware along with an internet-based communication infrastructure so that it is accessible to a large range of researchers. This approach enables research on specific approaches to encode the input video and decode it to provide the view-dependent rendering needed for 3D teleconferencing to be tested and analysed on a common end-to-end platform. By doing so, research contributions on specific modules can be tested in real-world scenarios to facilitate constant innovation in 3D teleconferencing technology and lead the way to establishing this new form of remote communication.
202
+
203
+ ## REFERENCES
204
+
205
+ [1] P. Ageneau. https://github.com/paullouisageneau/libdatachannel, 2022.
206
+
207
+ [2] V. Blanz and T. Vetter. A morphable model for the synthesis of 3d faces. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '99, p. 187-194. ACM Press/Addison-Wesley Publishing Co., USA, 1999. doi: 10.1145/ 311535.311556
208
+
209
+ [3] A. Bulat and G. Tzimiropoulos. How far are we from solving the 2d &3d face alignment problem? (and a dataset of 230,000 3d facial landmarks). In International Conference on Computer Vision, 2017.
210
+
211
+ [4] C. Cao, Y. Weng, S. Zhou, Y. Tong, and K. Zhou. Facewarehouse: A 3d facial expression database for visual computing. IEEE Transactions on Visualization and Computer Graphics, 20(3):413-425, 2014. doi: 10.1109/TVCG.2013.249
212
+
213
+ [5] Q. Cao, L. Shen, W. Xie, O. M. Parkhi, and A. Zisserman. Vggface2: A dataset for recognising faces across pose and age. In 2018 13th IEEE International Conference on Automatic Face Gesture Recognition (FG 2018), pp. 67-74, 2018. doi: 10.1109/FG.2018.00020
214
+
215
+ [6] L. Casas and K. Mitchell. Intermediated reality: A framework for communication through tele-puppetry. Frontiers in Robotics and AI, 6:60, 2019.
216
+
217
+ [7] S.-Y. Chen, F.-L. Liu, Y.-K. Lai, P. L. Rosin, C. Li, H. Fu, and L. Gao. Deepfaceediting: Deep face generation and editing with disentangled geometry and appearance control. ACM Trans. Graph., 40(4), jul 2021. doi: 10.1145/3450626.3459760
218
+
219
+ [8] J. S. Chung, A. Nagrani, and A. Zisserman. Voxceleb2: Deep speaker recognition. CoRR, abs/1806.05622, 2018.
220
+
221
+ [9] M. Dou, Y. Shi, J.-M. Frahm, H. Fuchs, B. Mauchly, and M. Marathe. Room-sized informal telepresence system. In 2012 IEEE Virtual Reality Workshops (VRW), pp. 15-18, 2012. doi: 10.1109/VR.2012. 6180869
222
+
223
+ [10] P. Dou, S. K. Shah, and I. A. Kakadiaris. End-to-end 3d face reconstruction with deep neural networks, 2017.
224
+
225
+ [11] P. Dou, Y. Wu, S. Shah, and I. Kakadiaris. Robust 3d face shape reconstruction from single images via two-fold coupled structure learning and off-the-shelf landmark detectors. In Proceedings of the British Machine Vision Conference. BMVA Press, 2014. doi: 10.5244/C.28.131
228
+
229
+ [12] D. B. Fafard. A virtual testbed for fish-tank virtual reality: Improving calibration with a virtual-in-virtual display. 2019.
230
+
231
+ [13] Y. Feng, H. Feng, M. J. Black, and T. Bolkart. Learning an animatable detailed 3d face model from in-the-wild images. ACM Transactions on Graphics (TOG), 40(4):1-13, 2021.
232
+
233
+ [14] O. Fried, A. Tewari, M. Zollhöfer, A. Finkelstein, E. Shechtman, D. B. Goldman, K. Genova, Z. Jin, C. Theobalt, and M. Agrawala. Text-based editing of talking-head video. ACM Trans. Graph., 38(4), jul 2019. doi: 10.1145/3306346.3323028
234
+
235
+ [15] P. Garrido, M. Zollhöfer, D. Casas, L. Valgaerts, K. Varanasi, P. Pérez, and C. Theobalt. Reconstruction of personalized 3d face rigs from monocular video. ACM Trans. Graph., 35(3), may 2016. doi: 10.1145/2890493
236
+
237
+ [16] P. Ghosh, P. S. Gupta, R. Uziel, A. Ranjan, M. J. Black, and T. Bolkart. Gif: Generative interpretable faces. In 2020 International Conference on 3D Vision (3DV), pp. 868-878. IEEE, 2020.
238
+
239
+ [17] S. Gibbs, C. Arapis, and C. Breiteneder. Teleport - towards immersive copresence. Multimedia Syst., 7:214-221, 05 1999. doi: 10.1007/ s005300050123
240
+
241
+ [18] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker. Multi-pie. In 2008 8th IEEE International Conference on Automatic Face Gesture Recognition, pp. 1-8, 2008. doi: 10.1109/AFGR.2008.4813399
242
+
243
+ [19] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016. doi: 10.1109/CVPR. 2016.90
244
+
245
+ [20] X. He, B. Wandt, and H. Rhodin. Autolink: Self-supervised learning of human skeletons and object outlines by linking keypoints. arXiv preprint arXiv:2205.10636, 2022.
246
+
247
+ [21] R. Jesup, S. Loreto, and M. Tüxen. WebRTC Data Channels. RFC 8831, Jan. 2021. doi: 10.17487/RFC8831
248
+
249
+ [22] A. Jones, M. Lang, G. Fyffe, X. Yu, J. Busch, I. McDowall, M. Bolas, and P. Debevec. Achieving eye contact in a one-to-many 3d video teleconferencing system. ACM Trans. Graph., 28(3), jul 2009. doi: 10.1145/1531326.1531370
250
+
251
+ [23] J. T. Kajiya. The rendering equation. In Proceedings of the 13th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '86, p. 143-150. Association for Computing Machinery, New York, NY, USA, 1986. doi: 10.1145/15922.15902
252
+
253
+ [24] T. Karras, S. Laine, and T. Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401-4410, 2019.
254
+
255
+ [25] C. Kuster, N. Ranieri, A. Agustina, H. Zimmer, J. Bazin, C. Sun, T. Popa, and M. Gross. Towards next generation 3d teleconferencing systems. pp. 1-4, 10 2012. doi: 10.1109/3DTV.2012.6365454
256
+
257
+ [26] J. Lawrence, D. B. Goldman, S. Achar, G. M. Blascovich, J. G. Desloge, T. Fortes, E. M. Gomez, S. Häberling, H. Hoppe, A. Huibers, C. Knaus, B. Kuschak, R. Martin-Brualla, H. Nover, A. I. Russell, S. M. Seitz, and K. Tong. Project starline: A high-fidelity telepresence system. ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 40(6), 2021.
258
+
259
+ [27] J. Lawrence, D. B. Goldman, S. Achar, G. M. Blascovich, J. G. Desloge, T. Fortes, E. M. Gomez, S. Häberling, H. Hoppe, A. Huibers, C. Knaus, B. Kuschak, R. Martin-Brualla, H. Nover, A. I. Russell, S. M. Seitz, and K. Tong. Project starline: A high-fidelity telepresence system. ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 40(6), 2021.
260
+
261
+ [28] T. Li, T. Bolkart, M. J. Black, H. Li, and J. Romero. Learning a model of facial shape and expression from 4D scans. ACM Transactions on Graphics, (Proc. SIGGRAPH Asia), 36(6):194:1-194:17, 2017.
262
+
263
+ [29] A. Maimone, J. Bidwell, K. Peng, and H. Fuchs. Enhanced personal autostereoscopic telepresence system using commodity depth cameras. Computers & Graphics, 36(7):791-807, 2012. Augmented Reality Computer Graphics in China. doi: 10.1016/j.cag.2012.04.011
264
+
265
+ [30] R. Martin-Brualla, R. Pandey, S. Yang, P. Pidlypenskyi, J. Taylor, J. Valentin, S. Khamis, P. Davidson, A. Tkach, P. Lincoln, A. Kowdle, C. Rhemann, D. B. Goldman, C. Keskin, S. Seitz, S. Izadi, and S. Fanello. Lookingood: Enhancing performance capture with real-time neural re-rendering. ACM Trans. Graph., 37(6), dec 2018. doi: 10.1145/3272127.3275099
268
+
269
+ [31] M. Meshry, S. Suri, L. S. Davis, and A. Shrivastava. Learned spatial representations for few-shot talking-head synthesis, 2021.
272
+
273
+ [32] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoor-thi, and R. Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In European conference on computer vision, pp. 405-421. Springer, 2020.
274
+
275
+ [33] T. Nguyen-Phuoc, C. Li, L. Theis, C. Richardt, and Y.-L. Yang. Hologan: Unsupervised learning of 3d representations from natural images. In The IEEE International Conference on Computer Vision (ICCV), Nov 2019.
276
+
277
+ [34] M. Niemeyer and A. Geiger. Giraffe: Representing scenes as compositional generative neural feature fields. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2021.
278
+
279
+ [35] S. Orts-Escolano, C. Rhemann, S. Fanello, W. Chang, A. Kowdle, Y. Degtyarev, D. Kim, P. L. Davidson, S. Khamis, M. Dou, et al. Holoportation: Virtual 3d teleportation in real-time. In Proceedings of the 29th annual symposium on user interface software and technology, pp. 741-754, 2016.
280
+
281
+ [36] Y. Pan, O. Oyekoya, and A. Steed. A surround video capture and presentation system for preservation of eye-gaze in teleconferencing applications. Presence, 24(1):24-43, 2015.
282
+
283
+ [37] P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter. A 3d face model for pose and illumination invariant face recognition. In 2009 Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance, pp. 296-301, 2009. doi: 10.1109/AVSS. 2009.58
284
+
285
+ [38] A. Ranjan, T. Bolkart, S. Sanyal, and M. J. Black. Generating 3d faces using convolutional mesh autoencoders. In Proceedings of the European Conference on Computer Vision (ECCV), September 2018.
286
+
287
+ [39] H. M. Rara, A. A. Farag, and T. Davis. Model-based 3d shape recovery from single images of unknown pose and illumination using a small number of feature points. In Proceedings of the 2011 International Joint Conference on Biometrics, IJCB '11, p. 1-7. IEEE Computer Society, USA, 2011. doi: 10.1109/IJCB.2011.6117493
288
+
289
+ [40] E. Rescorla. WebRTC Security Architecture. RFC 8827, Jan. 2021. doi: 10.17487/RFC8827
290
+
291
+ [41] S. Sanyal, T. Bolkart, H. Feng, and M. Black. Learning to regress 3d face shape and expression from an image without 3d supervision. In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), June 2019.
292
+
293
+ [42] S. Suwajanakorn, S. M. Seitz, and I. Kemelmacher-Shlizerman. Synthesizing obama: Learning lip sync from audio. ACM Trans. Graph., 36(4), jul 2017. doi: 10.1145/3072959.3073640
294
+
295
+ [43] Z. Tan, M. Chai, D. Chen, J. Liao, Q. Chu, L. Yuan, S. Tulyakov, and N. Yu. Michigan: Multi-input-conditioned hair image generation for portrait editing. ACM Trans. Graph., 39(4), jul 2020. doi: 10. 1145/3386569.3392488
296
+
297
+ [44] A. Tewari, M. Elgharib, G. Bharaj, F. Bernard, H.-P. Seidel, P. Pérez, M. Zollhofer, and C. Theobalt. Stylerig: Rigging stylegan for 3d control over portrait images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6142-6151, 2020.
298
+
299
+ [45] J. Thies, M. Zollhöfer, and M. Nießner. Deferred neural rendering: Image synthesis using neural textures. ACM Transactions on Graphics (TOG), 38(4):1-12, 2019.
300
+
301
+ [46] J. Thies, M. Zollhöfer, M. Nießner, L. Valgaerts, M. Stamminger, and C. Theobalt. Real-time expression transfer for facial reenactment. ACM Trans. Graph., 34(6), oct 2015. doi: 10.1145/2816795.2818056
302
+
303
+ [47] J. Thies, M. Zollhöfer, M. Stamminger, C. Theobalt, and M. Nießner. Face2face: Real-time face capture and reenactment of RGB videos. CoRR, abs/2007.14808, 2020.
304
+
305
+ [48] D. Vlasic, M. Brand, H. Pfister, and J. Popović. Face transfer with multilinear models. ACM Trans. Graph., 24(3):426-433, jul 2005. doi: 10.1145/1073204.1073209
306
+
307
+ [49] M. Wang, W. Deng, J. Hu, J. Peng, X. Tao, and Y. Huang. Racial faces in-the-wild: Reducing racial bias by deep unsupervised domain adaptation. CoRR, abs/1812.00194, 2018.
308
+
309
+ [50] T.-C. Wang, A. Mallya, and M.-Y. Liu. One-shot free-view neural talking-head synthesis for video conferencing, 2021.
312
+
313
+ [51] F. Yu, S. Fels, and H. Rhodin. Scaling neural face synthesis to high fps and low latency by neural caching, 2022.
314
+
315
+ [52] E. Zakharov, A. Shysheya, E. Burkov, and V. Lempitsky. Few-shot adversarial learning of realistic neural talking head models, 2019.
316
+
317
+ [53] C. Zhang, Q. Cai, P. A. Chou, Z. Zhang, and R. Martin-Brualla. Viewport: A distributed, immersive teleconferencing system with infrared dot pattern. IEEE MultiMedia, 20(1):17-27, 2013. doi: 10.1109/ MMUL.2013.12
318
+
319
+ [54] Y. Zhang, J. Yang, Z. Liu, R. Wang, G. Chen, X. Tong, and B. Guo. Virtualcube: An immersive 3d video communication system. IEEE Transactions on Visualization and Computer Graphics, 28(5):2146-2156, 2022.
320
+
321
+ [55] Q. Zhou, G. Miller, K. Wu, D. Correa, and S. Fels. Automatic calibration of a multiple-projector spherical fish tank vr display. pp. 1072-1081, 03 2017. doi: 10.1109/WACV.2017.124
322
+
323
+ [56] Q. Zhou, G. Miller, K. Wu, I. Stavness, and S. Fels. Analysis and practical minimization of registration error in a spherical fish tank virtual reality system. In Asian Conference on Computer Vision, pp. 519-534. Springer, 2016.
324
+
325
+ [57] Q. Zhou, F. Wu, S. Fels, and I. Stavness. Closer object looks smaller: Investigating the duality of size perception in a spherical fish tank vr display. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-9, 2020. doi: 10.1145/3313831.3376601
326
+
327
+ [58] ZMQ. https://github.com/zeromq, 2022.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/CrkHdts-KT/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,211 @@
1
+ § OPENTELEVIEW: AN OPEN 3D TELECONFERENCING RESEARCH PLATFORM
2
+
3
+ Category: Research
4
+
5
+ § ABSTRACT
6
+
7
+ Recent demonstrations of 3D telepresence provide a glimpse into a future where 2D video communication is replaced with photo-realistic virtual avatars rendered on 3D displays. However, existing technology demonstrations typically run on expensive dedicated devices that require the calibration of multiple cameras by experts, and the underlying reconstruction, compression, transmission, and rendering methods remain proprietary. We describe our open platform for real-time end-to-end 3D teleconferencing using commodity hardware, coupled with a modular software structure for inserting advanced computer vision algorithms supporting research and development. We demonstrate the utility of our modular end-to-end approach by integrating state-of-the-art modules and improving them based on an analysis of current bottlenecks targeting low-latency processing. We include a baseline implementation supporting real-time 3D teleconferencing that provides a new benchmark for evaluation of current and future algorithms. We demonstrate the practicality of our approach with a baseline 3D teleconferencing system running at 25 frames per second with 172 ms latency on consumer GPUs that works from a single RGB camera input and supports various 3D display technologies. Our 3D teleconferencing platform is open source, which paves the way for computer vision, computer graphics, and HCI research to continue innovating together to make 3D teleconferencing the telecommunication standard.
8
+
9
+ § 1 INTRODUCTION
10
+
11
+ With the dramatically accelerated shift to online meetings from the impact of the COVID-19 pandemic, there has been a resurgence in the need for new teleconferencing technology that creates a more real and in-person experience. One major challenge is to give teleconferencing a feeling of presence, including eye contact and situational awareness of each person's real-world space, such that pointing and gestures are coordinated. Hence, more research effort is appearing on teleconferencing that allows the user to appear in 3D and maintain direct eye contact with multiple speakers to enhance the overall communication experience and improve information transmission efficiency [25]. Virtual Reality (VR) and Augmented Reality (AR) are the two main trends for creating 3D experiences in recent years. These trends use three different types of hardware: headsets (HMDs) that connect to a PC, 2D semi-transparent displays like Google Glass, and standalone 3D display devices. These displays support view-dependent rendering, such as that used in Fish Tank Virtual Reality (FTVR), which creates an effective method to support presence with stereo and motion parallax depth cues. However, these systems require rendering a person's likeness from different viewpoints, which is not available without some mechanism to capture and transmit the users' 3D characteristics. A number of proprietary systems have been proposed to achieve this goal, e.g., the Google Starline project [26], Microsoft Holoportation [35], and [36], but each is either a closed system or relies on large-scale, proprietary, or prohibitively expensive hardware. Likewise, they are unavailable for researchers to perform perceptual evaluation to determine how well they achieve a sense of presence. Furthermore, the complex infrastructure to test proposed new research algorithms for supporting different aspects of the 3D teleconferencing pipeline is not readily accessible; thus, research results are typically reported in isolation without the opportunity to stress test them within the ecosystem of an end-to-end system. Our contribution fills this missing piece.
12
+
13
+ We describe the OpenTeleView (actual name hidden for review) platform, an end-to-end platform that supports researchers in contributing to different parts of the pipeline in a 3D teleconferencing system. Within the platform, each component's performance can be tested within a perceptually suitable 3D teleconferencing system for benchmarking and optimization. We provide an end-to-end system that uses off-the-shelf (OTS) components along with our own adaptations of existing algorithmic approaches in the literature to demonstrate: a) an accessible, low-cost, replicable end-to-end 3D teleconferencing system with the latest advances in research included as a benchmark; b) interface descriptions that provide connections for research as well as the needed scaffolding to enable end-to-end functional and perceptual performance testing; c) a modular interface for researchers to connect to common development platforms like PyTorch and Unity; and, d) a high-resolution offline recording at 60 FPS with novel-view ground truth to establish a public benchmark for 3D teleconferencing quality. Figure 1 shows an example of a user talking while her 3D image is shown on the receiver's view-dependent display.
14
+
15
+ <graphics>
16
+
17
+ Figure 1: OpenTeleView modular end-to-end 3D teleconferencing in action. The image captured by the Sender-side camera (left) is encoded into a neural 3D model. Its parameters are sent to the Receiver side, where a photo-realistic view-dependent rendering is shown on the Receiver's 3D display (right). Being modular, research results on different encoders can be substituted for analysis and comparison in real-world 3D teleconferencing experiences.
18
+
19
+ We provide results from experiments with the baseline implementation and variations to demonstrate how the platform can be used to help identify and optimize different types of algorithmic bottlenecks. Our implementation has an end-to-end latency of 172 ms with a sustained frame rate of 25 frames per second (FPS) on average, providing an excellent reference point for innovative algorithms to be tested against. Besides serving as an algorithmic research platform, the technical performance is suitable for qualitative perceptual testing, allowing different modules to be compared with each other in real-world user testing.
20
+
21
+ § 2 RELATED WORK
22
+
23
+ Research in teleconferencing has moved from 2D video to 3D. While significant research has gone into developing algorithms to make these systems feasible, we focus on the systems as a whole.
24
+
25
+ § 2.1 TALKING HEAD MODELS
26
+
27
+ Parametric head models [2, 28] are widely used in face generation [16, 42] and reenactment [46-48]. These parametric models consume a low-dimensional vector that drives avatars to control the subjects. Following this line of work, we leverage the parametric model FLAME [28] in our baseline implementation and surround it with communication and rendering modules.
28
+
29
+ § 2.2 NEURAL RENDERING
30
+
31
+ Different from traditional rendering methods [23], neural rendering does not necessarily need an explicit mesh and texture. It can be achieved by implicit neural representations [32] and Generative Adversarial Networks [24]. However, these usually focus on image quality for novel view synthesis [33, 34] and object editing [7, 14, 43], both of which rely on very deep neural networks that only run at low frame rates. We utilize a parametric mesh model with the deferred neural rendering method [44, 45], aiming at high-resolution, high-fidelity face synthesis at high frame rates, and extend it to work alongside the other modules to form a complete teleconferencing system.
32
+
33
+ § 2.3 3D TELECONFERENCING
34
+
35
+ Gibbs et al. design a room-scale system which uses a single camera, a view tracking system, and an IR emitter to render perspectively correct mono or stereo images on a wall-sized display [17]. Following that, [22] leverage a fast-rotating, convex mirror as a 3D display along with a high-speed projector to display a 3D image of a user. [29, 54] design a fully GPU-accelerated data processing and rendering pipeline and use a set of Microsoft Kinect color-plus-depth cameras to allow head-tracked stereo views to be rendered for a parallax barrier autostereoscopic display. [9, 36] design a room-scale telepresence setup which uses an array of color and depth cameras, and displays in two locations, to synthesize images of users in both rooms with correct eye gaze. [25] use a single Microsoft Kinect depth camera and an RGB camera to render users from novel views without the need for a large camera array. This rendering is then shown on a 3D display over a 3D background. [53] use an array of IR cameras and lasers, RGB and Microsoft Kinect depth cameras to develop a system for three-person teleconferencing with proper eye gaze. Another line of work uses avatars or figures [6] as surrogates, which circumvents the challenge of rendering a virtual avatar. More recently, [27] developed an end-to-end system which utilizes an array of cameras (IR, RGB, and tracking) and an autostereoscopic display, among other contributions, to enable face-to-face teleconferencing better than 2D alternatives. [30] uses a depth camera, and its 'inpainting' only supports moderate view changes. [52] and [31] are 2D and not capable of novel view synthesis. [50] could replace our FLAME-based encoder-decoder, but it is not open source and the runtime is not stated. However, all of the recent live systems are proprietary and there is no publicly available offline benchmark.
36
+
37
+ § 3 END-TO-END PIPELINE
38
+
39
+ The challenge of 3D teleconferencing is finding compatible modules and connecting them to efficiently infer, transmit, and render a realistic 3D head model so that convincing 3D motion parallax and stereo depth cues are maintained, as if the Sender appears at the Receiver's location [57]. Figure 2 illustrates the main components of our OpenTeleView platform: the Sender/Receiver hardware configuration, Encoder and Decoder, Persistent Data Storage (PDS), and communication module. The diagram shows the data flow from a Sender to a Receiver, which would be duplicated for the bi-directional system, though the two sides may have different camera and display configurations. The heart of the research for end-to-end 3D teleconferencing is the matched Encoder and Decoder pair for the encoding/compression of the input video signal and the subsequent decoding and view-dependent rendering.
40
+
41
+ <graphics>
42
+
43
+ Figure 2: The main components needed by our OpenTeleView platform to define a 3D teleconferencing system are: 1. Sender/Receiver hardware, 2. Encoder, 3. Decoder, and 4. Persistent Data Storage. Our platform provides network scaffolding and communication interfaces, including optional access to the Receiver's tracked position by the Encoder and Decoder, to support a range of end-to-end 3D teleconferencing research for performance testing, analysis, and comparison.
44
+
45
+ We structured the OpenTeleView system to capture the main components that are necessary for an end-to-end 3D teleconferencing system and designed it to be modular, with the expectation that researchers will be able to add their own hardware assumptions, with associated encoding and decoding approaches, to strike different tradeoffs between quality and resources, e.g., for real-world perception testing as well as measurements of efficiency and quality of service. We provide sufficient scaffolding to accommodate a range of hardware assumptions, such as different display types, camera inputs, and tracking technologies for rendering. The software interfaces support encoders and decoders that do frame-by-frame processing but also have access to persistent memory, which is read at startup when a connection is made between Sender and Receiver to exchange pre-trained models.
46
+
47
+ The communication infrastructure provides interfaces for interprocess communication so that modules can run on different computers as well as be written in different languages appropriate for the research.
48
+
49
+ We provide a baseline implementation with the OpenTeleView platform using a pre-trained head model and a neural renderer trained on Sender video data collected offline. Figure 3 shows the different components, each explained in detail in the subsequent sections. The Encoder generates a small set of 3D head parameters of the Sender that is sent to the Decoder. The head parameters capture enough 3D content so that the Decoder can recreate the head of the user; a neural renderer trained on the Sender's data then provides a photo-realistic, view-dependent render that can appear on the Receiver's display. The neural renderer can continue rendering different viewpoints as the Receiver moves around their display as needed. To represent the Sender's 3D head parameters, we use the FLAME [28] model because it is low-dimensional and more expressive than other representations, e.g., the FaceWarehouse model [4] and the Basel Face Model [37]. It is easy to fit to data and commonly used by many algorithms (e.g., RingNet [41], DECA [13], CoMA [38]).
50
+
51
+ FLAME's head representation includes geometry parameters for shape ($\beta$), expression ($\psi$), and pose ($\theta$). Since FLAME does not have an appearance model, like previous methods [13], we adapt the Basel Face Model [37] to be compatible with FLAME to give albedo parameters $\alpha \in R^{50}$. Together, the Encoder (see Figure 3.3) computes these head parameters for every frame of the Sender and transmits them along with the camera matrix $c$ and lighting parameters $l$ to the Decoder. The Decoder (see Figure 3.4) then uses them to reconstruct the 3D head model of the Sender. The neural renderer then maps the 3D head model to a photo-realistic version of the Sender, rendered from the viewpoint of the Receiver.
52
+
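+ As a rough sanity check on the payload size, the minimal sketch below tallies the parameter dimensions named above, assuming 32-bit floats; the 27 lighting coefficients are our assumption of a 9-band spherical-harmonics model over 3 color channels, and the exact packing used by an implementation may differ.
+
+ ```python
+ import numpy as np
+
+ # Hypothetical per-frame payload layout for the 3D head parameters.
+ PARAM_DIMS = {
+     "geometry": 156,  # FLAME shape/expression/pose (beta, psi, theta)
+     "albedo": 50,     # Basel-derived albedo coefficients (alpha)
+     "camera": 3,      # orthographic camera: scale s and 2D translation t
+     "lighting": 27,   # assumed: 9 spherical-harmonic bands x 3 channels
+ }
+
+ def payload_bytes(dims=PARAM_DIMS, dtype=np.float32):
+     # Total bytes if every parameter is sent as one float of `dtype`.
+     return sum(dims.values()) * np.dtype(dtype).itemsize
+
+ print(payload_bytes())  # 236 floats -> 944 bytes, within the 2.5 kB/frame
+                         # budget quoted in Figure 3
+ ```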
53
+ <graphics>
54
+
55
+ Figure 3: Baseline 3D Teleconferencing Architecture: Encoder and Decoder use a compact (2.5 kB/frame) 3D head model that represents the Sender's head using shape $\left( \beta \right)$ , expression $\left( \psi \right)$ , pose $\left( \theta \right)$ , albedo $\left( \alpha \right)$ , camera matrix $\left( c\right)$ , and lighting parameters $\left( l\right)$ . These are computed by the Encoder every frame from a single RGB image and transmitted to the Decoder. Using the Receiver's viewpoint, the neural renderer renders a view-dependent photo-realistic image of the Sender on the Receiver's 3D display.
56
+
57
+ 3D head models and their rendering are an active research area for 3D teleconferencing; thus, our OpenTeleView platform makes it easy to analyse different approaches relative to each other in a real-world, end-to-end 3D teleconferencing scenario.
58
+
59
+ § 3.1 SENDER HARDWARE
60
+
61
+ Our example implementation uses a single RGB camera (Logitech C920 Webcam HD Pro, 30 FPS, 1080p) and one computer with a GPU (NVIDIA GeForce RTX 3080) on the Sender side. The Sender side camera gives an RGB image per frame to the Encoder to perform face detection and head parameter extraction with neural networks executed on the GPU.
62
+
63
+ § 3.2 HEAD MODEL AND PERSISTENT DATA STORAGE
64
+
65
+ Our OpenTeleView platform provides a Persistent Data Storage (PDS) model for data created by processes that are not run synchronously with the frame-by-frame streaming, such as a personalized head model; the PDS can also be accessed synchronously if desired, with a corresponding potential impact on performance.
66
+
67
+ Figure 2 illustrates one of the main use cases we envision: the Encoder is generic, trained once on a large dataset, with its parameters stored in the PDS and loaded at installation time; the Decoder is personalized (to the Sender), trained on the Sender side or in an external cloud, stored in the PDS, and its network weights (354.4 MB total size) are transmitted when a connection is made.
68
+
69
+ § 3.2.1 HEAD MODEL PREDICTOR TRAINING
70
+
71
+ In our illustration, the Encoder is generic, as it is trained on a public dataset with a range of people rather than on a specific user. To show the modularity of our platform, we use either the self-supervised AutoLink [20] method or DECA [13], a pre-trained model, as a generic 3D head model predictor. DECA is trained on over 21k subjects and 2 million images from three publicly available datasets: VGGFACE2 [5], BUPT-Balancedface [49], and VoxCeleb2 [8]. The DECA model is learned in an analysis-by-synthesis way: input a 2D image $I$ , encode the image to a latent code, decode this to synthesize a 2D image ${I}_{r}$ , and minimize the difference between the synthesized image and the input.
72
+
73
+ § 3.2.2 NEURAL RENDERER TRAINING
74
+
75
+ The Decoder is personalized, and we experiment with the architectures in [20] and [51]. The former uses a UNet neural network and the latter uses a more complex deferred renderer [45] with a caching mechanism to improve speed and runtime [51] (see Section 3.4). Both are trained using a short RGB video (approximately 5 min) of the Sender. Videos are shot with a single fixed camera with the subject talking casually while performing small head motions, at a resolution of 1920×1080 at 60 FPS. The previously introduced Encoder models are used to obtain the driving motion from the Sender's talking head video; specifically, AutoLink [20] extracts 2D keypoints and DECA [13] extracts the 3D head parameters. These head parameters are passed as inputs to the neural renderer to reconstruct the encoded RGB image. Once training on this autoencoder objective is complete, the parameters of the neural renderer and the FLAME head shape parameters are stored in the PDS.
76
+
77
+ § 3.3 ENCODER-SENDER
78
+
79
+ The Encoder is a two-step process for each Sender frame to compute the Head Parameters: 1. finding the face of the Sender in the image and 2. using a pre-trained head model to compute the head parameters from the cropped Sender's face image.
80
+
81
+ § 3.3.1 STEP 1: 2D FACE TRACKING
82
+
83
+ We extend a common approach that finds a face bounding box around the Sender's face in the input image from a set of 68 2D face keypoints [18].
84
+
85
+ Previous methods [10, 11, 13, 15, 39, 47] run face detection, such as FAN [3], on every single frame, which is time-consuming and computationally heavy, leading to increased latency as 2D detection has to run before 3D reconstruction.
86
+
87
+ Instead, to achieve high-FPS, low-latency head reconstruction on videos, we utilize the high temporal coherence of video data and propose to reuse the 2D face keypoints extracted from our reconstructed 3D head model of the previous frame to draw the face bounding box of the current frame. As this can lead to misalignment for fast motions, we further approximate the movement of the keypoints using a velocity estimate from the past two frames to extrapolate the position of the current bounding box. A full face detection is performed when the bounding box displacement exceeds a threshold. This approach is robust to mispredictions and significantly reduces the time needed to detect and crop the face.
90
+
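+ A minimal sketch of this velocity-based reuse is given below; `detect_face` stands in for a full detector such as FAN, and the pixel threshold is a tunable assumption rather than the value used in any particular implementation.
+
+ ```python
+ import numpy as np
+
+ class VelocityBoxTracker:
+     """Extrapolate the face bounding box from past motion; fall back to
+     a full detector on cold start or large estimated displacement."""
+
+     def __init__(self, detect_face, displacement_threshold=40.0):
+         self.detect_face = detect_face           # slow path, e.g. FAN
+         self.threshold = displacement_threshold  # pixels (assumed value)
+         self.prev_centers = []                   # box centers of last 2 frames
+
+     def next_box(self, frame, keypoints_2d=None):
+         # keypoints_2d: 2D keypoints projected from the previous frame's
+         # reconstructed 3D head model, shape (N, 2), if available.
+         if keypoints_2d is None or len(self.prev_centers) < 2:
+             box = self.detect_face(frame)        # cold start / fallback
+         else:
+             velocity = self.prev_centers[-1] - self.prev_centers[-2]
+             if np.linalg.norm(velocity) > self.threshold:
+                 box = self.detect_face(frame)    # displacement too large
+             else:
+                 center = keypoints_2d.mean(axis=0) + velocity
+                 box = self._box_around(center, keypoints_2d)
+         self.prev_centers = (self.prev_centers + [self._center(box)])[-2:]
+         return box
+
+     @staticmethod
+     def _center(box):
+         x0, y0, x1, y1 = box
+         return np.array([(x0 + x1) / 2.0, (y0 + y1) / 2.0])
+
+     @staticmethod
+     def _box_around(center, keypoints):
+         half = (keypoints.max(axis=0) - keypoints.min(axis=0)) / 2.0
+         return (center[0] - half[0], center[1] - half[1],
+                 center[0] + half[0], center[1] + half[1])
+ ```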
91
+ Mesh vertices are projected into the image as $v = {s\Pi }\left( {M}_{i}\right) + t$ , where ${M}_{i} \in {R}^{3}$ is a vertex in $M$ , $\Pi \in {R}^{2 \times 3}$ is the orthographic 3D-2D projection matrix, and $s \in R$ and $t \in {R}^{2}$ denote isotropic scale and 2D translation, respectively. The parameters $s$ and $t$ are summarized as an orthographic camera model $c$ .
92
+
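+ Written out with numpy, this projection is a single vectorized expression (a sketch; the variable names are ours):
+
+ ```python
+ import numpy as np
+
+ def project_vertices(M, s, t):
+     # M: (N, 3) mesh vertices; Pi drops the depth axis (orthographic).
+     Pi = np.array([[1.0, 0.0, 0.0],
+                    [0.0, 1.0, 0.0]])
+     # v_i = s * Pi @ M_i + t, vectorized over all vertices.
+     return s * (M @ Pi.T) + np.asarray(t)
+
+ # e.g. project_vertices(mesh_vertices, s=1.2, t=[0.1, -0.05])
+ ```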
93
+ § 3.3.2 STEP 2: EXTRACTING HEAD PARAMETERS
94
+
95
+ With the cropped Sender face as input, a Head Parameter Extractor estimates fine-grained head parameters using a ResNet50 [19] followed by a fully connected layer, producing a latent code $e$ that, depending on the model used, consists of 2D keypoint locations $\mathbf{p} \in {R}^{32}$ and edge weights $\mathbf{w} \in {R}^{64}$ , or FLAME parameters, consisting of geometry $\left( {\beta ,\psi ,\theta }\right) \in {R}^{156}$ , albedo coefficients $\alpha \in {R}^{50}$ , camera matrix $c$ , and lighting parameters $l$ . This amounts to at most 2.5 kB/frame for encoding the 3D head model of a Sender's image. As only the time-varying pose information needs to be sent every frame, the information sent for the 3D reconstruction is substantially less than what would be needed to send a whole 3D model of the Sender, greatly reducing network transmission time.
96
+
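+ A minimal PyTorch sketch of such an extractor is shown below for the FLAME variant; the overall structure follows the description above (a ResNet50 backbone plus a fully connected layer), while the 27-dimensional lighting output and the exact output packing are our assumptions.
+
+ ```python
+ import torch
+ import torch.nn as nn
+ from torchvision.models import resnet50
+
+ class HeadParameterExtractor(nn.Module):
+     def __init__(self):
+         super().__init__()
+         self.backbone = resnet50(weights=None)
+         feat_dim = self.backbone.fc.in_features   # 2048 for ResNet50
+         self.backbone.fc = nn.Identity()          # expose pooled features
+         # 156 geometry + 50 albedo + 3 camera + 27 lighting (assumed) = 236
+         self.head = nn.Linear(feat_dim, 236)
+
+     def forward(self, cropped_face):              # (B, 3, H, W)
+         e = self.head(self.backbone(cropped_face))
+         geometry, albedo, camera, lighting = torch.split(
+             e, [156, 50, 3, 27], dim=-1)
+         return geometry, albedo, camera, lighting
+ ```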
97
+ § 3.4 DECODER-RECEIVER
98
+
99
+ The Decoder is responsible for using the Receiver’s position $p$ and parameters $e$ predicted by the Encoder to reconstruct a view-dependent RGB image of the Sender. For the simpler 2D case, the decoder is a single network. Below we explain the 3D version that includes additional, view-dependent rendering steps.
100
+
101
+ There are two main steps in the process. First, the latent code $e$ is used to reconstruct the 3D head mesh of the Sender. Second, the personalized neural renderer takes the coarse 3D head mesh, rotates it to the Receiver's position, and generates a view-dependent, photo-realistic image of the Sender to appear on the view-dependent display.
102
+
103
+ § 3.4.1 3D NEURAL HEAD RENDERER
104
+
105
+ In the 3D setting, given the estimated FLAME parameters from the Encoder, the Decoder reconstructs the FLAME 3D head mesh using linear blend skinning (LBS) on the parameters $e$ . To ensure that the head is consistently centered in the Receiver's display, we rotate the mesh to the viewpoint $p$ and subtract the midpoint of the vertices on each ear from all vertices on the mesh. One of our preliminary baselines uses the coarse albedo parameters to texture and render the mesh. However, simple texture mapping is not photorealistic. Hence, we apply deferred neural rendering and first render the 3D mesh with UV coordinates as the texture. This UV map rendering then conditions the subsequent neural renderer along with a subset of the $e$ parameters. Lastly, because the FLAME parameters are predicted from a single image, we apply a small, one-sided box filter to the pose $\left( \theta \right)$ and shape $\left( \beta \right)$ parameters during online system evaluation.
106
+
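+ The one-sided (causal) box filter can be as simple as averaging over a short history of past predictions; the window size below is an illustrative assumption.
+
+ ```python
+ from collections import deque
+ import numpy as np
+
+ class OneSidedBoxFilter:
+     """Causal box filter: average the current and past few estimates."""
+
+     def __init__(self, window=5):
+         self.history = deque(maxlen=window)  # past frames only, no lookahead
+
+     def __call__(self, params):
+         self.history.append(np.asarray(params, dtype=float))
+         return np.mean(self.history, axis=0)
+
+ pose_filter, shape_filter = OneSidedBoxFilter(), OneSidedBoxFilter()
+ # per frame: theta = pose_filter(theta); beta = shape_filter(beta)
+ ```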
107
+ § 3.4.2 CACHED 3D NEURAL RENDERER
108
+
109
+ To accommodate the low latency required for 3D teleconferencing, we use an optimized version [51] of the deferred neural renderer [45]. It is composed of two neural networks: a deep caching network that turns personalized neural textures into frame-specific neural feature maps, and a lightweight warping network that warps the feature maps cached from the previous frame.
110
+
111
+ The larger caching network can therefore be run sparingly, allowing us to reduce latency while minimally decreasing the visual quality of the generated image. On a multi-GPU machine, this method parallelizes and also increases the rendering frame rate. Note that because this neural renderer is grounded with a 3D mesh, we are able to rotate the mesh (and thereby the UV map) to perform viewpoint-dependent rendering at inference time, based on the Receiver's tracking data. Multiple viewpoints can be rendered for different display configurations, such as right/left perspectives for stereo.
112
+
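+ The control flow can be sketched as follows, with both networks as placeholders and a fixed refresh cadence standing in for whatever scheduling policy an implementation actually uses:
+
+ ```python
+ def render_loop(frame_params_stream, caching_net, warping_net,
+                 cache_interval=4):
+     """Run the heavy caching network only every `cache_interval` frames;
+     in between, warp the previously cached feature maps."""
+     cached_features = None
+     for i, frame_params in enumerate(frame_params_stream):
+         if cached_features is None or i % cache_interval == 0:
+             # Slow path: neural textures -> frame-specific feature maps.
+             cached_features = caching_net(frame_params)
+         else:
+             # Fast path: warp last features toward the current frame.
+             cached_features = warping_net(cached_features, frame_params)
+         yield cached_features
+ ```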
113
+ § 3.5 RECEIVER HARDWARE
114
+
115
+ For our proof-of-concept implementation, the Receiver side hardware includes a spherical view-dependent display [55], a computer with a GPU, and a tracking system. In our current implementation, we explore the modularity of the platform by running the Decoder and Display processes on separate computers to illustrate that the display may be a self-contained system or the Decoder may be running using cloud services. However, they can also be run on a single computer. In Section 4.2, we analyze the timings of the different system components; thus, separating them allows us to consider this particular scenario.
116
+
117
+ § 3.5.1 SPHERICAL VIEW-DEPENDENT DISPLAY + COMPUTER
118
+
119
+ We use a large spherical view-dependent display [55], also known as a fish-tank virtual reality (FTVR) display. It uses a 24-inch plexiglass spherical screen with a mosaic of 4 registered mini projectors projecting through an 18-inch diameter hole at the bottom. This particular display is well suited for showing a view-dependent rendering of a Sender's head because the spherical shape allows the Receiver to walk around the display and there are no seams. The mosaic of projectors provides a high-resolution, bright image. It has also been shown to be the most effective type of display for representing size and shape constancy, which are important for human faces [57]. Lastly, the size of the sphere is large enough that a 1:1 aspect ratio is possible for human heads, allowing for investigating whether the size of a 3D rendering of a speaker plays a role in the perceived quality of presence. The display supports both view-dependent and stereo depth cues. If such a display is not available, our system also supports rendering to a flat screen.
120
+
121
+ § 3.5.2 TRACKING SYSTEM
122
+
123
+ The tracking system provides the Receiver's position and viewing angle to the view-dependent display to achieve view-dependent rendering. The quality of view-dependent rendering is sensitive to errors in viewpoint tracking, since tracking error contributes significantly to the angular error of pixels on a spherical view-dependent display [12, 56]. For our proof-of-concept implementation, we use OptiTrack (NaturalPoint Inc., Corvallis, OR) Prime-41 cameras to capture the Receiver's position and orientation. This system uses retroreflective markers mounted on the Receiver's shutter glasses. The current tracking system has less than 0.2 mm of measurement error, and the real-time streaming application connected with Unity has less than 10 ms latency. The tracker data is used both by the Decoder and the Display Renderer (see 3.5.3). The Decoder uses the Receiver's position and orientation to render perspective-dependent images for display.
124
+
125
+ § 3.5.3 RECEIVER DISPLAY RENDER
126
+
127
+ The rendering pipeline for the spherical display [12] is implemented in Unity (Unity Technologies, San Francisco, CA). It features a two-pass rendering approach: 1. render the image from the Receiver's perspective, and 2. render the pixels on the output display. This separation enables the neural renderer to be trained display-agnostic for planar frontal views while mapping to the desired display at runtime. For the spherical display, the second pass involves a mapping from 2D projector pixels to 3D surface positions on a non-planar surface. This warping transformation is achieved by sampling the 2D image texture in a shader program and using the multiple-projector calibration matrix [55]. The same rendering pipeline also supports several different display modes, including mosaic display on the FTVR sphere, flatscreen display, and a virtual display where one can freely move around the rendered objects in a virtual scene; it is thus versatile for researchers to experiment with different view-dependent display types.
128
+
129
+ We build on top of the two-pass rendering to further integrate the neural rendering into the pipeline by adding a rotating plane to the scene that remains vertical with its normal always facing the user. The neural renderer only requires the Receiver's position, and the requested view is thereby always upright and projected onto a virtual planar image plane without distortion. The user always sees the view-corrected image based on their tracked position and the display geometry. When they move around, this image and its orientation are updated in real time to show different aspects of the reconstructed talking head through neural rendering. This technique creates a sense of viewing a 3D object while only rendering flat 2D images.
130
+
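+ The plane orientation reduces to a yaw-only billboard: only the horizontal component of the direction to the tracked viewer is used, which keeps the plane vertical. A small numpy sketch of the geometry follows, assuming a y-up coordinate convention; the Unity scene would apply the equivalent rotation to the plane's transform.
+
+ ```python
+ import numpy as np
+
+ def billboard_yaw_degrees(plane_pos, receiver_pos):
+     # Direction from the plane to the viewer, with the height component
+     # zeroed out (y-up assumed) so the plane never tilts off vertical.
+     d = (np.asarray(receiver_pos, dtype=float)
+          - np.asarray(plane_pos, dtype=float))
+     d[1] = 0.0
+     d /= np.linalg.norm(d)
+     # Yaw about the up axis that points the plane's normal at the viewer.
+     return np.degrees(np.arctan2(d[0], d[2]))
+ ```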
131
+ § 3.6 TELE-COMMUNICATION NETWORK
132
+
133
+ The goal of the telecommunication network is to provide flexibility in where the computational resources for each of the modules reside, while at the same time providing an end-to-end infrastructure that mimics real-world conditions to support stress-testing the different modules used for 3D teleconferencing. Thus, we use a WebRTC backbone for communication with a ZMQ wrapper for each of the components in the platform. These are described next.
134
+
135
+ § 3.6.1 INTERNET BACKBONE
136
+
137
+ We use the WebRTC protocol [40], using the libdatachannel [1] implementation, to negotiate a direct peer-to-peer connection between the Encoder-Sender and the Decoder-Receiver over the internet. A WebRTC UDP-configured data channel [21] facilitates the real-time transfer of 3D head parameters between the Sender and the Receiver. The 3D head parameters corresponding to a single frame are serialized using Protocol Buffers in order to be transmissible over the data channel. The UDP data channels are also used for data transfer between components and any Persistent Data Storage that is not local, as well as for sending the Tracker data to the Decoder. All the data channels are wrapped with a ZeroMQ [58] wrapper to provide a common interface for all the interprocess communication, including support for different languages.
138
+
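+ As an illustration of the common interface, a pared-down pyzmq sketch of the per-frame parameter channel is given below; it uses a plain PUB/SUB socket and raw float serialization in place of the actual WebRTC data channels and Protocol Buffers messages, and the endpoint address is arbitrary.
+
+ ```python
+ import numpy as np
+ import zmq
+
+ def run_sender(frame_params_stream, endpoint="tcp://*:5555"):
+     # Publish one serialized head-parameter vector per frame.
+     pub = zmq.Context.instance().socket(zmq.PUB)
+     pub.bind(endpoint)
+     for params in frame_params_stream:
+         pub.send(np.asarray(params, dtype=np.float32).tobytes())
+
+ def run_receiver(endpoint="tcp://localhost:5555"):
+     # Subscribe (in a separate process) and deserialize frames on arrival.
+     sub = zmq.Context.instance().socket(zmq.SUB)
+     sub.connect(endpoint)
+     sub.setsockopt_string(zmq.SUBSCRIBE, "")  # no topic filtering
+     while True:
+         yield np.frombuffer(sub.recv(), dtype=np.float32)
+ ```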
139
+ As the communication channels use UDP/IP with a ZeroMQ wrapper for all the communication interfaces, all the components of the end-to-end system can run on different machines as needed. Likewise, the interfaces between components have definitions for different language support, enabling researchers to have flexibility in using C++, Python, or other languages to implement specific algorithms. For example, in our current proof-of-concept implementation the networking is C++, and the Encoder and Decoder are implemented in Python/PyTorch.
140
+
141
+ <graphics>
142
+
143
+ Figure 4: View-dependent rendering examples at different viewpoints: the first row shows the Receiver's perspective and the second row shows the different positions of the Receiver from a fixed camera location. (a) Viewpoint to the left of the origin, seeing the right side of the Sender's face; (b) viewpoint at the origin, seeing the front of the Sender's face; (c) viewpoint to the right of the origin, seeing the left side of the Sender's face.
144
+
145
+ Wrapping the communication channels supports sending data structures seamlessly between different processes in different languages, freeing researchers to focus on their preferred tools while the infrastructure takes care of the scaffolding needed to get the end-to-end system working for analysis. This approach also ensures that components running on the same machine exchange data locally.
146
+
147
+ § 4 MODULE EVALUATION IN OPENTELEVIEW
148
+
149
+ We present results from analysis of each of our baseline modules when operating independently and as part of the end-to-end system. The intent is to illustrate that the performance analytics available within the end-to-end platform are effective for uncovering inter-dependencies between components within the overall system and help determine where performance bottlenecks come from to guide algorithm development. The experiments reported here demonstrate the utility of testing modules in the OpenTeleView framework to address limitations otherwise unseen in isolated modules. We also show that our baseline end-to-end 3D teleconferencing implementation, along with the variations used to illustrate the effects of changes to different modules, provides a good baseline for comparing future encoders, decoders, cameras, and displays.
150
+
151
+ § 4.1 MODULE EVALUATION DATASET
152
+
153
+ To illustrate evaluating the performance of our individual modules, we recorded a 1920×1080 resolution, 60 FPS, stereo-view talking head dataset (main and side views) of one woman test subject. A second view is recorded to evaluate the Decoder's viewpoint-dependent rendering capabilities. The dataset includes 5 sub-sequences used for training, validating, and testing the Decoder, a sequence of fast-moving head motions for the Encoder, and a sequence for calibrating the cameras. We will make this dataset publicly available so others can evaluate their modules on the same data.
154
+
155
+ § 4.2 BASELINE SYSTEM LATENCY
156
+
157
+ Figure 5 shows the end-to-end live transmission system pipeline with the FPS and latency of each corresponding component. The FPS results are generated by measuring the run time of each individual component. The theoretical latency is computed directly as the reciprocal of the FPS. For comparison, we also estimated the perceptual latency by computing the time difference between the same movement of a real human and the rendered image on the display. To measure this, we use another high-speed camera to capture, in the same frame, the eye-blink motion of a Sender talking and their image on the view-dependent display, and calculate the time difference between the blink motions. Using this approach, we also take into account the OS- and camera-dependent delays to get an estimate of the overall system latency that would occur in a real system. With our particular camera hardware and OS, the perceptual latency is approximately 280 ms; thus, non-encoder/decoder related elements contribute around 100 ms of latency. The additional latency in the perceptual measurement comes from the time between the eye blink and the next frame capture (half a frame delay on average), asynchronous queues, and the minimal smoothing applied to the estimated head parameters to mitigate jitter. Note that, due to the cached neural renderer, the perceptual novel-view-synthesis latency is much lower, at 35 ms, which facilitates a faithful VR experience even if the whole system communication is slower.
158
+
159
+ <table><tr><td>Pipeline Component</td><td>Camera</td><td>Encoder</td><td>ZMQ</td><td>Internet</td><td>ZMQ</td><td>Decoder</td><td>ZMQ</td><td>Display</td><td>Total</td></tr><tr><td>FPS</td><td>90</td><td>25</td><td>&gt;100</td><td>&gt;100</td><td>&gt;100</td><td>54</td><td>&gt;100</td><td>&gt;100</td><td>25</td></tr><tr><td>Latency (ms)</td><td>21</td><td>40</td><td>3</td><td>2</td><td>3</td><td>90</td><td>3</td><td>10</td><td>172</td></tr></table>
170
+
171
+ Figure 5: System Latency Breakdown: the blue-coded parts are major system components, the green-coded parts are inter-process ZMQ transmission, and the red parts are total FPS and latency. The total end-to-end latency with our computer hardware configuration is 172 ms at 25 FPS. Additional latency due to the camera interface to the OpenTeleView components depends on operating system drivers and is not included in these figures.
172
+
173
+ § 4.3 VELOCITY-BASED 2D FACE TRACKING
174
+
175
+ To evaluate the speed of the Encoder with our velocity-based 2D face tracking method, we independently test the Encoder using our recorded video of a subject moving their head quickly. The 1920×1080, 60 FPS video contains 995 frames in total. Using our velocity method, the Encoder reruns the full face detection 58 times; without the velocity method, the Encoder reruns full face detection 676 times. Thus, our simple velocity method, which predicts the next frame's face location, achieves a significant reduction in the number of times we have to rerun the time-costly face detection algorithm. From the perspective of the OpenTeleView platform's affordances for this module, the timing information and the ability to switch between recorded and live video feeds within the whole framework provide useful analytics for targeting timing bottlenecks to facilitate improving each module. In this case, we compared three different approaches that trade off face detection accuracy against the computational load affecting latency.
176
+
177
+ § 4.4 DECODER RECONSTRUCTION QUALITY
178
+
179
+ To evaluate the quality of our displayed image, we independently test our Decoder's neural renderer on the withheld test sequence of our talking head dataset (main view). We achieve a peak signal-to-noise ratio (PSNR) of 27.5 on the image from which the 3D head parameters have been estimated. Furthermore, we also test our model's ability to perform view-dependent rendering by evaluating it on the second (side) view. This is done by taking the estimated 3D head parameters from the frontal recording and rotating those corresponding to the head pose based on the rotation matrix between the main and side cameras. This is an especially difficult setting, for which our model was able to reconstruct the entire sequence with an average PSNR of 25.7. Note that, when running our model on the parameters estimated from the side view itself, we are only able to achieve a PSNR of 26.7, showing that this side view is in general more difficult to reconstruct. Qualitative results and error maps can be seen in Figure 6.
180
+
181
+ <graphics>
182
+
183
+ Figure 6: Comparison of our Decoder against the ground truth image for both main and side view examples. Examples where we perform novel-view synthesis (NVS) on the parameters from the main view are also shown.
184
+
185
+ § 4.5 OPENTELEVIEW MODULARITY
186
+
187
+ To demonstrate the modularity of the proposed platform, we also experiment with the 2D encoder and decoder introduced in [20]. In this setting, we transfer the 2D keypoint locations $\mathbf{p}$ and their edge weights $\mathbf{w}$ obtained from the encoder. These are first rasterized into a coarse mesh, which is then lifted to a full image using a UNet. The latencies for the encoder and decoder are 4 ms and 45 ms, respectively. Figure 7 shows example images using this approach. It demonstrates that the platform can support entirely different encoder and decoder networks and corresponding parameterizations (2D vs. 3D), without having to change the network communication or other parts of the framework.
188
+
189
+ § 4.6 OPENTELEVIEW INTEGRATION
190
+
191
+ While each of our models is tested and developed in isolation using recorded videos, when integrated, upstream delays in capturing and processing, such as variable camera frame rates, lead to degradation of downstream performance.
192
+
193
+ With variable frame rate camera input, we observed that the velocity-based head tracking and the warping-based neural renderer must compensate for increased differences between incoming frames. Further, the jitter associated with the incoming frames is not consistent; thus, a neural renderer may see variable time differences between frames, further challenging research that uses this approach. We illustrate how both modules are affected by changes in overall system frame rate by subsampling frames in recorded videos and measuring performance versus the input frame rate. These results are shown in Figure 8.
194
+
195
+ The ability of our OpenTeleView end-to-end platform to easily integrate different components enables isolating each component's performance within real-world scenarios. In our example implementation, we illustrate that by switching in different encoder and decoder solutions with both live and recorded video feeds, and performing careful performance analysis, the strengths and weaknesses of each component can be identified along with the inter-dependencies amongst the components. Our OpenTeleView platform thus fills a significant gap in assessing different computer vision approaches to the encoder and decoder methods, which are usually assessed only in isolation on pre-recorded datasets. Hence, our contribution enables apples-to-apples comparisons of different algorithms for 3D teleconferencing.
196
+
197
+ <graphics>
198
+
199
+ Figure 7: Example of modularity: we substituted a 2D AutoLink method [20] that conditions on 2D keypoints instead of a 3D mesh. The encoder/decoder interface makes this a simple operation, so researchers can swap in different approaches to compare performance in real-world-like end-to-end teleconferencing.
200
+
201
+ <graphics>
202
+
203
+ Figure 8: Reduced input video/image frame rates negatively impact the performance of both the velocity-based head tracking and the neural rendering.
204
+
205
+ § 5 LIMITATIONS AND FUTURE WORK
206
+
207
+ The focus of this paper is on the OpenTeleView platform rather than the specifics of the encoder/decoder pair we implemented to provide a particular baseline. In that context, even though our platform has most of the major modules implemented for end-to-end 3D teleconferencing, there are some components which we leave for future work. These include: additional analytics, such as temporal and spatial jitter measurements; additional baseline use cases, such as multi-camera and mobile displays; symmetric communication and multicast abilities; embedded, synchronized audio support rather than out-of-band audio; and parameterized input control so that the input video stream characteristics can be easily adjusted to simulate different real-world camera input statistics.
208
+
209
+ § 6 CONCLUSION
210
+
211
+ We created the OpenTeleView platform along with two baseline implementations that illustrate how end-to-end 3D teleconferencing can work and how future research on separate modules can be analyzed. The baseline 3D implementation provides a medium-fidelity teleconferencing experience using modifications of existing techniques available in the literature. The second method uses a much simpler 2D representation to illustrate the modularity and flexibility of the encoder/decoder in supporting a range of approaches researchers may investigate. The platform is intended to use off-the-shelf components for computational, camera, and display hardware along with an internet-based communication infrastructure so that it is accessible to a large range of researchers. This approach enables research on specific approaches to encode the input video and decode it to provide the view-dependent rendering needed for 3D teleconferencing to be tested and analysed in a common end-to-end platform. By doing so, research contributions on specific modules can be tested in real-world scenarios, facilitating constant innovation in 3D teleconferencing technology and leading the way to establishing this new form of remote communication.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/Gkogn48LeI/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,495 @@
1
+ # "There is no reason anybody should be using 1D anymore": Design and Evaluation of 2D Jupyter Notebooks
2
+
3
+ Category: Research
4
+
5
+ ![01963e08-3485-70c4-92b4-fed45b904948_0_234_382_1330_315_0.jpg](images/01963e08-3485-70c4-92b4-fed45b904948_0_234_382_1330_315_0.jpg)
6
+
7
+ Figure 1: From Left to Right: Finding & Comparing Results 2D Notebook, Parameter Tuning 2D Notebook, Code Comparison 2D Notebook Clip
8
+
9
+ ## Abstract
10
+
11
+ Current computational notebooks, such as Jupyter, are a popular tool for data science and analysis. However, they use a 1D list structure for cells that introduces and exacerbates user issues, such as messiness, tedious navigation, inefficient use of large screen space, performance of non-linear analyses, and presentation of non-linear narratives. To ameliorate these issues, we designed a prototype extension for Jupyter Notebooks that enables 2D organization of computational notebook cells. In this paper, we present two evaluative studies to determine whether "2D computational notebooks" provide advantages over the current computational notebook structure. From these studies, we found empirical evidence that 2D computational notebooks provide enhanced efficiency and usability. We also gathered design feedback which may inform future work. Overall, the prototype was positively received, with some users expressing a clear preference for 2D computational notebooks even at this early stage of development.
12
+
13
+ Index Terms: Human-centered computing-Human Computer Interaction (HCI); Human-centered computing-Visualization
14
+
15
+ ## 1 INTRODUCTION
16
+
17
+ Computational notebooks like Jupyter [13, 19], used to construct and present computational narratives [27, 32, 34], struggle with non-linear analyses, such as comparative analyses, and non-linear narratives [9, 32], as well as navigating longer notebooks [9], preventing and managing messiness [7, 10, 16, 22, 23, 32], and efficiently using large display spaces [9]. We suggest that part of the reason for these issues is the current 1D, top-to-bottom organization of notebook cells.
18
+
19
+ Weinman et al.'s work on Fork-It [37] showed 2D space can be helpful; they introduced forking, the temporary creation of split columns in an otherwise 1D notebook. While this work helps nonlinear analyses, it does not easily accommodate non-linear narratives, which may benefit from a persistent multiple column approach. Wang, Dai, and Edwards [36] also sought to shift computational notebooks from the current 1D structure with Stickyland, which allows users to "stick" cells to a dock that is persistently at the top of the computational notebook interface even when scrolling. Harden et al. [9] explored how users would arrange cells in 2D and found three different patterns: linear (with either split cells or split columns), multi-column, and workboard. This work demonstrated alternative organizations of cells, some of which would not be possible in the prior works mentioned; it also suggests that computational notebook users could benefit from 2D space usage for organizing notebook cells in a more flexible yet persistent manner.
20
+
21
+ This paper contributes to computational notebook research through evaluations of a 2D layout extension for computational notebooks. We focused on the following research questions:
22
+
23
+ 1. When comparing 1D and 2D layouts, which mode supports more efficient user completion of data science tasks, such as information retrieval, results comparison, parameter tuning, and code comparison?
24
+
25
+ 2. What strengths and weaknesses might 2D layouts have compared to 1D layouts?
26
+
27
+ 3. Would users find 2D layouts more usable than 1D layouts?
28
+
29
+ 4. Would users prefer to use 2D layouts for computational notebook cells?
30
+
31
+ To answer these questions, we designed a Jupyter Notebook extension that enables a 2D multi-column cell layout. We then conducted two user studies using this extension where users performed a series of tasks in both 1D and 2D layouts, followed by qualitative data gathering through surveys and, in the second study, interviews. The first study used pre-made notebooks to evaluate whether the extension enhances performance and usability, while the second study focused on creation of a 2D notebook from scratch for a data science task. We found 2D layouts provided more efficient user task performance and enhanced usability over 1D layouts. Users overwhelmingly preferred the 2D notebooks, and made use of available display space to organize notebooks such that more cells are simultaneously visible. We also noted some design challenges for 2D layouts, including managing column width in a multi-column layout.
32
+
33
+ ## 2 BACKGROUND AND RELATED WORKS
34
+
35
+ This work builds on two key areas of research: computational notebooks and Space to Think.
36
+
37
+ ### 2.1 Computational Notebooks
38
+
39
+ Computational notebooks support incremental and iterative analysis [14, 32] and computational narrative formation through interleaving code, visualizations, and text [26, 32]. However, computational notebook users face various issues and pain points [4], such as messiness [10, 17, 23, 32], dealing with non-linear analyses and narratives [32], and navigating longer notebooks [9]. These issues may be exacerbated by the current 1D structure of computational notebooks.
40
+
41
+ ![01963e08-3485-70c4-92b4-fed45b904948_1_151_140_1497_325_0.jpg](images/01963e08-3485-70c4-92b4-fed45b904948_1_151_140_1497_325_0.jpg)
42
+
43
+ Figure 2: Notebook Controls for 2D Jupyter extension
44
+
45
+ Head et al. [10] showed messiness can come from disorder, deletion, and dispersal, where disorder means run order and presentation order are different, deletion means overwriting or deleting necessary code, and dispersal means related cells are far apart. Many tools have been developed to help deal with messiness, from Head et al.'s work [10], to cell dependency graph visualization [38] to version control systems for computational notebooks [15, 16]. The 1D structure may exacerbate messiness given the looping nature of sensemaking in computational notebooks [28, 29], so 2D space usage may help minimize it.
46
+
47
+ Scrolling through a long notebook can be tedious and negatively affect various tasks like debugging and cleaning. While Google Colaboratory [8] enables jumping to different sections through a table of contents, the 1D structure can still result in tedious scrolling.
48
+
49
+ Exploration of 2D space usage by Weinman et al. [37] and Harden et al. [9] produced positive responses. Within the bounded 2D of Fork-It [37], users did more than just comparative analyses; they used the split column structure to organize code and contain messes. Harden et al.'s [9] findings corroborate these potential use cases.
50
+
51
+ ### 2.2 Space to Think
52
+
53
+ Andrews et al. [1] found large, high-resolution displays benefit sensemaking in 2 key ways through what they called "Space to Think": external memory and semantic encoding. External memory means more information can be stored on screen space instead of in one's mind, which allows physical navigation, like moving one's head, to replace virtual navigation, like scrolling or changing tabs. Semantic encoding means users can group related items spatially based on their mental model of the connection between items; in short, users can externalize their understanding onto the screen. Recent studies [5, 20, 21] have expanded this concept to the space provided by virtual and augmented reality or cited Space to Think as an influence on their design [25, 30, 31]. Kirshenbaum et al. [18] found Space to Think can also benefit collaborative meetings.
54
+
55
+ Current computational notebook systems with their 1D structures do not adequately use Space to Think without clumsy workarounds like opening the same notebook multiple times and arranging side-by-side. 2D space usage may enable Space to Think in data science tasks [9]. To this end, some recent tools, such as VisSnippets [3], Einblick [11, 33], CoCalc [12, 24, 35], and Code Bubbles [2], have begun to explore 2D layouts of cells using a whiteboard metaphor.
56
+
57
+ ## 3 DESIGN OF 2D JUPYTER NOTEBOOK EXTENSION
58
+
59
+ Harden et al. [9] found two main categories of 2D layouts for computational notebooks based on user-generated layouts: multi-column and workboard, both of which are supported by the 2D Jupyter extension we developed and evaluated; the extension can be found at https://github.com/elizabethc99/2D-Jupyter on GitHub. Multi-column is fully supported. Workboard, or more complex structures such as directed graphs and nested columns and rows, is enabled by freeform dragging of cells.
60
+
61
+ To support multi-column layouts, 2D Jupyter enables creation and deletion of columns, resizing and re-ordering of columns, adding cells to a column, and moving cells from one column to another. This is done through user interface (UI) additions, as seen in Figure 2. The Plus and Minus buttons on the main toolbar create and delete individual columns, respectively. Each column also has a toolbar at its top: the bold Plus button adds a cell to the column, the left and right arrows reorder the column in the arrow direction, and the gray box can be clicked and dragged to resize the column. Finally, cells can be dragged to another column by clicking and holding the new gray box on each cell's left side.
62
+
63
+ To enable workboard layouts, each cell can be dragged and placed outside of the columns, as seen in the freeform cell in Figure 2. More advanced workboard features, such as arrows to connect cells or other whiteboard annotations, are not yet implemented. For now, we suggest using workboard freeform cells for more ephemeral uses such as scratch space, viewing data, and other tasks not relevant to the final computational narrative.
64
+
65
+ ## 4 STUDY 1 METHODOLOGY
66
+
67
+ The goal of our first study is to measure and compare user task performance in 1D and 2D notebooks. We therefore conducted a controlled study consisting of a pre-screening questionnaire, a set of user performance tasks, and survey questions. The study design had one within-subjects variable, layout, with two treatments, 1D and 2D; and one between-subjects variable, order, with two treatments, 1D-first and 2D-first. The user tasks focused on research question 1; participants completed three task sections in both 1D and 2D. For the surveys, we focused on research questions 2-4.
68
+
69
+ ### 4.1 Recruitment and Screening
70
+
71
+ We recruited 89 participants via academic listservs of students and faculty from a large state university. Each participant completed a screening questionnaire asking whether they had experience with both Python and computational notebooks such as Jupyter. We invited 62 participants with experience in both tools to continue; 31 completed the study, with 1 of these 31 participants' data discarded due to technical issues. 16 participants, including the one whose data was discarded, were assigned to 1D-First, while the other 15 were assigned to 2D-First; this effectively led to 15 participants for each of the 1D-First and 2D-First treatments. Participants were randomly assigned to a group, with the only restriction being balancing the group numbers so that they were as equal as possible.
72
+
73
+ ### 4.2 Hardware for User Study
74
+
75
+ For the user study tasks, participants used an iMac computer with a 24-inch monitor and either an iMac mouse with a built-in trackpad for horizontal and vertical scrolling, or an external trackpad with horizontal and vertical scrolling that also had buttons for clicking. The monitor was wide enough to display 4 to 5 columns of the notebook at a time.
76
+
77
+ ### 4.3 Task Designs & Rationales
78
+
79
+ The tasks were designed to mimic common data science scenarios performed in computational notebooks. We created 6 computational notebooks (3 1D, 3 2D) for this study. Each notebook was designed for one of three task sets: Finding & Comparing Results, Parameter Tuning, and Code Comparison. Each layout (1D, 2D) and task set combo had one notebook, and each task set's notebooks were slightly different so participants could not memorize answers between layouts. However, the differences were designed to not impact difficulty between the tasks in 1D and 2D. Users had the notebooks open, one at a time, on the iMac, while the user study survey, with questions and instructions, was open on a separate laptop.
80
+
81
+ To compare 1D vs. 2D, we measured time to completion and accuracy for all tasks; we also measured the number of times and amount of time spent scrolling for the code comparison task. 16 participants started with the 1D notebooks first, and 15 participants started with 2D first; this design, along with training in the first notebook layout type for each person, helped counterbalance the study to minimize bias from repeated tasks. One 1D-First participant's data was discarded due to technical issues. Each participant took at most 1 hour to complete the study.
82
+
83
+ #### 4.3.1 Finding & Comparing Results Task
84
+
85
+ Harden et al. [9] found that users expected finding and comparing tasks to be better in 2D layouts than in 1D layouts. Thus, this task set sought to measure statistically whether such a benefit exists.
86
+
87
+ The notebooks for this task set contained COVID-19 data analysis for the USA by state and then for 5 individual states by county, as seen in the left image in Figure 1. Sections 1-3 of these notebooks had cells for imports, function definitions, and data preparation, while Sections 4-9 had cells that analyzed and visualized results for each geographic region as a scatterplot and 3 bar charts. In data science, such notebooks often result from copying-and-pasting cells for parallel analyses of different data subsets. The 1D notebook design concatenated these sections into a single long list of cells. In the 2D notebook, each of the 9 sections was separated into its own column of cells, with columns arranged left to right. This notebook design was based on common layout strategies previously observed by Harden et al. [9], where a common strategy was to organize parallel analyses in side-by-side columns to enable easy comparison.
88
+
89
+ For this task set, we included a find task, a graph comparison task, and a numerical comparison task. We did not allow participants to look over the notebook before beginning the task set.
90
+
91
+ In the find task, participants had to locate info in the notebook based on the notebook structure. The question was of the form "Which state's analysis is found between the analysis of STATE1 data and the analysis of STATE2 data?" We measured the time it took each participant to retrieve the info in 1D vs. 2D notebook layouts. The hypothesis was that spatial 2D columns would enable more rapid recognition and access to relevant notebook sections.
92
+
93
+ In the graph comparison task, participants had to compare results in several different charts throughout the notebook. The question was of the form "Out of those shown in the relevant bar charts, which county in which state, EXCLUDING the ALL STATES section, had the highest number for ATTRIBUTE of COVID-19?" We measured the time it took each participant to compare charts in 1D vs. 2D notebook layouts. The hypothesis was that the 2D column structure that aligned parallel analyses would enable faster comparison by horizontally scrolling through the corresponding charts, whereas the 1D notebook would require significant vertical scrolling and searching for each chart to compare.
94
+
95
+ Similarly, in the numerical comparison task, participants were asked a question of the form "Which section's scatterplot graph's line of best fit least/best fits the data (coefficient of determination closest to 0/1)?" The coefficient of determination was displayed above each scatterplot. We measured the time it took each participant to compare numerical results in 1D vs. 2D notebook layouts.
96
+
97
+ #### 4.3.2 Parameter Tuning Task
98
+
99
+ A common problem in data science involves testing various parameter values for an ML model. The notebooks for this task, as seen in the middle image in Figure 1, contained a K-Nearest Neighbors (KNN) algorithm used to analyze network stability data. Participants were given the following instructions: "You will be asked questions that require tuning the parameter 'k' in Section 1 and choosing the distance metric in Section 4. Only run the necessary cells (the "k-value" cell in Section 1, and the cells in Section 4) to test each possible parameter set (k-value and distance metric)." In each notebook, the cell which assigns the k-value was in the first section, while the code for calculating the distances, making predictions, and determining accuracy on the test set was in the fourth section; participants were not allowed to move cells. Participants were asked three questions in the following order, with different k-value options for 1D and 2D:
100
+
101
+ 1. Which of the following k-values produces the most accurate model with the given dataset for the Euclidean distance metric?
102
+
103
+ 2. Which of the following k-values produces the most accurate model with the given dataset for the Manhattan distance metric?
104
+
105
+ 3. Given each distance metric with its optimal k-value, which distance metric produces the most accurate model on the given dataset?
106
+
107
+ In data science endeavors, code near the beginning of a notebook can influence results later in the notebook. While it is possible to move such dispersed cells closer to each other, such re-ordering is not always feasible depending on the design of the analysis. Thus, we sought to simulate a situation in which one wants to retain the given order while continuing their analysis. The goal here is to see if 2D notebooks, with a layout where each section has its own column, can minimize the effects of dispersal [10] by making cells that are far apart in a 1D layout effectively closer on the screen in a 2D layout, and thereby lead to performance improvements. Thus, we measured how long it took participants to answer all three questions together.
108
+
109
+ #### 4.3.3 Code Comparison Task
110
+
111
+ Data scientists often need to compare the code for multiple versions of a model to understand differences. The notebooks for this task, as seen in the right image in Figure 1, contained two runs of a K-Nearest Neighbors ML algorithm with several code differences between them. Participants had to choose which items from the list of options, ordered in terms of appearance, differed between each run. The 2D notebook organized the two runs into adjacent columns. The list of differences included items such as the following:
112
+
113
+ 1. The cutoff number for the training and testing splits
114
+
115
+ 2. Different distance metrics (Manhattan, Euclidean) used
116
+
117
+ 3. The variable name for the distance matrix
118
+
119
+ 4. The value of $\mathrm{k}$ (number of nearest neighbors)
120
+
121
+ The goal of this task was to test how quickly users can find differences between two similar sets of code, which often happens when debugging model errors. Given that Harden et al. [9] found significant skepticism about the potential of 2D notebook layouts for debugging, it makes sense to test this important debugging sub-task.
122
+
125
+ Table 1: P-values for 2-Factor ANOVA by Task and Effect
126
+
127
+ <table><tr><td>Task</td><td>Order</td><td>Layout</td><td>Interaction</td><td>Improvement by 2D</td></tr><tr><td>Find</td><td><b>0.024</b></td><td>0.271</td><td>0.378</td><td>N/A</td></tr><tr><td>Graph Comparison</td><td>0.106</td><td><b>&lt;0.001</b></td><td><b>0.032</b></td><td>32%</td></tr><tr><td>Number Comparison</td><td><b>0.023</b></td><td><b>&lt;0.001</b></td><td><b>0.007</b></td><td>46%</td></tr><tr><td>Parameter Tuning</td><td>0.934</td><td><b>0.007</b></td><td>0.219</td><td>19%</td></tr><tr><td>Code Comparison</td><td>0.840</td><td><b>&lt;0.001</b></td><td><b>&lt;0.001</b></td><td>34%</td></tr></table>
128
+
129
+ Bolded values are statistically significant with a 0.05 threshold. All other values are not statistically significant.
130
+
131
+ ### 4.4 Survey Questions Design
132
+
133
+ Likert-scale questions were used at the end of both the 1D and 2D task sections, and after both sections were completed. The 5 questions at the end of the 1D and 2D task sections focused on rating each layout individually, without comparison to the other, while the 13 questions at the end focused on comparing 1D and 2D layouts; these 13 questions were largely taken from Harden et al.'s experiment [9]. After the 13 questions was a comment box where users could elaborate on any answers they gave.
134
+
135
+ The questions after each of the 1D and 2D task sections focused on perceptions of the usability of the layout on the given tasks. We compared answers between layouts to better understand whether users saw potential improvements in 2D layouts over 1D layouts.
136
+
137
+ ### 4.5 Data Analysis Process
138
+
139
+ We divided the quantitative data analysis for Study 1 into 3 areas: Efficiency Measurements, Survey Questions, and Scrolling Time.
140
+
141
+ #### 4.5.1 Efficiency Measurements
142
+
143
+ We used 2-Factor ANOVA to test whether layout (1D or 2D), order (1D First or 2D First), and the interaction between layout and order affected time to completion; significant results were followed up with Tukey's Test to determine the nature of the effects. We also compared the mean differences found by Tukey's Test to the average completion time for 1D, in the form of percent time saved.
144
+
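+ For concreteness, an analysis along these lines can be run with statsmodels as sketched below; the column names and file are hypothetical stand-ins for the measurement data.
+
+ ```python
+ import pandas as pd
+ from statsmodels.formula.api import ols
+ from statsmodels.stats.anova import anova_lm
+ from statsmodels.stats.multicomp import pairwise_tukeyhsd
+
+ # Hypothetical long-format data: one row per participant x task trial.
+ df = pd.read_csv("task_times.csv")  # columns: time, layout, order
+
+ # 2-Factor ANOVA with interaction between layout and order.
+ model = ols("time ~ C(layout) * C(order)", data=df).fit()
+ print(anova_lm(model, typ=2))
+
+ # Follow-up on a significant layout effect with Tukey's HSD.
+ print(pairwise_tukeyhsd(df["time"], df["layout"]))
+ ```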
145
+ #### 4.5.2 Survey Questions
146
+
147
+ For the Post-1D and Post-2D questions, we created and analyzed a bar chart of average rating by order and layout, and a heatmap of ratings. We also tested the statistical significance of the differences in ratings using a paired t-test and the Wilcoxon test, inspired by work by De Winter and Dodou on analyzing Likert-scale questions [6]. For the Post-Experiment questions, we made and analyzed a heatmap of ratings. Finally, we analyzed qualitative comments for themes.
148
+
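+ A minimal scipy sketch of the two significance tests on paired ratings follows; the array names are illustrative.
+
+ ```python
+ from scipy.stats import ttest_rel, wilcoxon
+
+ def compare_likert(ratings_1d, ratings_2d):
+     # Paired t-test and Wilcoxon signed-rank test on per-participant
+     # ratings of the same question under the two layouts.
+     _, t_p = ttest_rel(ratings_2d, ratings_1d)
+     _, w_p = wilcoxon(ratings_2d, ratings_1d)
+     return {"paired_t_p": t_p, "wilcoxon_p": w_p}
+ ```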
149
+ #### 4.5.3 Scrolling Time
150
+
151
+ To determine the amount of scrolling done in 1D vs. 2D, we recorded scrolling events, including the time taken to scroll, while watching the footage of each participant's Code Comparison task work in 1D and 2D. We limited events to scrolls for navigation, as opposed to micro-scrolling events that do not bring new cells into view, by only considering scrolling events that lasted at least 2 seconds. To determine scrolling endpoints, we looked for breaks between scrolls lasting at least 2 seconds; scrolling events separated by breaks of less than 2 seconds were considered as one event for the purpose of this analysis.
152
+
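+ These coding rules amount to a small interval-merging pass; a sketch, taking raw scroll intervals as (start, end) timestamps in seconds:
+
+ ```python
+ def code_scroll_events(intervals, min_duration=2.0, max_gap=2.0):
+     """Merge raw scroll intervals separated by < max_gap seconds, then
+     keep only merged events lasting at least min_duration seconds."""
+     events = []
+     for start, end in sorted(intervals):
+         if events and start - events[-1][1] < max_gap:
+             events[-1][1] = max(events[-1][1], end)  # same event: merge
+         else:
+             events.append([start, end])
+     return [(s, e) for s, e in events if e - s >= min_duration]
+
+ # e.g. code_scroll_events([(0.0, 1.0), (1.5, 4.0), (10.0, 11.0)])
+ # -> [(0.0, 4.0)]  (early scrolls merge; the short late scroll is dropped)
+ ```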
153
+ ## 5 STUDY 1 RESULTS
154
+
155
+ We divide our results into 4 areas: User Interaction Strategies, Efficiency Measurements, Survey Questions, and Scrolling Time.
156
+
157
+ ### 5.1 User Interaction Strategies
158
+
159
+ Our observations of user behaviors with 1D and 2D layouts, divided by task notebook, are summarized here.
160
+
161
+ #### 5.1.1 Finding & Comparing Results Task
162
+
163
+ In 1D, all users started by scrolling down through the notebook to answer the Find question (which state's section was between two other states' sections) until they found the answer. Then, they scrolled through Sections 5 through 9 to answer the graphical Comparison question (which county in which state had the highest value for a particular variable) and compared the bar chart results and axes, which was sufficient to find the highest value. Some users, because they forgot a previous value or wanted to verify their memory, would scroll back to earlier results, sometimes multiple times, before submitting an answer. A couple of users took notes on paper to avoid this issue. For the numerical comparison question, users repeated the process from the first Comparison question, this time with Sections 4 through 9.
164
+
165
+ In 2D, all users started by scrolling to the right to answer the Find question. Since the columns for the relevant sections (4-9) were fairly well aligned, as seen in Figure 1, this mitigated the need to perform vertical scrolling except between questions. Users scrolled less distance in 2D due to more efficient use of space, with 1 column representing 1 section. Then, to answer the 2 Comparison questions, all users used physical navigation (e.g., head movement) with less scrolling needed, since the screen could show 4 columns. The efficient, well-organized use of 2D also led users to perform less backtracking, if any, and eliminated the need to take notes on paper.
166
+
167
+ #### 5.1.2 Parameter Tuning Task
168
+
169
+ In 1D, all users repeatedly scrolled up and down to get results for different parameter combinations (k-value and distance metric). Sometimes users scrolled past the cells they were looking for and thus did additional scrolling to correct their focus. All users took notes on paper so they could remember and compare results.
170
+
171
+ In 2D, much smaller scrolls were needed to get from the first column, where the main parameter was, to the fourth column, where results were calculated. Given the much smaller scrolling distance, scrolls were quicker and did not result in scrolling too far nearly as often. All users took notes on paper in 2D as well.
172
+
173
+ #### 5.1.3 Code Comparison Task
174
+
175
+ In 1D, all users scrolled up and down to find code differences in the two different analyses; users examined the code in a cell in the first analysis, then scrolled down to examine the code in the corresponding cell in the second analysis before scrolling back up again to look at the next cell. This process was repeated until all potential differences were checked for. Since users were given a list of potential differences in order of appearance, they knew what to look for; this could have resulted in less forgetting (and thus less re-scrolling) than might otherwise happen.
176
+
177
+ In 2D, the two analyses were nearly horizontally aligned, so all users used physical navigation to find differences instead of virtual navigation; scrolling was used to go further into the notebook rather than to spot differences. As expected, in 2D users scrolled much less than they did in 1D due to the use of physical navigation and externalized memory on the screen.
178
+
179
+ ![01963e08-3485-70c4-92b4-fed45b904948_4_156_153_710_404_0.jpg](images/01963e08-3485-70c4-92b4-fed45b904948_4_156_153_710_404_0.jpg)
180
+
181
+ Figure 3: A bar chart showing average time to completion by task and layout in seconds.
182
+
183
+ ### 5.2 Efficiency Measurements
184
+
185
+ As seen in Table 1 and summarized in Figure 3, we found the layout (1D or 2D) was statistically significant for all tasks except the Find task; this exception may be because it was a "cold find", performed without prior knowledge of the notebook, and thus unable to exploit the benefits of Space to Think. The interaction between layout and order (1D First or 2D First) was significant for the comparison tasks. Some tasks benefited from repetition, with performance improving in whichever layout came second. For the Graph and Number Comparison tasks, 1D seemed to benefit more from order, while 2D performance was more stable.
186
+
187
+ Tukey's test showed the 2D layout produced statistically significant improvements to efficiency, summarized in Table 1; these improvements ranged from roughly 20-50% reductions in time. These results likely reflect faster navigation of numerous code cells during the data science tasks when the cells are organized into columns.
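+
+ As a sketch of how such an analysis can be computed (file and column names are placeholders, and this simplified version treats observations as independent, whereas the study's design has a repeated-measures structure):
+
+ ```python
+ # Two-way ANOVA on completion time with layout and order as factors,
+ # followed by Tukey's HSD across layouts. Placeholder log format:
+ # one row per participant x task x layout.
+ import pandas as pd
+ import statsmodels.formula.api as smf
+ from statsmodels.stats.anova import anova_lm
+ from statsmodels.stats.multicomp import pairwise_tukeyhsd
+
+ df = pd.read_csv("completion_times.csv")  # participant, task, layout, order, time
+
+ model = smf.ols("time ~ C(layout) * C(order)", data=df).fit()
+ print(anova_lm(model, typ=2))  # main effects and the layout x order interaction
+
+ print(pairwise_tukeyhsd(df["time"], df["layout"], alpha=0.05))
+ ```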
188
+
189
+ ### 5.3 Survey Questions
190
+
191
+ We divide the survey question results into 3 areas: Post-1D & Post-2D Questions, Post-Tasks Questions, and Qualitative Comments.
192
+
193
+ Table 2: Post-2D minus Post-1D Average Differences in Rating
194
+
195
+ <table><tr><td>Question</td><td>Mean</td><td>Median</td></tr><tr><td>Easy to Navigate</td><td>1.87</td><td>2.00</td></tr><tr><td>Quickly Find Info</td><td>1.80</td><td>2.00</td></tr><tr><td>Easy to Compare Graphs</td><td>2.87</td><td>3.00</td></tr><tr><td>Easy to Compare Numbers</td><td>2.83</td><td>3.00</td></tr><tr><td>Easy to Compare Code</td><td>3.57</td><td>4.00</td></tr></table>
196
+
197
+ Bolded values are statistically significant with a 0.05 threshold for both paired t-test and Wilcoxon. Positive values indicate 2D is considered better.
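+
+ Both tests named in the note above are available in SciPy; a minimal sketch on placeholder ratings (the study's per-participant ratings are not reproduced here):
+
+ ```python
+ # Paired t-test and Wilcoxon signed-rank test on matched Post-1D/Post-2D
+ # ratings for one survey question. Values are placeholders, not study data.
+ from scipy.stats import ttest_rel, wilcoxon
+
+ post_1d = [1, -2, 0, 2, -1, -3, 1, 0, -2, 1]  # 7-point scale coded -3..3
+ post_2d = [3, 2, 2, 3, 1, 2, 3, 2, 1, 3]
+
+ print(ttest_rel(post_2d, post_1d))  # parametric
+ print(wilcoxon(post_2d, post_1d))   # non-parametric
+ ```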
198
+
199
+ #### 5.3.1 Post-1D & Post-2D Questions
200
+
201
+ As seen in the bar chart in Figure 4, the heatmap in Figure 5, and the results of Table 2, user impressions of the usability of the 2D layouts were significantly more positive than of the 1D layouts on all metrics. Users rated 2D approximately 2-4 points higher (on a 7-point Likert scale) than 1D on each metric. Users were nearly unanimously positive in rating 2D, and more evenly divided between positive and negative for 1D. Two participants gave the three negative ratings for 2D in Figure 5; one saw clutter in 2D notebooks as a potential issue, and the other felt the 2D notebooks could be improved by snapping cells next to each other to ensure proper alignment of related cells.
202
+
203
+ Interestingly, as seen in Figure 4, participants exposed to the 2D layout before the 1D layout rated the 1D layout as significantly worse on the usability questions asked. Exposure to the 2D layout thus appears to make the 1D layout seem less usable.
204
+
205
+ ![01963e08-3485-70c4-92b4-fed45b904948_4_929_152_713_388_0.jpg](images/01963e08-3485-70c4-92b4-fed45b904948_4_929_152_713_388_0.jpg)
206
+
207
+ Figure 4: A bar chart comparing the mean ratings for the Post-1D and Post-2D questions; positive values indicate agreement with the sentiment, while negative values indicate disagreement.
208
+
209
+ <table><tr><td colspan="2">Item</td><td colspan="7">Rating</td></tr><tr><td>Layout</td><td>Question</td><td>Strongly Agree</td><td>Agree</td><td>Agree a little</td><td>Neutral</td><td>Disagree a little</td><td>Disagree</td><td>Strongly Disagree</td></tr><tr><td rowspan="5">1D</td><td>Easy to Navigate</td><td>4</td><td>8</td><td>6</td><td>1</td><td>7</td><td>3</td><td>1</td></tr><tr><td>Quickly Find Info</td><td>4</td><td>5</td><td>7</td><td>6</td><td>5</td><td>2</td><td>1</td></tr><tr><td>Easy to Compare Graphs</td><td>3</td><td>4</td><td>5</td><td>1</td><td>5</td><td>10</td><td>2</td></tr><tr><td>Easy to Compare Numbers</td><td>1</td><td>7</td><td>3</td><td>0</td><td>8</td><td>7</td><td>4</td></tr><tr><td>Easy to Compare Code</td><td>0</td><td>4</td><td>5</td><td>1</td><td>5</td><td>8</td><td>7</td></tr><tr><td rowspan="5">2D</td><td>Easy to Navigate</td><td>18</td><td>8</td><td>4</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>Quickly Find Info</td><td>16</td><td>13</td><td>0</td><td>0</td><td>0</td><td>0</td><td>1</td></tr><tr><td>Easy to Compare Graphs</td><td>19</td><td>9</td><td>2</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>Easy to Compare Numbers</td><td>19</td><td>6</td><td>4</td><td>0</td><td>0</td><td>1</td><td>0</td></tr><tr><td>Easy to Compare Code</td><td>24</td><td>4</td><td>0</td><td>1</td><td>0</td><td>1</td><td>0</td></tr></table>
210
+
211
+ Figure 5: A heatmap comparing the ratings for the Post-1D and Post-2D questions.
212
+
213
+ #### 5.3.2 Post-Tasks Questions
214
+
215
+ As seen in the Figure 6 heatmap, when explicitly asked to compare their experiences with the two layouts, participants overwhelmingly viewed 2D as more effective for common data science tasks, especially comparisons, and felt the 2D layout improved their performance. They also agreed that 2D made better use of screen space, and that this was key to their success. Furthermore, most participants seemed interested in using 2D layouts instead of 1D layouts, with only one participant expressing neutrality.
216
+
217
+ One curious result is that participants expressed skepticism about 2D layouts being better for presenting computational narratives and collaborating with others. Harden et al. [9] found the opposite: debugging, analysis and development, and navigation were seen as weaknesses of 2D layouts, while presentation and collaboration were seen as strengths. This difference may be due to the tasks users performed in each study; presentation was key for Harden et al. [9], whereas debugging and comparison were key in this study.
218
+
219
+ <table><tr><td colspan="2">Item</td><td colspan="7">Rating</td></tr><tr><td>Category</td><td>Question</td><td>Strongly Agree</td><td>Agree</td><td>Agree a little</td><td>Neutral</td><td>Disagree a little</td><td>Disagree</td><td>Strongly Disagree</td></tr><tr><td rowspan="9">2D Better than 1D at &lt;task&gt;</td><td>Navigate</td><td>15</td><td>7</td><td>2</td><td>4</td><td>2</td><td>0</td><td>0</td></tr><tr><td>Locate Items</td><td>17</td><td>8</td><td>4</td><td>1</td><td>0</td><td>0</td><td>0</td></tr><tr><td>Organize & Clean</td><td>10</td><td>10</td><td>5</td><td>3</td><td>2</td><td>0</td><td>0</td></tr><tr><td>Present</td><td>9</td><td>7</td><td>3</td><td>7</td><td>3</td><td>1</td><td>0</td></tr><tr><td>Explore & Prep Data</td><td>12</td><td>12</td><td>5</td><td>0</td><td>1</td><td>0</td><td>0</td></tr><tr><td>Analyze & Develop</td><td>14</td><td>9</td><td>1</td><td>5</td><td>1</td><td>0</td><td>0</td></tr><tr><td>Debug code</td><td>12</td><td>11</td><td>4</td><td>1</td><td>2</td><td>0</td><td>0</td></tr><tr><td>Compare results</td><td>27</td><td>3</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>Collaborate</td><td>8</td><td>8</td><td>3</td><td>9</td><td>2</td><td>0</td><td>0</td></tr><tr><td rowspan="4">Statements about 2D Layout</td><td>2D Spatial Layout improved Performance</td><td>19</td><td>7</td><td>4</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>More Cells on Screen in 2D Improved Performance</td><td>18</td><td>8</td><td>3</td><td>0</td><td>1</td><td>0</td><td>0</td></tr><tr><td>2D Layouts Better Used Screen Space</td><td>21</td><td>6</td><td>3</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>Would Use 2D instead of 1D</td><td>17</td><td>10</td><td>2</td><td>1</td><td>0</td><td>0</td><td>0</td></tr></table>
220
+
221
+ Figure 6: A heatmap visualizing the ratings for the Post-Tasks questions.
222
+
223
+ Table 3: Qualitative Themes in Study 1 Survey
224
+
225
+ <table><tr><td>Theme</td><td>Sample Quote</td><td>Number of Participants</td></tr><tr><td>Positive Comments on 2D</td><td/><td>20</td></tr><tr><td>Better Comparison in 2D</td><td>"The 2d layout seems like a solid choice for a lot of analysis applications where you want to do similar but slightly different processes and compare the results."</td><td>7</td></tr><tr><td>Better Navigation in 2D</td><td>"I liked having no more scrolling! It was more intuitive and easier to compare side-by-side sections compared to having to scroll so much. I spent so much time scrolling in the 1D notebook that I forgot what I had looked at previously."</td><td>6</td></tr><tr><td>Practice with 2D Would Help Improve Performance</td><td>"This was my first experience with 2D notebooks after extensive use of 1D notebooks, so the advantages would be compounded given more time to familiarize myself."</td><td>3</td></tr><tr><td>2D is Better Than 1D</td><td>"There is no reason anybody should be using 1D anymore."</td><td>2</td></tr><tr><td>Other</td><td>"I really found it easy to look through all the results from different experiments. We always have to run multiple iteration with different parameter to calculate results and so 2D makes it very easy to see our progress in the notebook and also can be easily inferred."</td><td>2</td></tr><tr><td>Thoughtful Feedback on 2D</td><td/><td>6</td></tr><tr><td>Column Width & Amount</td><td>"Putting too many columns in one screen caused little confusions and potentially increase the number of scrolls."</td><td>2</td></tr><tr><td>Arrow Key Navigation</td><td>"I found the 2D notebooks were more quick to navigate, but it was easier to navigate the 1D notebook using keys rather than the mouse, which might have been a little bit faster."</td><td>1</td></tr><tr><td>Cluttering Screen Space</td><td>"I believe one of the only things I might do in a 2D notebook that wouldn't be as easy would be displaying some visuals, as the layout would make them smaller, along with the text. Also having two visuals right next to each other might be seen as cluttered."</td><td>1</td></tr><tr><td>Use with Lower Resolutions</td><td>"The 2D notebooks were definitely easier to use, but for some tasks/cases (such as presenting on a monitor which may be low-resolution, or collaborating with a colleague who has a low-resolution monitor) that might change."</td><td>1</td></tr><tr><td>Setup Time</td><td>"The only downside I could see is it taking slightly more time to initially set up but other than that it seems like a good option to have."</td><td>1</td></tr><tr><td>Skepticism about 2D</td><td/><td>2</td></tr><tr><td>Presentation Skepticism</td><td>"[1D] looks more clean if you were to present something to another person."</td><td>1</td></tr><tr><td>Debugging & Dev Skepticism</td><td>"For development and collaboration the linear 1d notebook would be easier to debug."</td><td>1</td></tr></table>
226
+
227
+ Table 4: Scroll Event Analysis Totals Across All Participants
228
+
229
+ <table><tr><td>Measure</td><td>1D Layout</td><td>2D Layout</td></tr><tr><td>Sum of Scroll Event Times</td><td>2071 seconds</td><td>561 seconds</td></tr><tr><td>Count of Scroll Events</td><td>410 events</td><td>195 events</td></tr><tr><td>Mean Time per Scroll Event</td><td>5.05 seconds</td><td>2.88 seconds</td></tr><tr><td>Median Time per Scroll Event</td><td>4 seconds</td><td>2 seconds</td></tr></table>
230
+
231
+ #### 5.3.3 Qualitative Comments
232
+
233
+ Of the 27 participants who left a qualitative comment on the survey, 20 expressed positivity about 2D layouts, while only 2 expressed that they might still prefer 1D notebooks for any task. 2 participants went so far as to suggest that 2D layouts make 1D obsolete. 6 participants also left thoughtful feedback that may inform the design of future 2D computational notebooks. Several comments pointed out the link between memory and navigation: more time spent scrolling in 1D led to more forgetting of information important to the task. The results are summarized in Table 3 with the themes found, a sample quote for each sub-theme, and the number of comments matching each theme.
234
+
235
+ ### 5.4 Scrolling Time
236
+
237
+ For the Code Comparison task, we found participants scrolled more times and spent more time scrolling in the 1D layout, as seen in Table 4. Each scroll event in 1D also tended to be longer than those in 2D. Given the differences in typical user interactions described earlier, specifically the elimination of the need to scroll and the reduction of scrolling distances for comparison, it makes sense that the 2D layout would involve much less scrolling time and far fewer scroll events for this task. This confirms that reducing scroll navigation is an important factor behind the faster performance of 2D, likely because multiple columns bring cells nearer to each other and fit more cells on the screen simultaneously.
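+
+ The Table 4 measures are simple aggregates over a scroll-event log. A sketch of how they could be computed, assuming a log with one row per scroll event (the column names are assumptions about the logging format):
+
+ ```python
+ # Aggregate scroll events into the Table 4 measures, per layout.
+ import pandas as pd
+
+ events = pd.read_csv("scroll_events.csv")  # participant, layout, duration_s
+
+ summary = events.groupby("layout")["duration_s"].agg(
+     total_time="sum", event_count="count", mean_time="mean", median_time="median"
+ )
+ print(summary)
+ ```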
238
+
239
+ ## 6 Study 2 Methodology
240
+
241
+ The second study focused on understanding how users would utilize the 2D space when starting nearly from scratch, as well as evaluating the longitudinal usability of the 2D Jupyter extension for writing code in a more ecologically valid setting. It consisted of a main task, interview, and a survey.
242
+
243
+ ### 6.1 Recruitment
244
+
245
+ Participants were recruited from undergraduate and graduate computer science classes at a large state university, and were invited to participate if they had prior experience using Python and computational notebooks. In total, 9 participants completed the study.
246
+
247
+ ### 6.2 Hardware Used in Study
248
+
249
+ For this study, participants used their personal computers to complete the task. Most participants conducted the task on a laptop using the laptop display and built-in trackpad. Two participants connected their laptops to a 64-inch 4K monitor and extended their displays to the larger screen, but continued to use the laptop's built-in trackpad for navigation and scrolling.
250
+
251
+ ### 6.3 Task Design
252
+
253
+ We designed a data analysis task that would allow participants to utilize all of the 2D Jupyter extension's features. Participants were given a Jupyter notebook file containing task instructions, initial library imports, and code loading two datasets: a COVID dataset containing the number of cases and deaths in each county in the US, and a demographics dataset containing the population of each US county as of the 2020 census.
254
+
255
+ #### 6.3.1 Original Task
256
+
257
+ The first five participants were instructed to use 2D Jupyter to answer the following questions:
258
+
259
+ 1. How do the most recent deaths per case compare between all the counties of Virginia?
260
+
261
+ 2. Do the deaths per case in each county of Virginia correlate to the population density?
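+
+ A minimal pandas sketch of the kind of analysis these two questions call for; the file and column names are assumptions, and computing population density additionally requires county land area, which we assume is available:
+
+ ```python
+ # Deaths per case for Virginia counties, and its correlation with
+ # population density. File and column names are illustrative assumptions.
+ import pandas as pd
+
+ covid = pd.read_csv("us_counties_covid.csv")   # county, state, cases, deaths
+ demo = pd.read_csv("county_demographics.csv")  # county, state, population, land_area
+
+ va = covid[covid["state"] == "Virginia"].merge(demo, on=["county", "state"])
+ va["deaths_per_case"] = va["deaths"] / va["cases"]
+ va["pop_density"] = va["population"] / va["land_area"]
+
+ # Question 1: compare the most recent deaths per case across counties.
+ print(va.sort_values("deaths_per_case", ascending=False)[["county", "deaths_per_case"]])
+
+ # Question 2: do deaths per case correlate with population density?
+ print(va["deaths_per_case"].corr(va["pop_density"]))
+ ```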
262
+
263
+ #### 6.3.2 Modified Task
264
+
265
+ For the remaining four participants, the data analysis task was modified to introduce more complexity and encourage more flexibility in the use of the 2D environment. These participants were given the following instructions:
266
+
267
+ You are being asked to analyze COVID data for three states - Virginia, Texas and Illinois. Prepare a notebook for presentation with the following:
268
+
269
+ 1. Create the following charts for each state:
270
+
271
+ - Bar chart showing top 10 counties with highest cases
272
+
273
+ - Bar chart showing top 10 counties with highest deaths
274
+
275
+ - Bar chart showing top 10 counties with highest deaths per case
276
+
277
+ - Scatterplot showing the correlation between cases and deaths by county (include the correlation coefficient)
278
+
279
+ - Bar chart showing top 10 counties with the highest number of cases relative to population
280
+
281
+ - Bar chart showing top 10 counties with the highest number of deaths relative to population.
282
+
283
+ 2. Using only the charts you have created in part 1, answer the following questions:
284
+
285
+ (a) Which state had the county with the highest number of cases?
286
+
287
+ (b) Which state had the county with the highest number of deaths?
288
+
289
+ (c) Which state had the county with the highest number of deaths per case?
290
+
291
+ (d) Which state had the highest correlation between cases and deaths?
292
+
293
+ (e) Which state had the county with the highest number of cases relative to population density?
294
+
295
+ (f) Which state had the county with the highest number of deaths relative to population density?
296
+
297
+ (g) How many counties with the top 10 deaths relative to population were also in the top 10 deaths per case for that state?
298
+
299
+ 3. Prepare the notebook for presentation of your findings.
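+
+ As an illustration of the part-1 charts, here is a sketch of one of them (top 10 counties by cases for one state), under the same assumed data layout as the sketch in Section 6.3.1:
+
+ ```python
+ # One of the part-1 charts: top 10 Texas counties by total cases.
+ import pandas as pd
+ import matplotlib.pyplot as plt
+
+ covid = pd.read_csv("us_counties_covid.csv")  # county, state, cases, deaths
+
+ top10 = covid[covid["state"] == "Texas"].nlargest(10, "cases")
+ top10.plot.bar(x="county", y="cases", legend=False,
+                title="Texas: top 10 counties by cases")
+ plt.tight_layout()
+ plt.show()
+ ```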
300
+
301
+ For all participants, an initial meeting was scheduled to give an overview of the 2D Jupyter extension and to go over the data analysis task. Each participant was allowed to complete the task at their own pace over the course of 2 weeks. An interview session was scheduled after each participant had completed the task, in which they were asked questions about their experiences using 2D Jupyter. At the end of the interview, each person was asked to complete a survey.
302
+
303
+ ### 6.4 Interview and Survey Questions
304
+
305
+ After completing the data analysis tasks, participants were interviewed about their experience using 2D Jupyter. Interview questions focused on understanding how the participant used the 2D layout and what features they utilized. Additionally, each participant was asked to share their opinion on any advantages or disadvantages that 2D notebooks had compared to traditional 1D notebooks. The questions asked during the interview included:
306
+
307
+ 1. What was your overall strategy for using the 2D environment?
308
+
309
+ 2. What features of the 2D notebook did you utilize?
310
+
311
+ 3. Are there any features that you wish you had?
312
+
313
+ 4. Were there any difficulties in using the 2D notebook during your data analysis?
314
+
315
+ 5. Did the 2D environment provide any advantages for this task as compared to a 1D notebook?
316
+
317
+ 6. Did the 2D environment provide any disadvantages for this task as compared to a 1D notebook?
318
+
319
+ A survey was also given to participants after the interview session; it consisted mainly of Likert-scale questions. The first four questions of the survey focused on the benefits of a 2D layout in completing the main parts of a data analysis task. The next three questions evaluated the usability of 2D Jupyter. Finally, the survey included two short-answer questions to allow users to provide any suggestions and comments they had regarding their overall experience.
320
+
321
+ ## 7 Study 2 Results
322
+
323
+ Results of this study are primarily qualitative. A summary of these results can be found in Table 5 with the common themes, a sample quote for each theme, and the number of participants who made comments matching the theme.
324
+
325
+ ### 7.1 Strategies for Using the 2D Environment
326
+
327
+ For the original data analysis task, we found two main strategies for using 2D space. The first strategy, which 3 participants used, was to use a separate column for each question they were asked to answer. Each column contained the entirety of the analysis needed to answer the question, with the exception of one participant who used two columns to answer the second question to reduce the amount of vertical scrolling needed to view the entire notebook. The second strategy, used by one participant, was to use the columns to separate the steps of the data science workflow, such as data pre-processing, data exploration, and so on. Each column was treated as a new section of the overall notebook.
328
+
329
+ For the modified data analysis task, each participant had a different strategy for using the 2D space. One participant used the columns as sections of their notebook, creating a new column when they began working on a new data science subtask. Another participant used columns to reduce scrolling and only created a new column when the vertical length of the page became too long. One participant used a single column of cells alongside a single markdown scratch cell containing the task instructions; they used the freeform cell placement ability to move the markdown cell down the page as the page became longer. The last participant created only two columns and placed cells side by side when they wanted to reference code for reuse or to compare visualizations.
330
+
331
+ Finally, one participant used 2D Jupyter for their own project rather than the given data analysis task. This participant was a student in an artificial intelligence class and was working on a project to build their own AI model that could play a game. This participant primarily used the freeform scratch cell feature of the extension to test parameters for their model, rather than using multiple columns.
332
+
333
+ Table 5: Qualitative Themes in Study 2 Interviews and Survey
334
+
335
+ <table><tr><td>Theme</td><td>Sample Quote</td><td>Number of Participants</td></tr><tr><td>Advantages of 2D</td><td/><td>5</td></tr><tr><td>Better navigation</td><td>"It was easier for me to find the exact cell that I was looking for."</td><td>2</td></tr><tr><td>Better organization</td><td>"I don't know that there's any extra challenges from a 2D environment...I think it's strictly better organizationally"</td><td>1</td></tr><tr><td>Ease of comparisons</td><td>"...when I have to compare two data frames...side by side that's really useful."</td><td>2</td></tr><tr><td>Disadvantages of 2D</td><td/><td>2</td></tr><tr><td>Viewing on Small Screens</td><td>"...the major disadvantage is all the [horizontal] scrolling that you have to do."</td><td>1</td></tr><tr><td>Cluttered Look</td><td>"...it can look kind of cluttered sometimes, like it can be maybe a little overwhelming..."</td><td>1</td></tr><tr><td>Usability Feedback</td><td/><td>8</td></tr><tr><td>Column Resizing</td><td>"...if it was possible to resize it directly from [the middle of the column] instead of having to go up and resize, that would be good."</td><td>2</td></tr><tr><td>Column Scrolling</td><td>"I would like each columns to have their own [independent] scrolling area"</td><td>1</td></tr><tr><td>Easy to Learn</td><td>"...after that that small little learning curve, I think everything else was... super straightforward"</td><td>2</td></tr><tr><td>Opportunities</td><td>"I don't see there being like any sort of disadvantage or any type of limitation that 2D has compared to 1D. If anything... the opportunities are endless."</td><td>3</td></tr></table>
336
+
337
+ Table 6: Number of Columns Used by Participants
338
+
339
+ <table><tr><td>Number of Columns Used</td><td>Number of Participants</td></tr><tr><td>1</td><td>2</td></tr><tr><td>2</td><td>1</td></tr><tr><td>3</td><td>2</td></tr><tr><td>4</td><td>2</td></tr><tr><td>6</td><td>1 (4K screen)</td></tr><tr><td>10</td><td>1 (4K screen)</td></tr></table>
340
+
341
+ Table 6 shows the number of columns used by participants. 2 participants created 1 column of cells alongside a freeform scratch cell that they moved around the notebook outside the column as they worked. One participant used 2 columns, primarily using the second column to place cells side-by-side for referencing code or comparing visualizations. Most participants used 3-4 columns to complete the data analysis task. The 2 participants who used the large 4K display created the most columns, using 6 and 10 columns in their completed notebooks.
342
+
343
+ ### 7.2 Advantages of 2D Over 1D
344
+
345
+ Participants found 2D notebooks had several advantages over 1D notebooks. 5 participants noted that referencing other cells was easier in the 2D environment and reduced the amount of scrolling needed while developing the notebook. Additionally, they found comparing data or charts easier in the 2D environment. 3 participants said the 2D environment made it easier to keep track of cells. 2 participants liked that they could view more of their code at once.
346
+
347
+ ### 7.3 Disadvantages of 2D Compared to 1D
348
+
349
+ Participants also found several disadvantages of 2D notebooks in comparison to 1D. 2 participants noted that smaller screens may make it more difficult to navigate a notebook with several columns, due to requiring both vertical and horizontal scrolling to access cells or requiring horizontal scrolling of code in very narrow columns. 2 participants suggested a larger notebook requires the user to maintain a good mental map of their layout in order not to get lost while navigating the 2D environment. One participant was initially confused about whether separate columns were operating with separate kernels. Finally, one participant noted the extension does not support exporting the 2D layout to another file format, such as HTML or PDF, making sharing 2D notebooks difficult.
350
+
351
+ ### 7.4 Suggestions and Improvements
352
+
353
+ Participants had the opportunity throughout the study to provide comments and suggestions on the 2D extension. Primarily, participants wanted shortcut access to the new toolbar controls. For example, multiple participants wanted to add code cells from anywhere in the notebook, without needing to use the toolbars at the top of the columns. Other participants wanted to resize the columns without needing to scroll to the top of the column to find the resize controller. Two participants suggested adding the ability to independently vertically scroll through a column while keeping the rest of the notebook static. One participant wanted to be able to concurrently run cells placed side-by-side without having to run each cell individually.
354
+
355
+ Several participants found that orienting themselves in 2D space was somewhat challenging and provided suggestions for improvement. One participant suggested adding a mini-map at the bottom corner of the screen to show their location within the overall notebook. Other participants suggested labeling each cell with a row and a column number, similar to how Excel spreadsheet cells are labeled.
356
+
357
+ ### 7.5 Survey Questions
358
+
359
+ All 9 participants in Study 2 were asked to complete the survey, but 2 skipped the survey questions, resulting in 7 total responses. The results of this survey are shown in Figure 7.
360
+
361
+ The heatmap shows participants generally viewed 2D notebooks positively. When asked if the 2D layout was beneficial in completing common data analysis tasks, most participants agreed or strongly agreed with the statements. In terms of usability, all participants agreed that it was easy to understand how to use the 2D extension. Additionally, most participants agreed that it was easy to navigate in the 2D layout. When asked if they would prefer using the 2D extension over the traditional 1D environment, all participants were either neutral or agreed with the statement.
362
+
363
+ ## 8 Discussion
364
+
365
+ ### 8.1 Task Efficiency Benefits
366
+
367
+ 2D computational notebook layouts improve task efficiency by reducing the amount of scrolling necessary and shortening the scrolls that remain. As seen in Study 1, 2D layouts provided statistically significant reductions in time to completion overall, both when 1D was first and when 2D was first. The lack of statistical significance for the comparison tasks when 2D was first, and when comparing the second layout in each condition, suggests that the practice effect of repeating similar tasks in the second layout masked the effect of the layout. Given how much less scrolling was done in 2D in terms of total scrolling time, number of scroll events, and average scroll duration, per Study 1's Scrolling Time analysis, combined with the time-to-completion results, 2D layouts clearly provide efficiency benefits.
368
+
369
+ <table><tr><td>Statement</td><td>Strongly Agree</td><td>Agree</td><td>Neither agree nor disagree</td><td>Disagree</td><td>Strongly Disagree</td></tr><tr><td>I found the use of the 2D extension to be beneficial in organizing my notebook.</td><td>2</td><td>3</td><td>1</td><td>0</td><td>1</td></tr><tr><td>I found the use of the 2D extension to be beneficial in data processing.</td><td>1</td><td>2</td><td>3</td><td>0</td><td>1</td></tr><tr><td>I found the use of the 2D extension to be beneficial in creating visualizations.</td><td>3</td><td>2</td><td>1</td><td>0</td><td>1</td></tr><tr><td>I found the use of the 2D extension to be beneficial in debugging my code.</td><td>0</td><td>4</td><td>2</td><td>0</td><td>1</td></tr><tr><td>It was easy to understand how to use the 2D extension.</td><td>2</td><td>5</td><td>0</td><td>0</td><td>0</td></tr><tr><td>It was easy to navigate in the 2D notebook.</td><td>0</td><td>5</td><td>1</td><td>1</td><td>0</td></tr><tr><td>I prefer using the 2D notebook environment over the traditional notebook environment.</td><td>0</td><td>3</td><td>4</td><td>0</td><td>0</td></tr></table>
370
+
371
+ Figure 7: Survey results from Study 2
372
+
373
+ The reduced scrolling is a result of 2D's ability to bring more cells nearer to each other. Theoretically, a 2D layout can reduce navigation distances to roughly the square root of the corresponding 1D distances. Practically, 2D enabled non-linear code structures, such as parallel analyses, to be horizontally aligned in columns, thus supporting common data science tasks such as comparison. 2D enabled more such relationships to be encoded into the space. In contrast, 1D encodes only a single ordering, and would require complex refactoring tools to enable various types of parallel analyses and comparisons.
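+
+ To make the square-root claim concrete with a back-of-envelope model (our illustration, not an analysis from the study): if $n$ cells are split evenly into $c$ columns, a worst-case traversal costs about $n/c$ cell-heights vertically plus $c$ column-widths horizontally, and this total is minimized when the two terms balance:
+
+ $$
+ \frac{d}{dc}\left(\frac{n}{c} + c\right) = 1 - \frac{n}{c^{2}} = 0 \;\Rightarrow\; c = \sqrt{n}, \qquad \min_{c}\left(\frac{n}{c} + c\right) = 2\sqrt{n}.
+ $$
+
+ For a 100-cell notebook, a 10-column layout thus cuts the worst-case traversal from about 100 cell-heights to about 20 cell-lengths; the model ignores differences between cell heights and column widths, but it captures the square-root intuition.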
374
+
375
+ ### 8.2 Usability Benefits
376
+
377
+ 2D layouts appear more usable for both basic and more complex tasks. Based on the results from Study 1, as seen in Figures 5 and 6, navigating and finding information, comparing results, and data science tasks such as organizing and cleaning may be easier in a 2D notebook. This may be due to more effective use of screen space to display more information at once in an organized manner, along with more efficient scrolling options.
378
+
379
+ In Study 2, several participants found the 2D environment provided an advantage in locating code or data. The ability to break the notebook into distinct sections meant they did not have to first search for a section of their notebook and then search within it for the information they needed; the multi-column layout let users more easily find what they were looking for, since they could instantly identify the section of the notebook they needed.
380
+
381
+ Additionally, participants found that the 2D environment made it more convenient to refer to other cells. In the 1D environment, users would need to move two cells close to each other in order to easily compare their contents, often disrupting the organization of the notebook. In the 2D environment, participants were able to maintain the organization of cells in their respective sections while still being able to place cells next to each other for ease of comparison.
382
+
383
+ ### 8.3 Effects of Hardware on 2D Computational Notebooks
384
+
385
+ Different setups, especially in the second study, made for different experiences with 2D Jupyter. Specifically, both screen size and scrolling device (e.g. trackpad vs. mouse) affected usability. Larger screen sizes afforded the ability to visualize more columns at once and to better ensure those columns were sufficiently wide for the code. This enabled physical navigation more effectively than smaller screens and thus led to less scrolling. Furthermore, scrolling devices which easily enable horizontal scrolling through simple gestures, such as trackpads, appeared to provide a better user experience with the 2D layout than a standard mouse or vertical scroll wheel.
386
+
387
+ There is a tradeoff between vertical and horizontal scrolling: using more columns reduces vertical scrolling but increases horizontal scrolling. On very small displays, some users in Study 2 indicated that heavy use of both types of scrolling may be worse than vertical scrolling alone. However, large widescreen displays, increasingly common in data science workspaces, mitigate this tradeoff by minimizing the horizontal scrolling needed to traverse the notebook, enabling multiple columns to greatly reduce vertical scrolling. Even with a modest 24" display, as in Study 1, the benefit was significant, and it would likely increase with larger displays.
388
+
389
+ ### 8.4 Design Challenges & Opportunities
390
+
391
+ While 2D computational notebooks may provide efficiency and usability benefits, especially with the right setup, there is still room for improvement in their design.
392
+
393
+ Column width in the many-column design pattern may impact user experience; if the columns are too wide, fewer columns fit on the screen, but if the columns are too narrow, visuals may become too small to read easily and the screen may feel cluttered, potentially leading to confusion that affects performance. Managing column width thus becomes an important factor; this is currently doable in 2D Jupyter through manual resizing of columns. Still, it may be beneficial to provide functionality that resizes columns to an ideal width through a quick interaction, as is done in spreadsheets.
394
+
395
+ Additional navigation options tailored to different 2D layouts may also benefit users. Navigating 1D computational notebooks with arrow keys can be quicker than navigating with manual scrolls, and the same may apply to 2D computational notebooks; the challenge is whether and how to incorporate the left and right arrow keys (or even diagonals) to quickly navigate. One option is to borrow the spreadsheet metaphor and have each arrow key move to the adjacent cell in the direction of the key. Making individual columns independently scrollable may also benefit navigation, especially when working on smaller screens. This would allow longer columns to be scrolled without impacting the view of shorter columns.
396
+
397
+ ### 8.5 Limitations
398
+
399
+ #### 8.5.1 Bugs in Extension
400
+
401
+ At the time of conducting both studies, the 2D Jupyter extension contained some bugs that could affect user experience. In particular, the drag and drop feature occasionally did not allow the user to release the cell at an intended location, forcing the user to reload the page. Additionally, the layout of the 2D environment was sometimes not properly saved between kernel sessions, requiring the user to reorganize their notebook before resuming work. Finally, the extension required users to manually save their work as the autosave feature built into Jupyter Notebooks did not work with the extension. These bugs did not affect Study 1 except for contributing to the technical issues that led to discarding one participant's data.
402
+
403
+ #### 8.5.2 2D Layouts other than Multi-Column
404
+
405
+ Given that both studies used an extension which does not, at the time of this writing, fully support 2D layouts other than multi-column, care must be taken in attributing benefits to other 2D layouts. Some of the advantages of the multi-column layout may be due to how compact it is; less compact 2D layouts might not see the same level of benefit in some areas, like reduced scrolling and task efficiency. Evaluating other 2D layouts is a subject for future work.
406
+
407
+ ## 9 Conclusion
408
+
409
+ Computational notebooks are a potent tool for creating and presenting computational narratives; the current 1D layout of notebooks, while elegant in its simplicity, imposes limitations that make comparative analyses and navigation of longer, non-linear notebooks, among other tasks, more difficult. We therefore developed and evaluated the potential of 2D layouts for computational notebooks, starting with the multi-column layout enabled by our 2D Jupyter extension.
410
+
411
+ The multi-column 2D layout provides benefits in efficiency and usability for common data science tasks such as comparative analyses by enabling greater physical navigation, thus minimizing the scope and need for virtual navigation (scrolling). In addition, the multi-column layout provides an effective sectioning mechanism that may help combat messiness along with providing more efficient navigation. Overall, 2D layouts have the potential to improve upon the current state of computational notebooks and provide a novel way to enhance the creation and presentation of non-linear computational narratives through enabling Space to Think.
412
+
413
+ ## References
414
+
415
+ [1] C. Andrews, A. Endert, and C. North. Space to think: large high-resolution displays for sensemaking. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 55-64, 2010.
416
+
417
+ [2] A. Bragdon, S. P. Reiss, R. Zeleznik, S. Karumuri, W. Cheung, J. Kaplan, C. Coleman, F. Adeputra, and J. J. LaViola Jr. Code bubbles: rethinking the user interface paradigm of integrated development environments. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering-Volume 1, pp. 455-464, 2010.
418
+
419
+ [3] A. Burks, L. Renambot, and A. Johnson. Vissnippets: A web-based system for impromptu collaborative data exploration on large displays. In Practice and Experience in Advanced Research Computing, pp. 144-151. 2020.
420
+
421
+ [4] S. Chattopadhyay, I. Prasad, A. Z. Henley, A. Sarma, and T. Barik. What's wrong with computational notebooks? pain points, needs, and design opportunities. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2020.
422
+
423
+ [5] K. Davidson, L. Lisle, K. Whitley, D. A. Bowman, and C. North. Exploring the evolution of sensemaking strategies in immersive space to think. IEEE Transactions on Visualization and Computer Graphics, 2022.
424
+
425
+ [6] J. C. De Winter and D. Dodou. Five-point Likert items: t test versus Mann-Whitney-Wilcoxon. Practical Assessment, Research & Evaluation, 15(11):1-12, 2010.
426
+
427
+ [7] H. Dong, S. Zhou, J. L. Guo, and C. Kästner. Splitting, renaming, removing: A study of common cleaning activities in jupyter notebooks. In 2021 36th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW), pp. 114-119, 2021. doi: 10.1109/ASEW52652.2021.00032
428
+
429
+ [8] Google. Welcome to colaboratory - colaboratory, 2022.
430
+
431
+ [9] J. Harden, E. Christman, N. Kirshenbaum, J. Wenskovitch, J. Leigh, and C. North. Exploring organization of computational notebook cells in 2d space. In 2022 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), pp. 1-6. IEEE, 2022.
432
+
433
+ [10] A. Head, F. Hohman, T. Barik, S. M. Drucker, and R. DeLine. Managing messes in computational notebooks. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2019.
434
+
435
+ [11] E. A. Inc. Einblick - multiplayer python notebooks on an interactive canvas, 2023.
436
+
437
+ [12] S. Inc. Collaborative calculation and data science, 2023.
438
+
439
+ [13] P. Jupyter. Project jupyter - home, 2021.
440
+
441
+ [14] S. Kandel, A. Paepcke, J. M. Hellerstein, and J. Heer. Enterprise data analysis and visualization: An interview study. IEEE Transactions on Visualization and Computer Graphics, 18(12):2917-2926, 2012.
442
+
443
+ [15] M. B. Kery, A. Horvath, and B. A. Myers. Variolite: Supporting exploratory programming by data scientists. In CHI, vol. 10, pp. 3025453-3025626, 2017.
444
+
445
+ [16] M. B. Kery and B. A. Myers. Interactions for untangling messy history in a computational notebook. In 2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), pp. 147-155. IEEE, 2018.
446
+
447
+ [17] M. B. Kery, M. Radensky, M. Arya, B. E. John, and B. A. Myers. The story in the notebook: Exploratory data science using a literate programming tool. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-11, 2018.
448
+
449
+ [18] N. Kirshenbaum, K. Davidson, J. Harden, C. North, D. Kobayashi, R. Theriot, R. S. Tabalba Jr, M. L. Rogers, M. Belcaid, A. T. Burks, et al. Traces of time through space: Advantages of creating complex canvases in collaborative meetings. Proceedings of the ACM on Human-Computer Interaction, 5(ISS):1-20, 2021.
450
+
451
+ [19] T. Kluyver, B. Ragan-Kelley, F. Pérez, B. E. Granger, M. Bussonnier, J. Frederic, K. Kelley, J. B. Hamrick, J. Grout, S. Corlay, et al. Jupyter Notebooks - a publishing format for reproducible computational workflows, vol. 2016, 2016.
452
+
453
+ [20] L. Lisle, X. Chen, J. E. Gitre, C. North, and D. A. Bowman. Evaluating the benefits of the immersive space to think. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 331-337. IEEE, 2020.
454
+
455
+ [21] L. Lisle, K. Davidson, E. J. Gitre, C. North, and D. A. Bowman. Sensemaking strategies with immersive space to think. In 2021 IEEE Virtual Reality and 3D User Interfaces (VR), pp. 529-537. IEEE, 2021.
456
+
457
+ [22] E. S. Liu, D. A. Lukes, and W. G. Griswold. Refactoring in computational notebooks. ACM Transactions on Software Engineering and Methodology, 2022.
458
+
459
+ [23] J. Liu, N. Boukhelifa, and J. R. Eagan. Understanding the role of alternatives in data analysis practices. IEEE transactions on visualization and computer graphics, 26(1):66-76, 2019.
460
+
461
+ [24] P. V. Merzlykin, M. V. Marienko, and S. V. Shokaliuk. Cocalc tools as a means of open science and its didactic potential in the educational process. In Proceedings of the 1st Symposium on Advances in Educational Technology, vol. 1, pp. 109-118, 2022.
462
+
463
+ [25] L. Pavanatto, C. North, D. A. Bowman, C. Badea, and R. Stoakley. Do we still need physical monitors? an evaluation of the usability of ar virtual monitors for productivity work. In 2021 IEEE Virtual Reality and 3D User Interfaces (VR), pp. 759-767. IEEE, 2021.
464
+
465
+ [26] F. Perez and B. E. Granger. Project jupyter: Computational narratives as the engine of collaborative data science. Retrieved September, 11(207):108, 2015.
466
+
467
+ [27] J. M. Perkel. Why jupyter is data scientists' computational notebook of choice. Nature, 563(7732):145-147, 2018.
468
+
469
+ [28] P. Pirolli and S. Card. The sensemaking process and leverage points for analyst technology as identified through cognitive task analysis. In Proceedings of international conference on intelligence analysis, vol. 5, pp. 2-4. McLean, VA, USA, 2005.
470
+
471
+ [29] D. Raghunandan, A. Roy, S. Shi, N. Elmqvist, and L. Battle. Code code evolution: Understanding how people change data science notebooks over time. arXiv preprint arXiv:2209.02851, 2022.
472
+
473
+ [30] P. Reipschlager, T. Flemisch, and R. Dachselt. Personal augmented reality for information visualization on large interactive displays. IEEE Transactions on Visualization and Computer Graphics, 27(2):1182- 1192, 2020.
474
+
475
+ [31] H. Romat, N. Henry Riche, K. Hinckley, B. Lee, C. Appert, E. Pietriga, and C. Collins. Activeink: (th) inking with data. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-13, 2019.
476
+
477
+ [32] A. Rule, A. Tabard, and J. D. Hollan. Exploration and explanation in computational notebooks. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2018.
478
+
479
+ [33] Z. Shang, E. Zgraggen, B. Buratti, P. Eichmann, N. Karimeddiny, C. Meyer, W. Runnels, and T. Kraska. Davos: a system for interactive data-driven decision making. Proceedings of the VLDB Endowment, 14(12):2893-2905, 2021.
480
+
481
+ [34] J. Singer. Notes on notebooks: Is jupyter the bringer of jollity? In Proceedings of the 2020 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software, pp. 180-186, 2020.
482
+
483
+ [35] K. Vlasenko, O. Chumak, D. Bobyliev, I. Lovianova, and I. Sitak. Development of an online-course syllabus" operations research oriented to cloud computing in the cocalc system". In ICTERI, pp. 278-291, 2020.
484
+
485
+ [36] Z. J. Wang, K. Dai, and W. K. Edwards. Stickyland: Breaking the linear presentation of computational notebooks. In CHI Conference on Human Factors in Computing Systems Extended Abstracts, pp. 1-7, 2022.
486
+
487
+ [37] N. Weinman, S. M. Drucker, T. Barik, and R. DeLine. Fork it: Supporting stateful alternatives in computational notebooks. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2021.
488
+
489
+ [38] J. Wenskovitch, J. Zhao, S. Carter, M. Cooper, and C. North. Albireo: An interactive tool for visually summarizing computational notebook structure. In 2019 IEEE Visualization in Data Science (VDS), pp. 1-10. IEEE, 2019.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/Gkogn48LeI/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,684 @@
1
+ § "THERE IS NO REASON ANYBODY SHOULD BE USING 1D ANYMORE": DESIGN AND EVALUATION OF 2D JUPYTER NOTEBOOKS
2
+
3
+ Category: Research
4
+
5
6
+
7
+ Figure 1: From Left to Right: Finding & Comparing Results 2D Notebook, Parameter Tuning 2D Notebook, Code Comparison 2D Notebook Clip
8
+
9
+ § ABSTRACT
10
+
11
+ Current computational notebooks, such as Jupyter, are a popular tool for data science and analysis. However, they use a 1D list structure for cells that introduces and exacerbates user issues, such as messiness, tedious navigation, inefficient use of large screen space, performance of non-linear analyses, and presentation of non-linear narratives. To ameliorate these issues, we designed a prototype extension for Jupyter Notebooks that enables 2D organization of computational notebook cells. In this paper, we present two evaluative studies to determine whether "2D computational notebooks" provide advantages over the current computational notebook structure. From these studies, we found empirical evidence that 2D computational notebooks provide enhanced efficiency and usability. We also gathered design feedback which may inform future works. Overall, the prototype was positively received, with some users expressing a clear preference for 2D computational notebooks even at this early stage of development.
12
+
13
+ Index Terms: Human-centered computing-Human Computer Interaction (HCI); Human-centered computing-Visualization
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ Computational notebooks like Jupyter [13, 19], used to construct and present computational narratives [27, 32, 34], struggle with non-linear analyses, such as comparative analyses, and non-linear narratives [9, 32], as well as navigating longer notebooks [9], preventing and managing messiness [7, 10, 16, 22, 23, 32], and efficiently using large display spaces [9]. We suggest that part of the reason for these issues is the current 1D, top-to-bottom organization of notebook cells.
18
+
19
+ Weinman et al.'s work on Fork-It [37] showed 2D space can be helpful; they introduced forking, the temporary creation of split columns in an otherwise 1D notebook. While this work helps non-linear analyses, it does not easily accommodate non-linear narratives, which may benefit from a persistent multiple-column approach. Wang, Dai, and Edwards [36] also sought to shift computational notebooks from the current 1D structure with Stickyland, which allows users to "stick" cells to a dock that remains at the top of the computational notebook interface even when scrolling. Harden et al. [9] explored how users would arrange cells in 2D and found three different patterns: linear (with either split cells or split columns), multi-column, and workboard. This work demonstrated alternative organizations of cells, some of which would not be possible in the prior works mentioned; it also suggests that computational notebook users could benefit from 2D space usage for organizing notebook cells in a more flexible yet persistent manner.
20
+
21
+ This paper contributes to computational notebook research through evaluations of a 2D layout extension for computational notebooks. We focused on the following research questions:
22
+
23
+ 1. When comparing 1D and 2D layouts, which mode supports more efficient user completion of data science tasks, such as information retrieval, results comparison, parameter tuning, and code comparison?
24
+
25
+ 2. What strengths and weaknesses might 2D layouts have compared to 1D layouts?
26
+
27
+ 3. Would users find 2D layouts more usable than 1D layouts?
28
+
29
+ 4. Would users prefer to use 2D layouts for computational notebook cells?
30
+
31
+ To answer these questions, we designed a Jupyter Notebook extension that enables a 2D multi-column cell layout. We then conducted two user studies using this extension where users performed a series of tasks in both 1D and 2D layouts, followed by qualitative data gathering through surveys and, in the second study, interviews. The first study used pre-made notebooks to evaluate whether the extension enhances performance and usability, while the second study focused on creation of a 2D notebook from scratch for a data science task. We found 2D layouts provided more efficient user task performance and enhanced usability over 1D layouts. Users overwhelmingly preferred the 2D notebooks, and made use of available display space to organize notebooks such that more cells are simultaneously visible. We also noted some design challenges for 2D layouts, including managing column width in a multi-column layout.
32
+
33
+ § 2 BACKGROUND AND RELATED WORKS
34
+
35
+ This work builds on two key areas of research: computational notebooks and Space to Think.
36
+
37
+ § 2.1 COMPUTATIONAL NOTEBOOKS
38
+
39
+ Computational notebooks support incremental and iterative analysis [14, 32] and computational narrative formation through interleaving code, visualizations, and text [26, 32]. However, computational notebook users face various issues and pain points [4], such as messiness [10, 17, 23, 32], dealing with non-linear analyses and narratives [32], and navigating longer notebooks [9]. These issues may be exacerbated by the current 1D structure of computational notebooks.
40
+
41
42
+
43
+ Figure 2: Notebook Controls for 2D Jupyter extension
44
+
45
+ Head et al. [10] showed messiness can come from disorder, deletion, and dispersal, where disorder means run order and presentation order are different, deletion means overwriting or deleting necessary code, and dispersal means related cells are far apart. Many tools have been developed to help deal with messiness, from Head et al.'s work [10], to cell dependency graph visualization [38] to version control systems for computational notebooks [15, 16]. The 1D structure may exacerbate messiness given the looping nature of sensemaking in computational notebooks [28, 29], so 2D space usage may help minimize it.
46
+
47
+ Scrolling through a long notebook can be tedious and negatively affect various tasks like debugging and cleaning. While Google Colaboratory [8] enables jumping to different sections through a table of contents, the 1D structure can still result in tedious scrolling.
48
+
49
+ Exploration of 2D space usage by Weinman et al. [37] and Harden et al. [9] produced positive responses. Within the bounded 2D of Fork-It [37], users did more than just comparative analyses; they used the split column structure to organize code and contain messes. Harden et al.'s [9] findings corroborate these potential use cases.
50
+
51
+ § 2.2 SPACE TO THINK
52
+
53
+ Andrews et al. [1] found large, high-resolution displays benefit sensemaking in 2 key ways through what they called "Space to Think": external memory and semantic encoding. External memory means more information can be stored on screen space instead of in one's mind, which allows physical navigation, like moving one's head, to replace virtual navigation, like scrolling or changing tabs. Semantic encoding means users can group related items spatially based on their mental model of the connection between items; in short, users can externalize their understanding onto the screen. Recent studies [5, 20, 21] have expanded this concept to the space provided by virtual and augmented reality or cited Space to Think as an influence on their design [25, 30, 31]. Kirshenbaum et al. [18] found Space to Think can also benefit collaborative meetings.
54
+
55
+ Current computational notebook systems with their 1D structures do not adequately use Space to Think without clumsy workarounds like opening the same notebook multiple times and arranging side-by-side. 2D space usage may enable Space to Think in data science tasks [9]. To this end, some recent tools, such as VisSnippets [3], Einblick [11, 33], CoCalc [12, 24, 35], and Code Bubbles [2], have begun to explore 2D layouts of cells using a whiteboard metaphor.
56
+
57
+ § 3 DESIGN OF 2D JUPYTER NOTEBOOK EXTENSION
58
+
59
+ Harden et al. [9] found two main categories of 2D layouts for computational notebooks based on user-generated layouts: multi-column and workboard, both of which are supported by the 2D Jupyter extension we developed and evaluated; the extension can be found at https://github.com/elizabethc99/2D-Jupyter on GitHub. Multi-column is fully supported. Workboard, or more complex structures such as directed graphs and nested columns and rows, is enabled by freeform dragging of cells.
60
+
61
+ To support multi-column layouts, 2D Jupyter enables creation and deletion of columns, resizing and re-ordering of columns, adding cells to a column, and moving cells from one column to another; this is done through user interface (UI) additions, as seen in Figure 2. The Plus and Minus buttons on the main toolbar create and delete individual columns, respectively. Each column also has a toolbar at its top; the bold Plus button there adds a cell to the column, the left and right arrows reorder the column in the arrow's direction, and the gray box can be clicked and dragged to resize the column. Finally, cells can be dragged to another column by clicking and holding the gray box on each cell's left side.
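+
+ Since Jupyter cells can carry arbitrary JSON metadata, one plausible way for an extension like this to persist column assignments is to tag each cell with its column index; we have not verified 2D Jupyter's actual storage format, so the "column" key below is hypothetical:
+
+ ```python
+ # Hypothetical layout persistence: each cell's metadata records its column,
+ # and columns are rebuilt on load. The "column" key is illustrative only.
+ from collections import defaultdict
+
+ notebook = {
+     "cells": [
+         {"source": "import pandas as pd", "metadata": {"column": 0}},
+         {"source": "df = pd.read_csv('va.csv')", "metadata": {"column": 0}},
+         {"source": "df2 = pd.read_csv('tx.csv')", "metadata": {"column": 1}},
+     ]
+ }
+
+ columns = defaultdict(list)
+ for cell in notebook["cells"]:
+     columns[cell["metadata"].get("column", 0)].append(cell["source"])
+
+ for index, sources in sorted(columns.items()):
+     print(f"column {index}: {len(sources)} cell(s)")
+ ```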
62
+
63
+ To enable workboard layouts, each cell can be dragged and placed outside of the columns, as seen in the freeform cell in Figure 2. More advanced workboard features, such as arrows to connect cells or other whiteboard annotations are not yet implemented. For now, we suggest using workboard freeform cells for more ephemeral uses such as scratch space, viewing data, and other tasks not relevant to the final computational narrative.
64
+
65
+ § 4 STUDY 1 METHODOLOGY
66
+
67
+ The goal of our first study is to measure and compare user task performance in 1D and 2D notebooks. We therefore conducted a controlled study consisting of a pre-screening questionnaire, a set of user performance tasks, and survey questions. The study design had one within-subjects variable, layout, with two treatments, 1D and 2D; and one between-subjects variable, order, with two treatments, 1D-first and 2D-first. The user tasks focused on research question 1; participants completed three task sections in both 1D and 2D. The surveys focused on research questions 2-4.
68
+
69
+ § 4.1 RECRUITMENT AND SCREENING
70
+
71
+ We recruited 89 participants via academic listservs of students and faculty from a large state university. Each participant completed a screening questionnaire asking whether they had experience with both Python and computational notebooks such as Jupyter. We invited the 62 participants with experience in both tools to continue; 31 completed the study, with 1 of these 31 participants' data discarded due to technical issues. 16 participants, including the one whose data was discarded, were assigned to 1D-first, while the other 15 were assigned to 2D-first, effectively leaving 15 participants in each treatment. Participants were randomly assigned to a group, with the only restriction being that group sizes were balanced as equally as possible.
72
+
73
+ § 4.2 HARDWARE FOR USER STUDY
74
+
75
+ For the user study tasks, participants used an iMac computer with a 24-inch monitor and either an iMac mouse with a built-in trackpad surface for horizontal and vertical scrolling, or an external trackpad supporting horizontal and vertical scrolling as well as button clicks. The monitor was wide enough to display 4 to 5 columns of the notebook at a time.
76
+
77
+ § 4.3 TASK DESIGNS & RATIONALES
78
+
79
+ The tasks were designed to mimic common data science scenarios performed in computational notebooks. We created 6 computational notebooks (3 1D, 3 2D) for this study. Each notebook was designed for one of three task sets: Finding & Comparing Results, Parameter Tuning, and Code Comparison. Each combination of layout (1D, 2D) and task set had one notebook, and each task set's two notebooks were slightly different so participants could not memorize answers between layouts. However, the differences were designed not to affect the relative difficulty of the tasks in 1D and 2D. Users had the notebooks open, one at a time, on the iMac, while the user study survey, with questions and instructions, was open on a separate laptop.
80
+
81
+ To compare 1D vs. 2D, we measured time to completion and accuracy for all tasks; we also measured the number of times and amount of time spent scrolling for the code comparison task. 16 participants started with the 1D notebooks first, and 15 participants started with 2D first; this design, along with training in the first notebook layout type for each person, helped counterbalance the study to minimize bias from repeated tasks. One 1D-first participant's data was discarded due to technical issues. Each participant took at most 1 hour to complete the study.
82
+
83
+ § 4.3.1 FINDING & COMPARING RESULTS TASK
84
+
85
+ Harden et al. [9] found that users expected finding and comparing tasks to be better in 2D layouts than in 1D layouts. Thus, this task set sought to measure statistically whether such a benefit exists.
86
+
87
+ The notebooks for this task set contained COVID-19 data analysis for the USA by state and then for 5 individual states by county, as seen in the left image in Figure 1. Sections 1-3 of these notebooks had cells for imports, function definitions, and data preparation, while Sections 4-9 had cells that analyzed and visualized results for each geographic region as a scatterplot and 3 bar charts. In data science, such notebooks often result from copying-and-pasting cells for parallel analyses of different data subsets. The 1D notebook design concatenated these sections into a single long list of cells. In the 2D notebook, each of the 9 sections was separated into its own column of cells, with columns arranged left to right. This notebook design was based on common layout strategies previously observed by Harden et al. [9], where a common strategy was to organize parallel analyses in side-by-side columns to enable easy comparison.
88
+
89
+ For this task set, we included a find task, a graph comparison task, and a numerical comparison task. We did not allow participants to look over the notebook before beginning the task set.
90
+
91
+ In the find task, participants had to locate information in the notebook based on the notebook structure. The question was of the form "Which state's analysis is found between the analysis of STATE1 data and the analysis of STATE2 data?" We measured the time it took each participant to retrieve the information in 1D vs. 2D notebook layouts. The hypothesis was that spatial 2D columns would enable more rapid recognition of and access to relevant notebook sections.
92
+
93
+ In the graph comparison task, participants had to compare results in several different charts throughout the notebook. The question was of the form "Out of those shown in the relevant bar charts, which county in which state, EXCLUDING the ALL STATES section, had the highest number for ATTRIBUTE of COVID-19?" We measured the time it took each participant to compare charts in 1D vs. 2D notebook layouts. The hypothesis was that the 2D column structure, which aligned parallel analyses, would enable faster comparison by horizontally scrolling through the corresponding charts, whereas the 1D notebook would require significant vertical scrolling and searching for each chart to compare.
94
+
95
+ Similarly, in the numerical comparison task, participants were asked a question of the form "Which section's scatterplot graph's line of best fit least/best fits the data (coefficient of determination closest to 0/1)?" The coefficient of determination was displayed above each scatterplot. We measured the time it took each participant to compare numerical results in 1D vs. 2D notebook layouts.
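+ To illustrate, the following is a minimal sketch of the kind of cell each analysis section contained; the data, variable names, and plotting code here are hypothetical stand-ins, not the study's actual notebooks.
+
+ ```python
+ # Sketch of a per-section scatterplot cell (synthetic placeholder data).
+ import numpy as np
+ import matplotlib.pyplot as plt
+
+ rng = np.random.default_rng(0)
+ x = rng.uniform(0, 100, 50)            # e.g., cases per county
+ y = 2.5 * x + rng.normal(0, 30, 50)    # e.g., deaths, with noise
+
+ # Line of best fit and coefficient of determination (R^2).
+ slope, intercept = np.polyfit(x, y, 1)
+ y_hat = slope * x + intercept
+ r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
+
+ plt.scatter(x, y)
+ plt.plot(x, y_hat, color="red")
+ plt.title(f"Coefficient of determination: {r2:.3f}")  # shown above each scatterplot
+ plt.show()
+ ```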
96
+
97
+ § 4.3.2 PARAMETER TUNING TASK
98
+
99
+ A common problem in data science involves testing various parameter values for an ML model. The notebooks for this task, as seen in the middle image in Figure 1, contained a K-Nearest Neighbors (KNN) algorithm used to analyze network stability data. Participants were given the following instructions: "You will be asked questions that require tuning the parameter 'k' in Section 1 and choosing the distance metric in Section 4. Only run the necessary cells (the "k-value" cell in Section 1, and the cells in Section 4) to test each possible parameter set (k-value and distance metric)." In each notebook, the cell which assigns the k-value was in the first section, while the code for calculating the distances, making predictions, and determining accuracy on the test set was in the fourth section; participants were not allowed to move cells. Participants were asked three questions in the following order, with different k-value options for 1D and 2D:
100
+
101
+ 1. Which of the following k-values produces the most accurate model with the given dataset for the Euclidean distance metric?
102
+
103
+ 2. Which of the following k-values produces the most accurate model with the given dataset for the Manhattan distance metric?
104
+
105
+ 3. Given each distance metric with its optimal k-value, which distance metric produces the most accurate model on the given dataset?
106
+
107
+ In data science endeavors, code near the beginning of a notebook can influence results later on in the notebook; while it is possible to move such dispersed cells closer to each other, such re-ordering is not always feasible depending on the design of the analysis. Thus, we sought to simulate a situation in which one wants to retain the given order while continuing their analysis. The goal here is to see if 2D notebooks, with a layout where each section has its own column, can minimize the effects of dispersal [10] by making cells that are far apart in a 1D layout effectively closer on the screen, and thereby improve performance. Thus, we measured how long it took participants to answer all three questions together.
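+ As a concrete illustration, the two dispersed pieces of such a notebook might look like the following sketch; the dataset, variable names, and use of scikit-learn are hypothetical stand-ins for the study's actual KNN cells.
+
+ ```python
+ # --- Section 1: parameter cell (re-run this cell to try each k-value) ---
+ k = 5
+
+ # --- Section 4: distance metric, predictions, and test accuracy ---
+ from sklearn.datasets import make_classification
+ from sklearn.model_selection import train_test_split
+ from sklearn.neighbors import KNeighborsClassifier
+
+ X, y = make_classification(n_samples=500, random_state=0)  # stand-in for the network stability data
+ X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
+
+ model = KNeighborsClassifier(n_neighbors=k, metric="euclidean")  # or metric="manhattan"
+ model.fit(X_train, y_train)
+ print(f"k={k}, accuracy={model.score(X_test, y_test):.3f}")
+ ```
+
+ In the 1D notebook these two cells sit far apart in one long scroll; in the 2D notebook they sit in the first and fourth columns, often visible at the same time.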
108
+
109
+ § 4.3.3 CODE COMPARISON TASK
110
+
111
+ Data scientists often need to compare the code for multiple versions of a model to understand differences. The notebooks for this task, as seen in the right image in Figure 1, contained two runs of a K-Nearest Neighbors ML algorithm with several code differences between them. Participants had to choose which items from a list of options, ordered in terms of appearance, differed between the two runs. The 2D notebook organized the two runs into adjacent columns. The list of differences included items such as the following:
112
+
113
+ 1. The cutoff number for the training and testing splits
114
+
115
+ 2. Different distance metrics (Manhattan, Euclidean) used
116
+
117
+ 3. The variable name for the distance matrix
118
+
119
+ 4. The value of k (number of nearest neighbors)
120
+
121
+ The goal of this task was to test how quickly users can find differences between two similar sets of code, which often happens when debugging model errors. Given that Harden et al. [9] found significant skepticism about the potential of 2D notebook layouts for debugging, it makes sense to test this important debugging sub-task.
122
+
123
124
+
125
+ Table 1: P-values for 2-Factor ANOVA by Task and Effect
+
+ | Task | Order | Layout | Interaction | Improvement by 2D |
+ | --- | --- | --- | --- | --- |
+ | Find | **0.024** | 0.271 | 0.378 | N/A |
+ | Graph Comparison | 0.106 | **<0.001** | **0.032** | 32% |
+ | Number Comparison | **0.023** | **<0.001** | **0.007** | 46% |
+ | Parameter Tuning | 0.934 | **0.007** | 0.219 | 19% |
+ | Code Comparison | 0.840 | **<0.001** | **<0.001** | 34% |
+
+ Bolded values are statistically significant with a 0.05 threshold. All other values are not statistically significant.
149
+
150
+ § 4.4 SURVEY QUESTIONS DESIGN
151
+
152
+ Likert-scale questions were used at the end of both the 1D and 2D task sections, and after both sections were completed. The 5 questions at the end of the 1D and 2D task sections focused on rating each layout individually, without comparison to the other, while the 13 questions at the end focused on comparing 1D and 2D layouts; these 13 questions were largely taken from Harden et al.'s experiment [9]. After the 13 questions was a comment box where users could elaborate on any answers they gave.
153
+
154
+ The questions after each of the 1D and 2D task sections focused on perceptions of usability for the layout on the given tasks; we compared answers between layouts to better understand whether users saw potential improvements in 2D layouts over 1D layouts.
155
+
156
+ § 4.5 DATA ANALYSIS PROCESS
157
+
158
+ We divided the quantitative data analysis for Study 1 into 3 areas: Efficiency Measurements, Survey Questions, and Scrolling Time.
159
+
160
+ § 4.5.1 EFFICIENCY MEASUREMENTS
161
+
162
+ We used 2-Factor ANOVA to test whether layout (1D or 2D), order (1D-first or 2D-first), and the interaction between layout and order affected time to completion; significant results were followed up with Tukey's Test to determine the nature of the effects. We also compared the mean differences found by Tukey's Test to the average completion time for 1D, expressed as percent time saved.
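+ For readers who want to reproduce this style of analysis, the following is a minimal sketch using statsmodels; the DataFrame here is synthetic placeholder data, not our study data.
+
+ ```python
+ # Sketch of the 2-Factor ANOVA and Tukey follow-up on synthetic data.
+ import numpy as np
+ import pandas as pd
+ import statsmodels.api as sm
+ from statsmodels.formula.api import ols
+ from statsmodels.stats.multicomp import pairwise_tukeyhsd
+
+ rng = np.random.default_rng(0)
+ df = pd.DataFrame({
+     "layout": ["1D"] * 30 + ["2D"] * 30,
+     "order": (["1D-first"] * 15 + ["2D-first"] * 15) * 2,
+     "time": np.concatenate([rng.normal(120, 20, 30), rng.normal(80, 20, 30)]),
+ })
+
+ model = ols("time ~ C(layout) * C(order)", data=df).fit()
+ print(sm.stats.anova_lm(model, typ=2))  # p-values for layout, order, and interaction
+
+ tukey = pairwise_tukeyhsd(df["time"], df["layout"])
+ mean_1d = df.loc[df["layout"] == "1D", "time"].mean()
+ print(tukey)
+ print(f"Percent time saved: {100 * abs(tukey.meandiffs[0]) / mean_1d:.0f}%")
+ ```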
163
+
164
+ § 4.5.2 SURVEY QUESTIONS
165
+
166
+ For the Post-1D and Post-2D questions, we created and analyzed a bar chart of average rating by order and layout, and a heatmap of ratings. We also tested the statistical significance of the differences in ratings using a paired t-test and the Wilcoxon test, inspired by work by De Winter and Dodou on analyzing Likert-scale questions [6]. For the Post-Experiment questions, we made and analyzed a heatmap of ratings. Finally, we analyzed qualitative comments for themes.
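+ A minimal sketch of these two tests, using synthetic paired ratings in place of our survey data:
+
+ ```python
+ # Paired t-test and Wilcoxon signed-rank test on paired Likert ratings.
+ import numpy as np
+ from scipy.stats import ttest_rel, wilcoxon
+
+ rng = np.random.default_rng(0)
+ post_1d = rng.integers(-3, 2, size=30)                       # 7-point scale coded -3..+3
+ post_2d = np.clip(post_1d + rng.integers(1, 4, size=30), -3, 3)
+
+ print(ttest_rel(post_2d, post_1d))   # parametric test on the paired differences
+ print(wilcoxon(post_2d, post_1d))    # non-parametric counterpart
+ ```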
167
+
168
+ § 4.5.3 SCROLLING TIME
169
+
170
+ To determine the amount of scrolling done in 1D vs 2D, we recorded scrolling events, including the time taken to scroll, while watching the footage for each participant's Code Comparison task work in 1D and 2D. We limited events to scrolls for navigation as opposed to micro-scrolling events that do not bring new cells into view; we did this by only considering those scrolling events that lasted for at least 2 seconds. To determine scrolling endpoints, we looked for breaks between scrolls lasting at least 2 seconds; scrolling events with smaller breaks than 2 seconds were considered as 1 event for the purpose of this analysis.
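+ The coding rules amount to a simple merge-then-filter pass over the annotated scroll bursts; a sketch with hypothetical timestamps (in seconds):
+
+ ```python
+ # Merge scroll bursts separated by < 2 s, then drop events shorter than 2 s.
+ raw_scrolls = [(3.0, 4.5), (5.5, 9.0), (30.0, 31.0), (40.0, 47.5)]  # (start, end) pairs
+ MIN_BREAK = 2.0
+ MIN_EVENT = 2.0
+
+ events = [list(raw_scrolls[0])]
+ for start, end in raw_scrolls[1:]:
+     if start - events[-1][1] < MIN_BREAK:
+         events[-1][1] = end          # gap too small: same event
+     else:
+         events.append([start, end])  # new event
+
+ events = [(s, e) for s, e in events if e - s >= MIN_EVENT]
+ print(events, sum(e - s for s, e in events))  # -> [(3.0, 9.0), (40.0, 47.5)] 13.5
+ ```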
171
+
172
+ § 5 STUDY 1 RESULTS
173
+
174
+ We divide our results into 4 areas: User Interaction Strategies, Efficiency Measurements, Survey Questions, and Scrolling Time.
175
+
176
+ § 5.1 USER INTERACTION STRATEGIES
177
+
178
+ Our observations of user behaviors with 1D and 2D layouts, divided by task notebook, are summarized here.
179
+
180
+ § 5.1.1 FINDING & COMPARING RESULTS TASK
181
+
182
+ In 1D, all users started by scrolling down through the notebook to answer the Find question (which state's section was between two other states' sections) until they found the answer. Then, they scrolled through Sections 5 through 9 to answer the graphical Comparison question (which county in which state had the highest value for a particular variable) and compared the bar chart results and axes, which was sufficient to find the highest value. Some users, because they forgot a previous value or wanted to verify their memory, would scroll back to earlier results, sometimes multiple times, before submitting an answer. A couple of users took notes on paper to avoid this issue. For the numerical comparison question, users repeated the process used for the first Comparison question, this time across Sections 4 through 9.
183
+
184
+ In 2D, all users started by scrolling to the right to answer the Find question. Since the columns for the relevant sections (4-9) were fairly well aligned, as seen in Figure 1, this mitigated the need for vertical scrolling except between questions. Users scrolled less distance in 2D due to more efficient use of space, with 1 column representing 1 section. Then, to answer the 2 Comparison questions, all users used physical navigation (e.g. head movement) with less scrolling needed, since the screen could show 4 columns. The efficient, well-organized use of 2D also led users to perform less backtracking, if any, and eliminated the need to take notes on paper.
185
+
186
+ § 5.1.2 PARAMETER TUNING TASK
187
+
188
+ In 1D, all users repeatedly scrolled up and down to get results for different parameter combinations (k-value and distance metric). Sometimes users scrolled past the cells they were looking for and thus did additional scrolling to correct their focus. All users took notes on paper so they could remember and compare results.
189
+
190
+ In 2D, much smaller scrolls were needed to get from the first column, where the main parameter was, to the fourth column, where results were calculated. Given the much smaller scrolling distance, scrolls were quicker and did not overshoot nearly as often. All users took notes on paper in 2D as well.
191
+
192
+ § 5.1.3 CODE COMPARISON TASK
193
+
194
+ In 1D, all users scrolled up and down to find code differences in the two different analyses; users examined the code in a cell in the first analysis, then scrolled down to examine the code in the corresponding cell in the second analysis before scrolling back up again to look at the next cell. This process was repeated until all potential differences were checked for. Since users were given a list of potential differences in order of appearance, they knew what to look for; this could have resulted in less forgetting (and thus less re-scrolling) than might otherwise happen.
195
+
196
+ In 2D, the two analyses were nearly horizontally aligned, so all users used physical navigation to find differences instead of virtual navigation; scrolling was used to go further into the notebook rather than to spot differences. As expected, in 2D users scrolled much less than they did in 1D due to the use of physical navigation and externalized memory on the screen.
197
+
198
199
+
200
+ Figure 3: A bar chart showing average time to completion by task and layout in seconds.
201
+
202
+ § 5.2 EFFICIENCY MEASUREMENTS
203
+
204
+ As seen in Table 1 and summarized in Figure 3, we found the layout (1D or 2D) was statistically significant for all tasks except the find task, which may be due to it being a "cold find", one without prior knowledge of the notebook, which fails to make use of the benefits of Space to Think. The interaction between layout and order (1D-first or 2D-first) was significant for the comparison tasks. Some tasks benefited from repetition, improving in the second layout based on order. For the Graph and Number Comparison tasks, 1D seemed to benefit more from order, while 2D performance was more stable.
205
+
206
+ Tukey's test showed the 2D layout resulted in statistically significant improvements to efficiency, summarized in Table 1; these improvements ranged from about 20-50% time reduction. These results likely reflect faster navigation of numerous code cells during the data science tasks when the cells are organized into columns.
207
+
208
+ § 5.3 SURVEY QUESTIONS
209
+
210
+ We divide the survey question results into 3 areas: Post-1D & Post- 2D Questions, Post-Tasks Questions, and Qualitative Comments.
211
+
212
+ Table 2: Post-2D minus Post-1D Average Differences in Rating
+
+ | Question | Mean | Median |
+ | --- | --- | --- |
+ | Easy to Navigate | **1.87** | **2.00** |
+ | Quickly Find Info | **1.80** | **2.00** |
+ | Easy to Compare Graphs | **2.87** | **3.00** |
+ | Easy to Compare Numbers | **2.83** | **3.00** |
+ | Easy to Compare Code | **3.57** | **4.00** |
+
+ Bolded values are statistically significant with a 0.05 threshold for both paired t-test and Wilcoxon. Positive values indicate 2D is considered better.
236
+
237
+ § 5.3.1 POST-1D & POST-2D QUESTIONS
238
+
239
+ As seen in the bar chart in Figure 4, the heatmap in Figure 5, and the results of Table 2, user impressions of the usability of 2D layouts were significantly more positive than of the 1D layouts on all metrics. Users rated 2D approximately 2-4 points higher (on a 7-point Likert scale) than 1D on each metric. Users were nearly unanimously positive in rating 2D, and more evenly divided between positive and negative for 1D. Two participants gave the three negative ratings for 2D in Figure 5; one saw clutter in 2D notebooks as a potential issue, and the other felt the 2D notebooks could be improved by snapping cells next to each other to ensure proper alignment of related cells.
240
+
241
+ Interestingly, as seen in Figure 4, participants exposed to the 2D layout before the 1D layout rated the usability of the 1D layout significantly worse. Thus, exposure to the 2D layout appears to make the 1D layout seem less usable.
242
+
243
244
+
245
+ Figure 4: A bar chart comparing the mean ratings for the Post-1D and Post-2D questions; positive values indicate agreement with the sentiment, while negative values indicate disagreement.
246
+
247
+ | Layout | Question | Strongly Agree | Agree | Agree a little | Neutral | Disagree a little | Disagree | Strongly Disagree |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | 1D | Easy to Navigate | 4 | 8 | 6 | 1 | 7 | 3 | 1 |
+ | 1D | Quickly Find Info | 4 | 5 | 7 | 6 | 5 | 2 | 1 |
+ | 1D | Easy to Compare Graphs | 3 | 4 | 5 | 1 | 5 | 10 | 2 |
+ | 1D | Easy to Compare Numbers | 1 | 7 | 3 | 0 | 8 | 7 | 4 |
+ | 1D | Easy to Compare Code | 0 | 4 | 5 | 1 | 5 | 8 | 7 |
+ | 2D | Easy to Navigate | 18 | 8 | 4 | 0 | 0 | 0 | 0 |
+ | 2D | Quickly Find Info | 16 | 13 | 0 | 0 | 0 | 0 | 1 |
+ | 2D | Easy to Compare Graphs | 19 | 9 | 2 | 0 | 0 | 0 | 0 |
+ | 2D | Easy to Compare Numbers | 19 | 6 | 4 | 0 | 0 | 1 | 0 |
+ | 2D | Easy to Compare Code | 24 | 4 | 0 | 1 | 0 | 1 | 0 |
+
+ Figure 5: A heatmap comparing the ratings for the Post-1D and Post-2D questions.
287
+
288
+ § 5.3.2 POST-EXPERIMENT QUESTIONS
289
+
290
+ As seen in the Figure 6 heatmap, when explicitly asked to compare their experiences with the two layouts, participants overwhelmingly viewed 2D as more effective for common data science tasks, especially comparisons, and felt the 2D layout improved their performance. They also agreed that 2D made better use of screen space, and that this was key to their success. Furthermore, most participants seemed interested in using 2D layouts instead of 1D layouts, with only one participant expressing neutrality.
291
+
292
+ One curious result is that participants expressed skepticism about 2D layouts being better for presenting computational narratives and collaborating with others. Harden et al. [9] found the opposite; debugging, analysis and development, and navigation were seen as weaknesses of 2D layouts, while presentation and collaboration were seen as strengths. This difference may be due to the tasks that users performed in each study; presentation was key for Harden et al. [9], whereas debugging and comparison were key in this study.
293
+
294
+ | Category | Question | Strongly Agree | Agree | Agree a little | Neutral | Disagree a little | Disagree | Strongly Disagree |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | 2D Better than 1D at Task | Navigate | 15 | 7 | 2 | 4 | 2 | 0 | 0 |
+ | 2D Better than 1D at Task | Locate Items | 17 | 8 | 4 | 1 | 0 | 0 | 0 |
+ | 2D Better than 1D at Task | Organize & Clean | 10 | 10 | 5 | 3 | 2 | 0 | 0 |
+ | 2D Better than 1D at Task | Present | 9 | 7 | 3 | 7 | 3 | 1 | 0 |
+ | 2D Better than 1D at Task | Explore & Prep Data | 12 | 12 | 5 | 0 | 1 | 0 | 0 |
+ | 2D Better than 1D at Task | Analyze & Develop | 14 | 9 | 1 | 5 | 1 | 0 | 0 |
+ | 2D Better than 1D at Task | Debug code | 12 | 11 | 4 | 1 | 2 | 0 | 0 |
+ | 2D Better than 1D at Task | Compare results | 27 | 3 | 0 | 0 | 0 | 0 | 0 |
+ | 2D Better than 1D at Task | Collaborate | 8 | 8 | 3 | 9 | 2 | 0 | 0 |
+ | Statements about 2D Layout | 2D Spatial Layout Improved Performance | 19 | 7 | 4 | 0 | 0 | 0 | 0 |
+ | Statements about 2D Layout | More Cells on Screen in 2D Improved Performance | 18 | 8 | 3 | 0 | 1 | 0 | 0 |
+ | Statements about 2D Layout | 2D Layouts Better Used Screen Space | 21 | 6 | 3 | 0 | 0 | 0 | 0 |
+ | Statements about 2D Layout | Would Use 2D instead of 1D | 17 | 10 | 2 | 1 | 0 | 0 | 0 |
+
+ Figure 6: A heatmap visualizing the ratings for the Post-Tasks questions.
343
+
344
+ Table 3: Qualitative Themes in Study 1 Survey
+
+ | Theme | Sample Quote | Number of Participants |
+ | --- | --- | --- |
+ | Positive Comments on 2D | X | 20 |
+ | Better Comparison in 2D | "The 2d layout seems like a solid choice for a lot of analysis applications where you want to do similar but slightly different processes and compare the results." | 7 |
+ | Better Navigation in 2D | "I liked having no more scrolling! It was more intuitive and easier to compare side-by-side sections compared to having to scroll so much. I spent so much time scrolling in the 1D notebook that I forgot what I had looked at previously." | 6 |
+ | Practice with 2D Would Help Improve Performance | "This was my first experience with 2D notebooks after extensive use of 1D notebooks, so the advantages would be compounded given more time to familiarize myself." | 3 |
+ | 2D is Better Than 1D | "There is no reason anybody should be using 1D anymore." | 2 |
+ | Other | "I really found it easy to look through all the results from different experiments. We always have to run multiple iteration with different parameter to calculate results and so 2D makes it very easy to see our progress in the notebook and also can be easily inferred." | 2 |
+ | Thoughtful Feedback on 2D | X | 6 |
+ | Column Width & Amount | "Putting too many columns in one screen caused little confusions and potentially increase the number of scrolls." | 2 |
+ | Arrow Key Navigation | "I found the 2D notebooks were more quick to navigate, but it was easier to navigate the 1D notebook using keys rather than the mouse, which might have been a little bit faster." | 1 |
+ | Cluttering Screen Space | "I believe one of the only things I might do in a 2D notebook that wouldn't be as easy would be displaying some visuals, as the layout would make them smaller, along with the text. Also having two visuals right next to each other might be seen as cluttered." | 1 |
+ | Use with Lower Resolutions | "The 2D notebooks were definitely easier to use, but for some tasks/cases (such as presenting on a monitor which may be low-resolution, or collaborating with a colleague who has a low-resolution monitor) that might change." | 1 |
+ | Setup Time | "The only downside I could see is it taking slightly more time to initially set up but other than that it seems like a good option to have." | 1 |
+ | Skepticism about 2D | X | 2 |
+ | Presentation Skepticism | "[1D] looks more clean if you were to present something to another person." | 1 |
+ | Debugging & Dev Skepticism | "For development and collaboration the linear 1d notebook would be easier to debug." | 1 |
396
+
397
+ Table 4: Scroll Event Analysis Totals Across All Participants
+
+ | Measure | 1D Layout | 2D Layout |
+ | --- | --- | --- |
+ | Sum of Scroll Event Times | 2071 seconds | 561 seconds |
+ | Count of Scroll Events | 410 events | 195 events |
+ | Mean Time per Scroll Event | 5.05 seconds | 2.88 seconds |
+ | Median Time per Scroll Event | 4 seconds | 2 seconds |
416
+
417
+ § 5.3.3 QUALITATIVE COMMENTS
418
+
419
+ Of the 27 participants who left a qualitative comment on the survey, 20 expressed positivity about 2D layouts, while only 2 expressed that they might still prefer 1D notebooks for any task. 2 participants went so far as to express sentiments suggesting that 2D layouts make 1D obsolete. 6 participants also left thoughtful feedback that may inform the design of future 2D computational notebooks. Several comments pointed out the link between memory and navigation: more time spent scrolling in 1D led to more forgetting of important information for the task. The results are summarized in Table 3 with the themes found, a sample quote for each sub-theme, and the number of comments matching the theme.
420
+
421
+ § 5.4 SCROLLING TIME
422
+
423
+ For the code comparison task, we found participants scrolled more times and spent more time scrolling in the 1D layout, as seen in Table 4. Each scroll event in 1D also tended to be longer than those in 2D. Given the differences in typical user interactions described earlier, specifically the elimination of the need to scroll for comparison and the reduction of scrolling distances, it makes sense that the 2D layout would involve much less scrolling time and far fewer scrolling events for the Code Comparison task; multi-column layouts bring cells nearer to each other and fit more cells on the screen simultaneously. This supports the view that reducing scroll navigation is an important factor behind the faster performance results of 2D.
424
+
425
+ § 6 STUDY 2 METHODOLOGY
426
+
427
+ The second study focused on understanding how users would utilize the 2D space when starting nearly from scratch, as well as evaluating the longitudinal usability of the 2D Jupyter extension for writing code in a more ecologically valid setting. It consisted of a main task, an interview, and a survey.
428
+
429
+ § 6.1 RECRUITMENT
430
+
431
+ Participants were recruited from undergraduate and graduate computer science classes at a large state university, and were invited to participate if they had prior experience using Python and computational notebooks. In total, 9 participants completed the study.
432
+
433
+ § 6.2 HARDWARE USED IN STUDY
434
+
435
+ For this study, participants used their personal computers to complete the task. Most participants conducted the task on a laptop using the laptop display and built-in trackpad. Two participants connected their laptops to a 64-inch 4K monitor and extended their displays to the larger screen, but continued to use the laptop's built-in trackpad for navigation and scrolling.
436
+
437
+ § 6.3 TASK DESIGN
438
+
439
+ We designed a data analysis task that would allow participants to utilize all of the 2D Jupyter extension's features. Participants were given a Jupyter notebook file containing task instructions, initial library imports, and loading of two datasets: a COVID dataset containing the number of cases and deaths in each county in the US, and a demographics dataset containing the population of each US county as of the 2020 census.
440
+
441
+ § 6.3.1 ORIGINAL TASK
442
+
443
+ The first five participants were instructed to use 2D Jupyter to answer the following questions:
444
+
445
+ 1. How do the most recent deaths per case compare between all the counties of Virginia?
446
+
447
+ 2. Do the deaths per case in each county of Virginia correlate to the population density?
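+ One possible pandas approach to these two questions is sketched below; the file names and column names are hypothetical, standing in for the datasets actually bundled with the notebook.
+
+ ```python
+ # Sketch: deaths per case across Virginia counties, and its relation to population.
+ import pandas as pd
+
+ covid = pd.read_csv("us_county_covid.csv")           # state, county, cases, deaths
+ population = pd.read_csv("us_county_pop_2020.csv")   # state, county, population
+
+ va = covid[covid["state"] == "Virginia"].copy()
+ va["deaths_per_case"] = va["deaths"] / va["cases"]
+
+ # Q1: compare the most recent deaths per case between counties.
+ print(va.sort_values("deaths_per_case", ascending=False)[["county", "deaths_per_case"]])
+
+ # Q2: correlation with population (a proxy for density; area data would be needed for true density).
+ va = va.merge(population, on=["state", "county"])
+ print(va["deaths_per_case"].corr(va["population"]))
+ ```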
448
+
449
+ § 6.3.2 MODIFIED TASK
450
+
451
+ For the remaining four participants, the data analysis task was modified to introduce more complexity and encourage more flexibility in the use of the 2D environment. These participants were given the following instructions:
452
+
453
+ You are being asked to analyze COVID data for three states - Virginia, Texas and Illinois. Prepare a notebook for presentation with the following:
454
+
455
+ 1. Create the following charts for each state:
456
+
457
+ * Bar chart showing top 10 counties with highest cases
458
+
459
+ * Bar chart showing top 10 counties with highest deaths
460
+
461
+ * Bar chart showing top 10 counties with highest deaths per case
462
+
463
+ * Scatterplot showing the correlation between cases and deaths by county (include the correlation coefficient)
464
+
465
+ * Bar chart showing top 10 counties with the highest number of cases relative to population
466
+
467
+ * Bar chart showing top 10 counties with the highest number of deaths relative to population.
468
+
469
+ 2. Using only the charts you have created in part 1 answer the following questions:
470
+
471
+ (a) Which state had the county with the highest number of cases?
472
+
473
+ (b) Which state had the county with the highest number of deaths?
474
+
475
+ (c) Which state had the county with the highest number of deaths per case?
476
+
477
+ (d) Which state had the highest correlation between cases and deaths?
478
+
479
+ (e) Which state had the county with the highest number of cases relative to population density?
480
+
481
+ (f) Which state had the county with the highest number of deaths relative to population density?
482
+
483
+ (g) How many counties with the top 10 deaths relative to population were also in the top 10 deaths per case for that state?
484
+
485
+ 3. Prepare the notebook for presentation of your findings.
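+ As an illustration of the kind of cell this task calls for, here is a sketch of one of the part 1 charts; the file and column names are hypothetical.
+
+ ```python
+ # Sketch: top 10 Texas counties by cases, as a bar chart.
+ import pandas as pd
+ import matplotlib.pyplot as plt
+
+ covid = pd.read_csv("us_county_covid.csv")  # state, county, cases, deaths
+
+ top10 = covid[covid["state"] == "Texas"].nlargest(10, "cases")
+ top10.plot.barh(x="county", y="cases", title="Texas: top 10 counties by cases")
+ plt.tight_layout()
+ plt.show()
+ ```
+
+ In a multi-column layout, the six charts for one state can live in that state's column, so the part 2 questions reduce to glancing across columns.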
486
+
487
+ For all participants, an initial meeting was scheduled to give an overview of the 2D Jupyter extension and to go over the data analysis task. Each participant was allowed to complete the task at their own pace over the course of 2 weeks. An interview session was scheduled after each participant had completed the task, in which they were asked questions about their experiences using 2D Jupyter. At the end of the interview, each person was asked to complete a survey.
488
+
489
+ § 6.4 INTERVIEW AND SURVEY QUESTIONS
490
+
491
+ After completing the data analysis tasks, participants were interviewed about their experience using 2D Jupyter. Interview questions were focused on understanding how the participant used the 2D layout and what features they utilized. Additionally, the participant was asked to share their opinion on any advantages or disadvantages that 2D notebooks had as compared to traditional 1D notebooks. The questions asked during the interview included:
492
+
493
+ 1. What was your overall strategy for using the 2D environment?
494
+
495
+ 2. What features of the 2D notebook did you utilize?
496
+
497
+ 3. Are there any features that you wish you had?
498
+
499
+ 4. Were there any difficulties in using the 2D notebook during your data analysis?
500
+
501
+ 5. Did the 2D environment provide any advantages for this task as compared to a 1D notebook?
502
+
503
+ 6. Did the 2D environment provide any disadvantages for this task as compared to a 1D notebook?
504
+
505
+ A survey was also given to participants after the interview session; it consisted mainly of Likert-scale questions. The first four questions of the survey focused on the benefits of a 2D layout in completing the main parts of a data analysis task. The next three questions evaluated the usability of 2D Jupyter. Finally, the survey included two short-answer questions to allow users to provide any suggestions and comments they had regarding their overall experience.
506
+
507
+ § 7 STUDY 2 RESULTS
508
+
509
+ Results of this study are primarily qualitative. A summary of these results can be found in Table 5 with the common themes, a sample quote for each theme, and the number of participants who made comments matching the theme.
510
+
511
+ § 7.1 STRATEGIES FOR USING THE 2D ENVIRONMENT
512
+
513
+ For the original data analysis task, we found two main strategies for using 2D space. The first strategy, which 3 participants used, was to use a separate column for each question they were asked to answer. Each column contained the entirety of the analysis needed to answer the question, with the exception of one participant who used two columns to answer the second question to reduce the amount of vertical scrolling needed to view the entire notebook. The second strategy, used by one participant, was to use the columns to separate the steps of the data science workflow, such as data pre-processing, data exploration, and so on. Each column was treated as a new section of the overall notebook.
514
+
515
+ For the modified data analysis task, each participant had a different strategy for using the 2D space. One participant used the columns as sections of their notebook, creating a new column when they began working on a new data science subtask. Another participant used columns to reduce scrolling and only created a new column when the vertical length of the page became too long. One participant used a single column of cells alongside a single markdown scratch cell containing the task instructions; they used the freeform cell placement ability to move the markdown cell down the page as the page became longer. The last participant created only two columns and placed cells side by side when they wanted to reference code for reuse or to compare visualizations.
516
+
517
+ Finally, one participant used 2D Jupyter for their own project rather than the given data analysis task. This participant was a student in an artificial intelligence class and was working on a project to build their own AI model that could play a game. This participant primarily used the freeform scratch cell feature of the extension to test parameters for their model, rather than using multiple columns.
518
+
519
+ Table 5: Qualitative Themes in Study 2 Interviews and Survey
+
+ | Theme | Sample Quote | Number of Participants |
+ | --- | --- | --- |
+ | Advantages of 2D | X | 5 |
+ | Better navigation | "It was easier for me to find the exact cell that I was looking for." | 2 |
+ | Better organization | "I don't know that there's any extra challenges from a 2D environment...I think it's strictly better organizationally" | 1 |
+ | Ease of comparisons | "...when I have to compare two data frames...side by side that's really useful." | 2 |
+ | Disadvantages of 2D | X | 2 |
+ | Viewing on Small Screens | "...the major disadvantage is all the [horizontal] scrolling that you have to do." | 1 |
+ | Cluttered Look | "...it can look kind of cluttered sometimes, like it can be maybe a little overwhelming..." | 1 |
+ | Usability Feedback | X | 8 |
+ | Column Resizing | "...if it was possible to resize it directly from [the middle of the column] instead of having to go up and resize, that would be good." | 2 |
+ | Column Scrolling | "I would like each columns to have their own [independent] scrolling area" | 1 |
+ | Easy to Learn | "...after that small little learning curve, I think everything else was...super straightforward" | 2 |
+ | Opportunities | "I don't see there being like any sort of disadvantage or any type of limitation that 2D has compared to 1D. If anything...the opportunities are endless." | 3 |
562
+
563
+ Table 6: Number of Columns Used by Participants
+
+ | Number of Columns Used | Number of Participants |
+ | --- | --- |
+ | 1 | 2 |
+ | 2 | 1 |
+ | 3 | 2 |
+ | 4 | 2 |
+ | 6 | 1 (4K screen) |
+ | 10 | 1 (4K screen) |
588
+
589
+ Table 6 shows the number of columns used by participants. 2 participants created 1 column of cells alongside a freeform scratch cell that they moved around the notebook outside the column as they worked. One participant used 2 columns, primarily using the second column to place cells side-by-side for referencing code or comparing visualizations. Most participants used 3-4 columns to complete the data analysis task. The 2 participants who used the large 4K display created the most columns, using 6 and 10 columns in their completed notebooks.
590
+
591
+ § 7.2 ADVANTAGES OF 2D OVER 1D
592
+
593
+ Participants found 2D notebooks had several advantages over 1D notebooks. 5 participants noted that referencing other cells was easier in the 2D environment and reduced the amount of scrolling needed while developing the notebook. Additionally, they found comparing data or charts easier in the 2D environment. 3 participants said the 2D environment made it easier to keep track of cells. 2 participants liked that they could view more of their code at once.
594
+
595
+ § 7.3 DISADVANTAGES OF 2D COMPARED TO 1D
596
+
597
+ Participants also found several disadvantages of 2D notebooks in comparison to 1D. 2 participants noted that smaller screens may make it more difficult to navigate the notebook if there are several columns, due to requiring both vertical and horizontal scrolling to access cells or requiring horizontal scrolling of code in very narrow columns. 2 participants suggested that a larger notebook requires the user to maintain a good mental map of their layout in order to not get lost while navigating the 2D environment. One participant expressed initial confusion as to whether separate columns were operating with separate kernels. Finally, one participant noted the extension does not support exporting the 2D layout to another file format, such as HTML or PDF, making sharing 2D notebooks difficult.
598
+
599
+ § 7.4 SUGGESTIONS AND IMPROVEMENTS
600
+
601
+ Participants had the opportunity throughout the study to provide comments and suggestions on the 2D extension. Primarily, participants wanted shortcut access to the new toolbar controls. For example, multiple participants wanted to add code cells from anywhere in the notebook, without needing to use the toolbars at the top of the columns. Other participants wanted to resize the columns without needing to scroll to the top of the column to find the resize controller. Two participants suggested adding the ability to independently vertically scroll through a column while keeping the rest of the notebook static. One participant wanted to be able to concurrently run cells placed side-by-side without having to run each cell individually.
602
+
603
+ Several participants found that orienting themselves in 2D space was somewhat challenging and provided suggestions for improvement. One participant suggested adding a mini-map at the bottom corner of the screen to show their location within the overall notebook. Other participants suggested labeling each cell with a row and a column number, similar to how Excel spreadsheet cells are labeled.
604
+
605
+ § 7.5 SURVEY QUESTIONS
606
+
607
+ All 9 participants in Study 2 were asked to complete the survey, but 2 participants skipped the questions in their responses, resulting in 7 total responses. The results of this survey are shown in Figure 7.
608
+
609
+ The heatmap shows participants generally viewed 2D notebooks positively. When asked if the 2D layout was beneficial in completing common data analysis tasks, most participants agreed or strongly agreed with the statements. In terms of usability, all participants agreed with the statement that it was easy to understand how to use the 2D extension. Additionally, most participants agreed that it was easy to navigate in the 2D layout. When asked if they would prefer using the 2D extension over the traditional 1D environment, all participants were either neutral or agreed with the statement.
610
+
611
+ § 8 DISCUSSION
612
+
613
+ § 8.1 TASK EFFICIENCY BENEFITS
614
+
615
+ 2D computational notebook layouts provide benefits to task efficiency by reducing the amount of scrolling necessary and shortening the length of needed scrolls. As seen in Study 1, 2D layouts provided statistically significant reductions in time to completion overall, both when 1D came first and when 2D came first. The lack of statistical significance for the comparison tasks when 2D was first, and when comparing the second layout in each condition, suggests that the practice effect from doing similar tasks again in the second layout masked the effect of the layout. Given how much less scrolling was done in the 2D layouts in terms of total scrolling time, number of scrolling events, and average scrolling time, per Study 1's Scrolling Time analysis, combined with the time-to-completion results, 2D layouts clearly provide benefits to efficiency.
616
+
617
+ | Statement | Strongly Agree | Agree | Neither agree nor disagree | Disagree | Strongly Disagree |
+ | --- | --- | --- | --- | --- | --- |
+ | I found the use of the 2D extension to be beneficial in organizing my notebook. | 2 | 3 | 1 | 0 | 1 |
+ | I found the use of the 2D extension to be beneficial in data processing. | 1 | 2 | 3 | 0 | 1 |
+ | I found the use of the 2D extension to be beneficial in creating visualizations. | 3 | 2 | 1 | 0 | 1 |
+ | I found the use of the 2D extension to be beneficial in debugging my code. | 0 | 4 | 2 | 0 | 1 |
+ | It was easy to understand how to use the 2D extension. | 2 | 5 | 0 | 0 | 0 |
+ | It was easy to navigate in the 2D notebook. | 0 | 5 | 1 | 1 | 0 |
+ | I prefer using the 2D notebook environment over the traditional notebook environment. | 0 | 3 | 4 | 0 | 0 |
+
+ Figure 7: Survey results from Study 2
645
+
646
+ The reduced scrolling is a result of 2D's ability to bring more cells nearer to each other. Theoretically, 2D can reduce navigation distances to roughly the square root of the corresponding 1D distances. Practically, 2D enabled non-linear code structures, such as parallel analyses, to be horizontally aligned in columns, thus supporting common data-science tasks such as comparison. 2D enabled more such relationships to be encoded into the space. In contrast, 1D encodes only a single ordering, and would require complex refactoring tools to enable various types of parallel analyses and comparisons.
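+ A back-of-the-envelope bound makes this concrete (our simplification, assuming $n$ uniformly sized cells and distance measured in cells traversed): a single column has worst-case traversal distance $d_{1D} = O(n)$, while $c$ columns of $n/c$ cells give
+
+ $$d_{2D} = O\left(c + \frac{n}{c}\right) = O(\sqrt{n}) \quad \text{for } c \approx \sqrt{n}.$$
+
+ For example, 100 cells may require scrolling past up to 99 cells in 1D, but at most about 18 cells of combined horizontal and vertical movement in a 10 by 10 arrangement.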
647
+
648
+ § 8.2 USABILITY BENEFITS
649
+
650
+ 2D layouts appear more usable for certain basic and more complex tasks. Based on the results from Study 1 as seen in Figures 5 and 6, navigating and finding information, comparing results, and data science tasks such as organizing and cleaning may be easier in a 2D notebook. This may be due to more effective use of screen space to display more information at once in an organized manner, along with more efficient scrolling options.
651
+
652
+ In Study 2, several participants found the 2D environment provided an advantage in locating code or data. The ability to break up the notebook into distinct sections meant they did not have to first search for a section of their notebook and then search for the information they needed within the section; the multi-column layout enabled users to more easily find what they were looking for, since they could instantly identify the section of the notebook they needed.
653
+
654
+ Additionally, participants found that the 2D environment made it more convenient to refer to other cells. In the 1D environment, users would need to move two cells close to each other in order to easily compare the contents, often disrupting the organization of the notebook. In the 2D environment, participants were able to maintain the organization of the cells in their respective sections, while still being able to place cells next to each other for ease of comparison.
655
+
656
+ § 8.3 EFFECTS OF HARDWARE ON 2D COMPUTATIONAL NOTEBOOKS
657
+
658
+ Different setups, especially in the second study, made for different experiences with 2D Jupyter. Specifically, both screen size and scrolling device (e.g. trackpad vs. mouse) affected usability. Larger screen sizes afforded the ability to visualize more columns at once and to better ensure those columns were sufficiently wide for the code. This enabled physical navigation more effectively than smaller screens and thus led to less scrolling. Furthermore, scrolling devices which easily enable horizontal scrolling through simple gestures, such as trackpads, appeared to provide a better user experience with the 2D layout than a standard mouse or vertical scroll wheel.
659
+
660
+ There is a tradeoff between vertical and horizontal scrolling. Using more columns reduces vertical scrolling, but increases horizontal scrolling. On very small displays, some users in Study 2 indicated that intensive use of both types of scrolling may be worse than vertical scrolling alone. However, large widescreen displays, increasingly common in data-science workspaces, mitigate this tradeoff by minimizing the horizontal scrolling needed to traverse the notebook, enabling multiple columns to greatly reduce vertical scrolling. Even with a modest 24-inch display, as in Study 1, the benefit was significant, and it would likely increase with larger displays.
661
+
662
+ § 8.4 DESIGN CHALLENGES & OPPORTUNITIES
663
+
664
+ While 2D computational notebooks may provide efficiency and usability benefits, especially with the right setup, there is still room for improvement on their design.
665
+
666
+ Column width in the many-column design pattern may impact user experience; if the columns are too wide, fewer columns will fit on the screen, but if the columns are too narrow, visuals may become too small to easily read and the screen may feel cluttered, potentially leading to confusion that affects performance. Thus, managing column width becomes an important factor; this is currently doable in 2D Jupyter through manual resizing of columns. Still, it may be beneficial to provide functionality that resizes columns to an ideal width through a quick interaction, as is done in spreadsheets.
667
+
668
+ Additional navigation options tailored to different 2D layouts may also benefit users. Navigating 1D computational notebooks with arrow keys can be quicker than navigating with manual scrolls, and the same may apply to 2D computational notebooks; the challenge is whether and how to incorporate the left and right arrow keys (or even diagonals) to quickly navigate. One option is to borrow the spreadsheet metaphor and have each arrow key move to the adjacent cell in the direction of the key. Making individual columns independently scrollable may also benefit navigation, especially when working on smaller screens. This would allow longer columns to be scrolled without impacting the view of shorter columns.
669
+
670
+ § 8.5 LIMITATIONS
671
+
672
+ § 8.5.1 BUGS IN EXTENSION
673
+
674
+ At the time of conducting both studies, the 2D Jupyter extension contained some bugs that could affect user experience. In particular, the drag and drop feature occasionally did not allow the user to release the cell at an intended location, forcing the user to reload the page. Additionally, the layout of the 2D environment was sometimes not properly saved between kernel sessions, requiring the user to reorganize their notebook before resuming work. Finally, the extension required users to manually save their work as the autosave feature built into Jupyter Notebooks did not work with the extension. These bugs did not affect Study 1 except for contributing to the technical issues that led to discarding one participant's data.
675
+
676
+ § 8.5.2 2D LAYOUTS OTHER THAN MULTI-COLUMN
677
+
678
+ Given that both studies used an extension which does not, at the time of this writing, fully support 2D layouts other than multi-column, care must be taken in assigning benefits to other 2D layouts. Some of the advantages of the multi-column layout may be due to how compact it is; less compact 2D layouts might not see the same level of benefits in some areas, like reduced scrolling and task efficiency. Evaluating other 2D layouts is a subject for future work.
679
+
680
+ § 9 CONCLUSION
681
+
682
+ Computational notebooks are a potent tool for creating and presenting computational narratives; the current 1D layout of notebooks, while elegant in its simplicity, imposes certain limitations that make comparative analyses and navigating longer non-linear notebooks, among other tasks, more difficult. Thus, we developed and evaluated the potential of 2D layouts for computational notebooks, starting with the multi-column layout enabled in our 2D Jupyter extension.
683
+
684
+ The multi-column 2D layout provides benefits in efficiency and usability for common data science tasks such as comparative analyses by enabling greater physical navigation, thus minimizing the scope and need for virtual navigation (scrolling). In addition, the multi-column layout provides an effective sectioning mechanism that may help combat messiness along with providing more efficient navigation. Overall, 2D layouts have the potential to improve upon the current state of computational notebooks and provide a novel way to enhance the creation and presentation of non-linear computational narratives through enabling Space to Think.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/N0RiLoidWE/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,377 @@
1
+ # Attach That There: Investigating 3D Virtual Assembly Assistants That Point Into the Real World
2
+
3
+ Category: Research
4
+
5
+ ![01963dfe-8c41-7700-b943-6b2554cf9f14_0_218_383_1360_732_0.jpg](images/01963dfe-8c41-7700-b943-6b2554cf9f14_0_218_383_1360_732_0.jpg)
6
+
7
+ Figure 1: Assembly assistant pointing to real world targets from within a spherical FTVR display.
8
+
9
+ ## Abstract
10
+
11
+ Gestures are a fundamental part of human communication. However, commonly used voice assistants do not exploit the advantages of human-like nonverbal communication. We present an Embodied Conversational Agent (ECA) with the ability to explain assembly steps and point to indicate real-world targets. To enable accurate pointing into the real world, we implemented our ECA in a spherical Fish Tank Virtual Reality (FTVR) display. We evaluated the effect of a pointing ECA on the performance and experience in an assembly scenario, as well as investigated whether spherical FTVR displays provide an advantage over 2-dimensional (2D) flat displays. Results show that, while the spherical FTVR was preferred in all conditions, pointing to real pieces did not reduce assembly time or errors compared to showing virtual pieces by holding them up. Based on our findings, we provide design insights and research directions for ECAs with pointing gestures in an assembly scenario.
12
+
13
+ Index Terms: Human-centered computing - Human computer interaction (HCI) - Interaction paradigms - Virtual reality; Human-centered computing - Human computer interaction (HCI) - Interaction paradigms - Pointing
14
+
15
+ ## 1 INTRODUCTION
16
+
17
+ Spherical Fish Tank Virtual Reality Displays (FTVR) offer unique opportunities for interactions. While conventional Virtual Reality (VR) displays only support interactions in the virtual world, FTVR displays are non-immersive. Thus, they allow for pointing from within the display to the real space surrounding it, which makes them particularly suitable for implementing 3D Embodied Conversational Agents (ECAs).
18
+
19
+ Through their embodiment, ECAs have the ability to provide additional human-like nonverbal cues, such as gestures [10]. Deictic gestures, which accompany speech, are a common method to indicate objects and guide attention to them by substituting linguistic expressions with a pointing gesture [21]. This is particularly helpful in collaboration scenarios, where establishing a mutual understanding is essential for successful communication [14]. Deictic gestures are used, for example, when indicating the position of an object in the room with the answer "it is over there" accompanied by a pointing gesture, instead of describing the location of the object in detail.
20
+
21
+ Deictic pointing can not only enhance interaction in reality, it can also improve interaction with ECAs, as it allows users or conversational agents to indicate objects they are talking about. Previous work has shown that a feature description accompanied by a deictic gesture increases accuracy in identifying a target [4]. Moreover, pointing gestures simplify the language dialog by allowing for simpler and shorter descriptions, and therefore enable references in situations where descriptions alone would not be possible (e.g. when multiple similar objects are present) [21].
22
+
23
+ We believe that an ECA with the ability to point into the real world would leverage the multi-modality of human communication [32], and therefore enable more natural human-agent interactions. While there are many studies on how humans perceive and use gestures, this knowledge cannot be directly applied to ECAs, as there is a difference between how humans use and interpret pointing gestures [5]. Previous studies found that it is possible to implement an ECA to point into the real world with a similar or higher accuracy than a real person [35]. However, the effect of an ECA using pointing into the real world accompanied by verbal cues on the interaction experience has not been studied yet.
24
+
25
+ In this study we investigate how ECAs with pointing gestures influence the interaction experience and performance in an assembly scenario. For this purpose, we implemented an ECA with the ability to guide users through assembly steps using voice instructions and gestures. To enable pointing from within virtuality into the real world, we use a spherical FTVR display for our ECA. Our spherical FTVR display adapts content to the user's viewpoint by rendering perspective-corrected views and providing motion parallax as well as stereoscopic cues to improve depth and size perception [20].
26
+
27
+ Contributions: 1) We created a novel virtual assistant with the ability to point into the real world, which can be modified and used in other pointing-related AR/VR/XR scenarios. 2) We evaluated assembly time, errors, and user preference of different display forms and ECA gestures (see Section 4). Our results show that a spherical FTVR display is preferred over a flat 2D display for an ECA. 3) Based on our study, we provide design insights and future research directions for designing ECAs with pointing gestures in assembly scenarios.
28
+
29
## 2 RELATED WORK

While deictic gestures are one of the most commonly used forms of non-verbal communication, implementing them for ECAs poses some challenges. In the following, we first provide an overview of deictic pointing in human communication. Afterwards, we discuss work on how deictic gestures can be implemented in ECAs and the advantages spherical FTVR displays provide.

### 2.1 Deictic Pointing Gestures

In their everyday life, humans use deictic pointing gestures when they indicate proximal objects by extending their arm and index finger towards a pointing target. Deictic gestures are fundamental for establishing a mutual understanding in communication and help to direct attention to people or objects, especially when the use of speech alone is ambiguous [30, 32]. Thus, deictic gestures are particularly suitable in an assembly scenario, where spatial deixis is important, since they can substitute certain spatial linguistic expressions and indicate objects [15].

How human gestures are interpreted is a key issue in gesture research [19]. Pointing gestures can be divided into proximal and distal pointing [38]. Proximal pointing occurs when the pointer touches the target, while distal pointing occurs when the target is situated too far away and the goal is to locate the target's position in a shared environment [8]. We focus on distant pointing, as our goal is to implement an ECA that assists in an assembly process by pointing at distant pieces. The major challenge of distant pointing is detection accuracy, which quantifies how successfully observers can identify pointing targets. Bangerter and Oppenheimer [5] showed that the bias in pointing target detection was small for both vertical and horizontal pointing, while detection accuracy was lower for peripheral targets than for central ones.

### 2.2 Embodied Conversational Agents (ECAs) with Pointing

ECAs are virtual agents that exhibit conversational behaviors and are human-like in the way they use their bodies in conversations [12]. Cassell [12] defined ECAs as having the ability to recognize, generate, and respond to verbal and non-verbal input, to deal with conversational functions, and to give signals that indicate the state of the conversation and contribute new propositions.

Previous research showed that the presence of an ECA can improve the interaction between the user and the agent and has a positive effect on the retainability of information, independent of the realism of the embodiment [6]. Yee et al. [26] found that agents with a visual representation lead to more positive social interactions than agents without a visual representation. Furthermore, they confirmed previous findings that the degree of realism may matter very little: highly realistic animated faces might appear unnatural or disturbing, in line with Mori et al.'s [25] uncanny valley effect.

In the same way as humans use gestures, gestures can also be implemented in ECAs to enable new interaction possibilities. Previous research showed that the integration of gestures in ECAs influences the ECA's perceived personality [9], helps to achieve a sense of co-presence [3], and improves user perceptions of friendliness and trust [32]. Research in human-robot interaction found that robots using gestures increased user performance while decreasing perceived workload for challenging tasks [24].

### 2.3 ECAs in Different VR/XR Platforms

While considerable research has targeted how gestures in the virtual world are perceived and influence the interaction experience with ECAs, there is only little research on ECAs pointing into the real world. Wu et al. [34] investigated different pointing cues for ECAs pointing into the real world using a spherical FTVR display. Their results show that a combination of head and hand cues yielded the best accuracy, with 82.6% for fine pointing (15°), compared to hand-only or head-only cues. In a second study, Wu et al. demonstrated that an ECA using arm vector pointing can point to a physical location with comparable or even better accuracy than a real person [35]. Unlike humans, who align eye and fingertip when pointing, which yields a perceptual bias [4], ECAs can be implemented with arm vector pointing to improve detection accuracy [35]. Since previous work already showed high pointing accuracy for ECAs in spherical FTVR displays, we are interested in how pointing gestures in combination with verbal cues can help to establish joint attention in a real-world assembly scenario.

An early example of how pointing can be implemented in ECAs is Rea [11], a real estate agent using iconic, metaphoric, and deictic gestures. Rea uses pointing to indicate or emphasize objects in its virtual environment, such as features of homes, either complementary to speech or fully redundantly [10]. Kopp et al. [22] designed Max, a human-size agent for cooperative construction tasks in a Cave Automatic Virtual Environment (CAVE). The agent employs speech, gaze, facial expressions, and gestures to guide the user through construction tasks.

While the previous examples point in the virtual world only, MACK is an example of an ECA in mixed reality. The agent gives location directions and answers questions using a combination of speech, gestures, and pointing into the real world, either to the paper map in front of the user or to its surroundings, to support voice directions [13]. Another example of an agent pointing in MR was presented by Anabuki et al. [1]. They created Welbo, a human-like robot agent that helps users in an MR living room, in which users can interact with objects and simulate virtual furniture in the physical space. Welbo can hold conversations with users and react to their instructions by moving furniture and guiding users with pointing gestures. These examples show the promise that ECAs hold for pointing in MR spaces. However, additional research is needed to examine whether pointing in MR space improves the interaction with ECAs. In our user study, we investigate the interaction with an ECA pointing from within virtuality to reality in an assembly scenario.

A novel approach to guiding attention towards distant objects by using gaze was presented by Otsuki et al. [27]. To support remote collaborative tasks, they created "ThirdEye", a hemispherical display that shows the tracked eye movement of remote participants. In a user study, Otsuki et al. [27] showed that ThirdEye can lead the observer's attention to objects faster than only showing the image of the remote participant's face. The results underline the importance of additional gaze cues for leading attention in remote collaborative tasks. Following this result, we include head rotation towards the target in addition to pointing cues.

All these examples show how ECAs, much like humans, are able to use gestures and gaze to enable more natural interactions and help in completing tasks. With respect to user testing, previous work already showed good pointing accuracy of ECAs in spherical FTVR displays. In reality, however, people do not rely on pointing gestures exclusively [5]. Thus, we evaluated the interaction experience of an ECA with pointing gestures in a real-world assembly scenario in combination with voice.

### 2.4 Assembly Instructions

The most commonly used method for assembly instructions is the traditional paper manual. While a paper manual can show explanations in combination with pictures of the model status for each step, it does not help in identifying similar pieces or in viewing the steps in 3D or from different viewpoints. Previous work has presented different approaches for improving assembly instructions through technology. Two AR approaches were presented by Blattgerste et al. [7]. The first approach displays the 2D images of the paper instructions in the user's field of view. The second approach uses in-situ instructions that overlay a marker for piece identification as well as a virtual model of the piece at the correct assembly position using AR glasses or smartphones. Their study results suggest a combination of in-situ feedback for picking the correct piece and pictorial feedback for assembly. Instead of using in-situ feedback for piece identification, our ECA points to the pieces. Based on the shown importance of pictorial feedback for assembly, we include a 3D model as an assembly aid.

To enable more helpful visual instructions for assembly, Yamaguchi et al. [36] presented a novel approach for generating and visualizing 3D AR tutorials with viewpoint control at runtime. The instructions are shown in an AR "magic mirror" display, which aligns the user's viewpoint of the physical object with the virtual 3D instructions. While the results of their user study did not show significant differences in task completion time or number of errors compared to traditional video tutorials, the AR mirror system led to significantly less mental effort, and subjective results also demonstrated advantages of the system.

Another possibility for guiding people through an assembly process is an agent. As described above, an example is Max, an ECA using pointing and other gestures to indicate virtual pieces and collaborate with users in the assembly of a virtual model in a CAVE [22]. In contrast to Max, our ECA points from within a FTVR display to real pieces with the goal of guiding users through a real-world assembly process. We use an assembly task to investigate our ECA with pointing gestures, since assembly tasks require piece identification, for which pointing is especially helpful. We focus on using pointing gestures for piece identification, but provide a virtual 3D model in front of the avatar as an assembly aid, since pictorial feedback was shown to be most helpful for assembly [7].

## 3 DESIGN FACTORS

This section describes the key aspects of our ECA design and implementation, including display form, appearance, speech, gestures, and the virtual model.

### 3.1 Display Form

We chose a spherical FTVR display for our ECA. Since FTVR displays are situated in MR space, in contrast to immersive Virtual Reality (VR) displays, they enable pointing from within the display to real objects surrounding it. FTVR displays, introduced by Ware et al. [33], have been shown to increase the perceived three-dimensionality of virtual objects. Motion parallax and stereoscopic cues are essential for interpreting pointing gestures; FTVR displays, which provide these cues and create spatial 3D effects by rendering perspective-corrected views, are therefore particularly suitable for pointing [20]. Spherical FTVR displays improve depth and size perception compared to flat FTVR displays and hence are a more suitable form of FTVR for interpreting pointing targets [38]. Since previous research has already compared spherical to flat FTVR displays and shown improved performance, we compare the spherical FTVR display to a traditional flat 2D screen, as used in current state-of-the-art home assistants.

Table 1: Example voice instructions of the showing-pieces and the pointing ECA for both parts of a step: indicating a piece and explaining the assembly.

| | Showing Pieces | Pointing |
| --- | --- | --- |
| Indicate a Piece | "Take this blue screw" | "Take that blue screw" |
| Explain Assembly | "Use it to attach the 2 black connectors to the left yellow tube" | "Use it to attach the 2 black connectors to that yellow tube" |

### 3.2 ECA Appearance

Human-like representations of ECAs are subject to the uncanny valley effect, which occurs when ECAs mimic human features in great detail without fully succeeding, so that they appear unnatural; the effect is even stronger when movement is added [25]. We therefore decided to use a female Japanese cartoon character with human-like traits but a non-human appearance, as suggested by Schneider et al. [28]. The ECA we used has non-human proportions, with big eyes and a small nose and mouth. Considering the limited display size and the fact that our assembly task only requires the upper body to be visible, we followed the suggestion of Yoon et al. [37] to use a half-body avatar. We scaled our upper-body ECA as large as possible to improve gesture perception, while still allowing the arm to be fully extended for pointing on both displays. This matches the use case of ECAs: even though prototypes of life-size displays exist in research [20], in practice the display sizes of home assistants are relatively small.

We implemented an idle animation state that plays in an infinite loop and consists of subtle arm and upper-body movements to make the ECA appear more active and alive [18]. To make our ECA feel more vivid, we added a blinking animation with random blinks at intervals between 3 and 5 s, following the findings of Takashima et al. [29].

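As an illustration of this design choice, a minimal Unity sketch of such randomized blinking is shown below. It assumes the avatar's face mesh exposes a blink blend shape; the class, field names, and timings are illustrative, not taken from our implementation.

```csharp
using System.Collections;
using UnityEngine;

// Triggers a short blink at a random interval between 3 and 5 seconds.
// Assumes the face mesh exposes a blink blend shape (index is illustrative).
public class RandomBlink : MonoBehaviour
{
    public SkinnedMeshRenderer face;    // face mesh with a blink blend shape
    public int blinkShapeIndex = 0;     // index of the blink blend shape
    public float blinkDuration = 0.15f; // seconds for closing or opening

    void Start() => StartCoroutine(BlinkLoop());

    IEnumerator BlinkLoop()
    {
        while (true)
        {
            yield return new WaitForSeconds(Random.Range(3f, 5f));
            yield return Animate(0f, 100f); // close eyelids
            yield return Animate(100f, 0f); // reopen eyelids
        }
    }

    IEnumerator Animate(float from, float to)
    {
        for (float t = 0f; t < blinkDuration; t += Time.deltaTime)
        {
            face.SetBlendShapeWeight(blinkShapeIndex, Mathf.Lerp(from, to, t / blinkDuration));
            yield return null;
        }
        face.SetBlendShapeWeight(blinkShapeIndex, to);
    }
}
```
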
### 3.3 Speech

For the ECA's speech, we used IBM's Watson Text-to-Speech (TTS) to generate verbal instructions for each assembly step from written text. We used the Oculus LipSync asset to match lip movement with the spoken utterances; the asset uses blend shapes included in the avatar model to animate the lips accordingly.

In every assembly step, the ECA first indicates which piece is needed and then describes where the piece has to be attached. In the first part, the ECA broadly describes a piece, accompanied either by a pointing gesture or by holding up a virtual piece. In the second part, the ECA either gives broad verbal cues while pointing to the target position or gives more detailed voice instructions explaining where to attach the piece. The voice-only version describes where the parts need to go in more detail than the voice in the pointing version, so that users receive a similar level of aid; this follows the substitution hypothesis of Bangerter et al. [4]. Voice instructions for an example assembly step for both the pointing and the showing-pieces ECA are shown in Table 1.

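To make this two-part structure concrete, the sketch below shows one possible way to store the condition-dependent utterances of Table 1 per assembly step. The type and field names are illustrative, not those of our application.

```csharp
// One assembly step with its two-part, condition-dependent instructions
// (see Table 1). The pointing condition uses short cues; the voice-only
// explanation of the showing condition is more detailed.
public enum GestureMode { Pointing, ShowingPiece }

public class AssemblyStep
{
    public string PieceId;                // which physical piece is needed
    public string IndicatePointing;       // e.g. "Take that blue screw"
    public string IndicateShowing;        // e.g. "Take this blue screw"
    public string ExplainPointing;        // broad cue, spoken while pointing
    public string ExplainShowingDetailed; // detailed voice-only explanation

    public (string indicate, string explain) UtterancesFor(GestureMode mode) =>
        mode == GestureMode.Pointing
            ? (IndicatePointing, ExplainPointing)
            : (IndicateShowing, ExplainShowingDetailed);
}
```
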
### 3.4 Gestures

To examine an assembly assistant pointing into the real world, we implemented two different gestures: pointing, and showing pieces by holding them up. Both gestures accompany a voice instruction and substitute the spatial location expression for a piece.

![01963dfe-8c41-7700-b943-6b2554cf9f14_3_152_147_713_363_0.jpg](images/01963dfe-8c41-7700-b943-6b2554cf9f14_3_152_147_713_363_0.jpg)

Figure 2: ECA showing a virtual piece (left) and pointing (right) in the spherical FTVR display. The virtual model state is displayed in front of the ECA.

#### Pointing

Previous research suggests that hand gestures combined with head rotation provide the highest accuracy and naturalness compared to hand-only or head-only cues, especially for fine pointing [34]. Thus, we implemented hand as well as head animations to facilitate distinguishing between close pointing targets. Humans point by aligning their fingertip with the gaze of their dominant eye, while observers interpret the pointing gesture by referring to the pointer's arm vector [5]. This can lead to ambiguity, because the target inferred by the observer from the arm vector differs from the target the pointer actually intended along the eye-fingertip line. Wu et al. [35] showed that arm vector pointing gives virtual avatars comparable, and in some cases better, accuracy than the pointing of a real person. Therefore, our ECA uses arm vector pointing by outstretching the arm and index finger and rotating the head towards the target, without eye-fingertip alignment (see Figure 2).

We implemented the pointing animation in Unity3D using inverse kinematics (IK), so that the ECA can adapt the pointing animation to variable targets at runtime. This allows a natural-looking arm-raise animation with a variable end pose in which the ECA's arm is outstretched, formed by aligning the vector from the shoulder through the index finger with the direction towards the distant target. Instead of using object recognition, we ran a Wizard of Oz experiment to avoid recognition errors. Eye movement was not included, since testing revealed no recognizable difference: due to the big cartoon-style eyes, the ECA always appeared to face the target once its head was rotated towards it.

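A minimal sketch of how such runtime arm-vector pointing can be driven with Unity's built-in humanoid IK is shown below. It assumes an Animator with an enabled IK Pass; the arm length, weights, and use of the right arm are illustrative placeholders rather than the exact values of our implementation.

```csharp
using UnityEngine;

// Arm-vector pointing via humanoid IK: the hand goal is placed on the
// shoulder-to-target ray so the arm is outstretched towards the target,
// and the head is rotated towards the target (no eye movement, see text).
// Requires "IK Pass" to be enabled on the Animator layer.
[RequireComponent(typeof(Animator))]
public class ArmVectorPointing : MonoBehaviour
{
    public Transform target;        // physical piece position (set by the wizard)
    public float armLength = 0.55f; // shoulder-to-fingertip length of the avatar

    Animator animator;

    void Awake() => animator = GetComponent<Animator>();

    void OnAnimatorIK(int layerIndex)
    {
        if (target == null) return;

        Vector3 shoulder = animator.GetBoneTransform(HumanBodyBones.RightShoulder).position;
        Vector3 dir = (target.position - shoulder).normalized;

        // Outstretched arm: place the IK goal on the shoulder->target ray.
        animator.SetIKPositionWeight(AvatarIKGoal.RightHand, 1f);
        animator.SetIKPosition(AvatarIKGoal.RightHand, shoulder + dir * armLength);

        // Head rotation towards the target.
        animator.SetLookAtWeight(1f, 0f, 1f, 0f, 0.5f);
        animator.SetLookAtPosition(target.position);
    }
}
```
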
#### Showing Pieces

As a comparison to the pointing ECA, we also implemented a showing animation, in which the ECA holds up virtual pieces instead of pointing to physical pieces in the real world (see Figure 2). The virtual pieces were created by measuring the physical Brio Builder pieces and modeling virtual representations of them in Blender. The main animation was created using a video of a person holding up a piece as a reference and adding keyframes to reconstruct the motion for the avatar.

### 3.5 Virtual Model

In front of the ECA, we displayed the model state after each assembly step on a small table floating in front of the avatar (see Figure 2). In a small pilot trial, we first tested the system without an additional visual representation of the model. The trial showed that it is very difficult to complete an assembly task relying on voice instructions and gestures only, especially because people are used to relying on visual aids, such as paper manuals, for assembly tasks. Thus, we decided to provide a virtual representation of the model state, allowing participants to verify that they had picked the right piece and providing an additional visual aid for the assembly. To prevent participants from picking a piece based on the virtual model instead of the pointing or showing cue, which would have confounded the study results, we displayed the model state only after the piece indication step, while the ECA explains the assembly (see Table 1).

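This gating reduces to a small visibility switch; the sketch below illustrates the idea with an assumed two-phase enum mirroring the indicate/explain structure of Table 1. The object wiring is illustrative.

```csharp
using UnityEngine;

// Shows the virtual model state only during the explanation phase, so that
// piece identification relies on the pointing or showing cue alone.
public class ModelStateDisplay : MonoBehaviour
{
    public enum Phase { IndicatePiece, ExplainAssembly }

    public GameObject[] modelStates; // one pre-built model state per step

    public void OnPhaseChanged(Phase phase, int stepIndex)
    {
        bool visible = phase == Phase.ExplainAssembly;
        for (int i = 0; i < modelStates.Length; i++)
            modelStates[i].SetActive(visible && i == stepIndex);
    }
}
```
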
Table 2: Overview of the five conditions of the user study.

| | Showing Pieces | Pointing |
| --- | --- | --- |
| Flat (2D) | X | X |
| Spherical FTVR (3D) | X | X |
| Paper Manual | - | - |

## 4 EXPERIMENT

The goal of our experiment is to investigate the effect of our ECA with pointing gestures in an assembly scenario. We compare our pointing ECA in a spherical FTVR display to the same ECA in a traditional flat display. To provide a fairer comparison for the flat 2D display, we included a condition that is better suited to it: an ECA holding up virtual pieces in front of its body. With a paper manual as baseline, we measured assembly task completion time, errors, and the interaction experience. The five conditions are shown in Table 2.

### 4.1 Participants

Fifteen paid participants (7 male, 8 female) aged between 18 and 45 were recruited from a local university and compensated with $10. All participants had normal or corrected-to-normal vision. None of them had used Brio Builder construction sets before.

### 4.2 Apparatus

We used a 30 cm diameter spherical FTVR display and a flat display to conduct the experiment. To create a 360° image, four Optoma GT750ST stereo projectors with a resolution of 1024 × 768 pixels and a frame rate of 120 Hz rear-project onto the spherical surface, for a total NVIDIA Mosaic resolution of 4096 × 768 at 34.58 ppi [16]. A computer equipped with an NVIDIA Quadro K5200 graphics card runs the Unity application and sends the rendered content to all four projectors. We adopted an automated camera-based multi-projector calibration technique [39] to produce a seamless image with 1-2 mm accuracy. NVIDIA Mosaic synchronizes all screens in resolution and frame rate for stereo rendering and enables synchronization of the XPand RF shutter glasses to generate stereo images at 60 Hz per eye. The total latency lies between 10 and 20 ms [16]. An OptiTrack optical tracking system was used for head tracking, with passive markers attached to the shutter glasses. To adapt the viewpoint to each participant, we used a pattern-based viewpoint calibration [31] with an average error of less than 1°. The spherical FTVR display provides depth cues such as stereoscopic cues and motion parallax.

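Conceptually, this viewpoint adaptation amounts to rendering from a virtual camera that follows the tracked head. The sketch below shows only this conceptual core for a single camera; the actual spherical display renders in stereo through four calibrated projectors [39], which is considerably more involved.

```csharp
using UnityEngine;

// Conceptual core of viewpoint-dependent rendering: a virtual camera
// follows the tracked head (the markers on the shutter glasses) and is
// oriented towards the display center, so content is drawn for the
// observer's actual vantage point. Object wiring is illustrative.
public class HeadTrackedCamera : MonoBehaviour
{
    public Transform trackedHead;   // pose streamed from the tracking system
    public Transform displayCenter; // center of the spherical display

    void LateUpdate()
    {
        transform.position = trackedHead.position;
        transform.LookAt(displayCenter);
    }
}
```
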
For the flat display condition, we also used an Optoma GT750ST projector with the same 1024 × 768 pixel resolution and 120 Hz frame rate to rear-project onto a flat screen, minimizing differences between the flat and the spherical display. The flat display's physical screen size is 36 cm × 27 cm, which results in a screen area similar to that of the spherical screen with its 30 cm diameter. In contrast to the spherical screen, the flat screen does not provide motion parallax, stereo rendering, or perspective-corrected images.

![01963dfe-8c41-7700-b943-6b2554cf9f14_4_222_147_578_394_0.jpg](images/01963dfe-8c41-7700-b943-6b2554cf9f14_4_222_147_578_394_0.jpg)

Figure 3: Top view of the table used in the experiment where the pieces were laid out. The free space was used for the assembly.

![01963dfe-8c41-7700-b943-6b2554cf9f14_4_179_664_675_160_0.jpg](images/01963dfe-8c41-7700-b943-6b2554cf9f14_4_179_664_675_160_0.jpg)

Figure 4: Extract of the paper manual used in the study, showing four assembly steps.

The physical Brio Builder pieces were laid out on a table within a marked area of 83 × 76 cm, as close to the ECA as possible, since the detection accuracy of pointing gestures decreases with distance [23]. All pieces were laid out in the same layout for all conditions and participants to minimize differences, and the same table was used for all conditions. The study setup with all pieces laid out on the table is shown in Figure 3. In front of the pieces there was free space where participants assembled the model. Both the spherical and the flat display were placed so that the perceived size and distance of the avatar were similar.

We developed a Unity3D application for the experiment to animate and render the ECA and to record task completion time. Our ECA was based on [2], and the virtual Brio Builder pieces used in the application were modeled in Blender. For the paper manual, we used the same models as shown virtually in the ECA conditions. The paper manual was color-printed single-sided on large (11 × 17 in) paper (see Figure 4).

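As an illustration, recording the task completion time per assembly round takes only a few lines; the CSV format and names below are illustrative, not those of our study software.

```csharp
using System.Collections.Generic;
using System.IO;
using UnityEngine;

// Logs one completion time per assembly round to a plain CSV file.
public class CompletionTimer : MonoBehaviour
{
    readonly List<string> rows = new List<string> { "participant,condition,seconds" };
    float roundStart;

    public void BeginRound() => roundStart = Time.realtimeSinceStartup;

    public void EndRound(int participant, string condition)
    {
        float seconds = Time.realtimeSinceStartup - roundStart;
        rows.Add($"{participant},{condition},{seconds:F1}");
        File.WriteAllLines(Path.Combine(Application.persistentDataPath, "times.csv"), rows);
    }
}
```
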
### 4.3 Design

The experiment was conducted using a 2 × 2 within-subjects factorial design with a baseline paper manual condition:

- C1 Display Form: spherical FTVR display (3D) or flat display (2D).

- C2 Gesture: pointing (P) or holding a piece up (H).

For every condition, we used a different model, resulting in five models used throughout the experiment, each consisting of 30 pieces (see Figure 5). The combination of display form/gesture and model as well as the sequence of conditions was fully counterbalanced using Latin squares. For quantitative analysis, we measured task completion time and errors. We collected subjective data about the interaction experience through a questionnaire. Furthermore, we measured perceived workload using the raw NASA-TLX [17].

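For reference, a Latin square for counterbalancing n conditions can be generated with the common zigzag construction sketched below; for an odd n, such as our five conditions, the mirrored (reversed) rows are usually appended as well to achieve full balance. This is a generic sketch, not the exact procedure used in the study.

```csharp
// Builds an n x n Latin square whose first row follows the zigzag order
// 0, n-1, 1, n-2, 2, ... and whose other rows are cyclic shifts of it.
// For even n this is a balanced Latin square; for odd n, the reversed
// rows should be appended as well.
static int[][] LatinSquare(int n)
{
    var square = new int[n][];
    for (int r = 0; r < n; r++)
    {
        square[r] = new int[n];
        int up = 0, down = 0;
        for (int c = 0; c < n; c++)
        {
            // Alternate between counting up from 0 and down from n-1.
            int value = (c % 2 == 0) ? up++ : n - 1 - down++;
            square[r][c] = (value + r) % n;
        }
    }
    return square;
}
```
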
### 4.4 Procedure

First, we asked participants to sign a consent form and fill in a demographic questionnaire. We then explained the procedure of the study and guided them through a viewpoint calibration. Each participant performed every condition once: two display form factors combined with two gesture types, plus a paper manual baseline, resulting in five assembly rounds per participant. Participants were asked to stand in front of the table with the laid-out pieces. In the paper manual condition, participants were instructed to follow the assembly steps shown in the images. They were allowed to navigate through the manual at their own pace and, if needed, to jump back to previous pages, as they would naturally use a paper manual on their own.

![01963dfe-8c41-7700-b943-6b2554cf9f14_4_923_153_723_171_0.jpg](images/01963dfe-8c41-7700-b943-6b2554cf9f14_4_923_153_723_171_0.jpg)

Figure 5: Photos of the five physical models that were assembled in the study.

In the assembly assistant conditions, participants were instructed to follow the instructions given by the ECA. They were instructed to always pick a piece after the indication step and were allowed to change the piece in the next step if they later noticed that they had picked a wrong one. They were also allowed to move freely around the table during the assembly process. Each assembly step started with the ECA showing or pointing at the piece required for the following assembly step, accompanied by a verbal cue. Once participants had decided on a piece, the avatar either only explained the next step or explained it while pointing at the assembly position. At the same time, the model state was shown in front of the avatar, as seen in Figure 2. Once the ECA received a verbal response, the next assembly step started. It took about 5-10 minutes to complete one model assembly.

At the end of each assembly round, we presented twelve five-level Likert scale questions to participants and asked them to rate each between "strongly disagree" and "strongly agree". The questions addressed character behavior, presence, and perception, as well as general questions about the experience. After the paper manual round, participants were only asked to answer the four general experience questions.

Once participants had completed the entire experiment, they filled out an overall questionnaire. They were asked to rate and explain which display form they preferred for both the showing-pieces and the pointing condition. Additionally, they were asked to rank the instruction modes (paper manual, showing pieces, and pointing) and to specify reasons for their preference. The entire experiment took about 60 minutes.

### 4.5 Results

In the following section, we describe the findings of our user study regarding workload, assembly completion time, errors, and user experience.

#### 4.5.1 Work Load

First, we analyzed the raw TLX score over the different rounds to determine whether potential workload or fatigue effects had to be considered in the further analysis. The mean raw TLX score was M = 27.0 (SD = 14.5) after the first, M = 33.9 (SD = 18.9) after the second, M = 25.9 (SD = 14.0) after the third, M = 26.3 (SD = 12.1) after the fourth, and M = 24.4 (SD = 13.5) after the last assembly round. A RM-ANOVA was conducted to reveal whether the order significantly influenced the workload. The analysis did not reveal a significant effect of assembly round on workload (F(4,56) = 1.848, p = .133). Therefore, we assume that effects on assembly performance caused by workload or fatigue are negligible.

![01963dfe-8c41-7700-b943-6b2554cf9f14_5_153_163_707_437_0.jpg](images/01963dfe-8c41-7700-b943-6b2554cf9f14_5_153_163_707_437_0.jpg)

Figure 6: Recorded piece identification errors for all four ECA conditions with medians and 95% CIs. Significant values are reported in brackets for p < .05 (*).

We also analyzed all subcategories of the raw TLX using a RM-ANOVA. The only category with a significant difference between conditions was frustration (F(4,56) = 4.054, p < .01). A two-tailed t-test revealed that the ECA pointing in the flat display (M = 42.0, SD = 26.9) led to a significantly higher frustration rating than the ECA holding up pieces in the spherical display (t = -2.598, p < .05) as well as the paper manual (t = 2.327, p < .05). There were no significant differences across the remaining conditions.

#### 4.5.2 Time

We measured task completion time for every condition. We performed a RM-ANOVA and found that the instruction mode had no significant effect on assembly time (F(4,56) = 0.816, p = .521).

#### 4.5.3 Piece Identification Errors

During the assembly process, errors were recorded and categorized into piece identification errors (finding the right piece) and assembly errors. Piece identification errors comprise wrongly picked pieces after the ECA referred to them by showing a piece or pointing, including pieces that were corrected in the next assembly step. Since participants were able to see the model state right away in the paper manual condition and there was no separate piece identification step, the paper manual is not included in the piece identification statistics.

Results of the RM-ANOVA show a significant difference between conditions (F(3,38) = 4.174, p < .05). A two-tailed t-test revealed that the piece identification error was significantly lower (t = -3.057, p < .01) when the ECA was holding up pieces in the spherical display (M = 1.6, SD = 1.3) than when the ECA was pointing in the flat display (M = 3.6, SD = 2.1). There was no significant difference across the remaining conditions.

#### 4.5.4 Assembly Errors

Assembly errors were calculated by counting each incorrectly chosen and uncorrected piece as well as wrongly attached pieces (e.g., pieces attached to a wrong hole or incorrectly rotated). The RM-ANOVA did not show a significant difference in assembly errors between conditions (F(4,52) = 0.640, p = .636).

#### 4.5.5 Time and Piece Identification Error Correlation

A Pearson correlation coefficient test found a moderate positive correlation between assembly completion time and the number of incorrectly identified pieces (r(54) = .523, p < .001). A visualization of the correlation can be found in Figure 7. As there was no separate identification step in the paper manual condition, target identification errors were only analyzed for the ECA conditions, and paper manual times are therefore not included in the correlation analysis.

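For reference, the reported coefficient follows the usual definition $r = \frac{\sum (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum (x_i - \bar{x})^2 \sum (y_i - \bar{y})^2}}$, with the degrees of freedom in r(df) given by df = n - 2 for n pairs. A minimal self-contained sketch:

```csharp
using System;
using System.Linq;

static class Pearson
{
    // Pearson correlation coefficient for paired observations x and y.
    public static double R(double[] x, double[] y)
    {
        double mx = x.Average(), my = y.Average();
        double sxy = 0, sxx = 0, syy = 0;
        for (int i = 0; i < x.Length; i++)
        {
            sxy += (x[i] - mx) * (y[i] - my);
            sxx += (x[i] - mx) * (x[i] - mx);
            syy += (y[i] - my) * (y[i] - my);
        }
        return sxy / Math.Sqrt(sxx * syy);
    }
}
```
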
![01963dfe-8c41-7700-b943-6b2554cf9f14_5_924_155_718_445_0.jpg](images/01963dfe-8c41-7700-b943-6b2554cf9f14_5_924_155_718_445_0.jpg)

Figure 7: Correlation between the number of incorrectly chosen pieces in each ECA assembly round and assembly task completion time in seconds.

| Statements | PM | 2D-H | 2D-P | 3D-H | 3D-P |
| --- | --- | --- | --- | --- | --- |
| Felt like ECA was present | - | 2.4 (1.2) | 2.1 (0.9) | 3.0 (1.2) | 2.8 (0.8) |
| Correct piece identification | - | 3.5 (1.2) | 1.8 (0.9) | 3.9 (1.2) | 2.2 (0.9) |
| Enjoyed display form | - | 3.3 (0.9) | 2.9 (1.0) | 4.2 (0.8) | 3.8 (0.9) |
| ECA / manual was helpful | 4.2 (0.8) | 3.9 (0.9) | 3.0 (0.9) | 4.1 (0.8) | 4.0 (0.7) |
| Easy to follow steps | 3.7 (1.1) | 4.1 (0.6) | 2.1 (1.1) | 4.4 (0.9) | 2.7 (0.9) |
| Liked gesture / manual | 3.9 (1.1) | 4.1 (0.7) | 2.8 (1.1) | 4.3 (1.1) | 3.2 (0.8) |

Table 3: Mean and standard deviation of significant questionnaire responses for all five conditions: paper manual (PM), holding pieces up (H), and pointing (P) in the flat (2D) and spherical (3D) display. Higher scores indicate stronger agreement, ranging from 1 (strongly disagree) to 5 (strongly agree).

#### 4.5.6 Subjective Ratings

A Friedman ranked sum test was performed on all twelve five-level Likert scale questions. Participants rated each in the range between 1 ("strongly disagree") and 5 ("strongly agree"). The first eight questions, which addressed character behavior, character presence, and perception, were only asked after the four ECA conditions. For all significant statements, means and standard deviations are shown in Table 3.

**Realism of Gestures** The Friedman ranked sum test did not reveal a significant difference between conditions for realism of gestures (χ²(3) = 3.44, p = .329), speech (χ²(3) = 5.64, p = .130), or fidelity (χ²(3) = 3.660, p = .301). There was also no difference between conditions for the statements that "gestures made the ECA seem more realistic" (χ²(3) = 3.74, p = .291) and that "gestures strengthen the connection" between the ECA and themselves (χ²(3) = 5.640, p = .130).

**ECA Presence** The statement "I felt like the ECA was present in the real world" was rated significantly differently between conditions, as shown by the Friedman ranked sum test (χ²(3) = 8.060, p < .05). Post-hoc analysis with Wilcoxon signed-rank tests for multiple comparisons showed significantly higher presence ratings for both spherical conditions, H (z = 2.223, p < .05) and P (z = 1.988, p < .05), compared to 2D-P. No significant differences were found for the remaining pairs.

**Target Identification Confidence Level** A Friedman ranked sum test revealed a significant difference across conditions regarding the confidence level for target identification (χ²(3) = 21.460, p < .001). Post-hoc analysis with Wilcoxon signed-rank tests shows a significant effect between 3D-H and 3D-P (z = 3.076, p < .01) or 2D-P (z = 3.180, p < .01). It also revealed a significant effect between 2D-H and 3D-P (z = 2.667, p < .01) or 2D-P (z = 3.040, p < .01). No significant difference was found for the remaining pairs.

**Assembly Confidence Level** The participants' confidence level of correct assembly did not show a significant difference between conditions (χ²(4) = 7.747, p = .101).

**Enjoyment of Display Form** Enjoyment was rated significantly differently, as revealed by a Friedman ranked sum test (χ²(3) = 11.340, p < .05). Post-hoc analysis with Wilcoxon signed-rank tests indicated higher enjoyment of 3D-H compared to 2D-H (z = 2.934, p < .01) and 2D-P (z = 2.497, p < .05). There was no significant effect between the remaining conditions.

**Helpfulness** The Friedman ranked sum test revealed a significant difference for helpfulness (χ²(4) = 11.800, p < .05). Post-hoc analysis with Wilcoxon signed-rank tests revealed that 2D-P was rated significantly less helpful than the other four instruction modes: 3D-H (z = 2.548, p < .05), 3D-P (z = 2.623, p < .01), 2D-H (z = 2.578, p = .01), and PM (z = 2.785, p = .01). Results showed no significant differences between the remaining conditions.

**Easy to Follow** For the statement "It was easy to follow the assembly steps", the Friedman ranked sum test revealed a significant difference (χ²(4) = 30.160, p < .001). Post-hoc Wilcoxon signed-rank tests indicated that following the steps was significantly more difficult for 3D-P than for 3D-H (z = 3.076, p < .01), 2D-H (z = 2.934, p < .01), and PM (z = 2.192, p < .05). It was also significantly harder to follow the instructions for 2D-P compared to 3D-H (z = 3.408, p < .001), 2D-H (z = 3.296, p < .001), and PM (z = 2.803, p < .01). No significant difference was found across the remaining conditions.

**General Preference** For participants' preference, the Friedman ranked sum test revealed a significant difference (χ²(4) = 20.680, p < .001). Post-hoc Wilcoxon signed-rank tests showed that 3D-H was liked significantly more than 3D-P (z = 2.934, p < .01) and 2D-P (z = 3.060, p < .01). 2D-H was also liked significantly more than 2D-P (z = 2.934, p < .01). Moreover, PM was liked significantly more than 2D-P (z = 2.079, p < .05). There were no significant differences between the remaining conditions.

#### 4.5.7 Overall Ratings

In the holding-pieces-up conditions, most participants preferred the spherical FTVR display (73.3%) over the flat 2D display; 26.7% of participants preferred the 2D display. For pointing, the spherical FTVR display was preferred by 40.0%, while 6.67% preferred 2D; 53.3% of participants indicated that there was no difference between the displays. The display rating results are shown in Figure 8.

In the overall rating of instruction modes, the ECA that was holding pieces up was ranked first by 53.3% and second by 46.67% of participants. The paper manual was ranked first by 40.0%, second by 40.0%, and third by 20.0%. 6.67% of participants ranked the pointing ECA first, 13.3% second, and 80.0% third.

![01963dfe-8c41-7700-b943-6b2554cf9f14_6_928_148_704_399_0.jpg](images/01963dfe-8c41-7700-b943-6b2554cf9f14_6_928_148_704_399_0.jpg)

Figure 8: Overall rating of the display forms flat and spherical FTVR for the conditions pointing and showing pieces.

## 5 DISCUSSION

In this section, we discuss the three design factors ECA appearance, gestures, and display form to provide interpretations of our findings.

### 5.1 Reflection on Design Factors

#### 5.1.1 ECA Appearance

The subjective feedback regarding the ECA's appearance showed no significant differences between conditions, which is not surprising as the displayed ECA was the same in all four conditions. Nevertheless, the results show how the ECA design was rated overall. Participants rated the realism of the ECA's movement and speech as neutral. Even though we chose a not-too-human-like avatar to prevent the uncanny valley effect [25], it is particularly difficult to animate movements and implement speech without causing an unnatural appearance. Participants did not comment on the character model or its gestures in the overall questionnaire.

#### 5.1.2 Gestures

In our study, we compared an ECA showing virtual pieces to a pointing ECA. Results show that the instruction mode did not significantly affect assembly time, which is composed of listening to a speech instruction, choosing a piece, and attaching the piece. The listening time was similar for each participant, while the durations for choosing and attaching a piece differed. Piece identification errors thus led to longer task completion times when participants had to replace a wrong piece with the correct one. This is in accordance with the correlation between assembly task completion time and the number of incorrectly chosen pieces (see Figure 7). We observed that when the ECA was pointing at pieces that were unambiguous to identify, participants were much faster in picking the piece than in the other conditions, as they did not have to search the whole table. But since pointing led to more piece identification errors, the faster identification of some pieces did not result in a shorter overall assembly time for pointing compared to the other conditions.

The error data shows a significantly higher piece identification accuracy for showing virtual pieces in the spherical FTVR display than for pointing in the flat display. Although no significant difference was found between the remaining conditions, the results hint that showing the virtual pieces resulted in a lower piece identification error rate than pointing towards them (see Figure 6). We observed that many participants had difficulties finding the right pointing targets when the voice alone was ambiguous and similar pieces were placed side by side. This might be caused by multiple factors. First, the pointing targets were laid out closely together and in multiple rows in front of the ECA. This could have caused worse detection results than in previous studies, where the near-far dimension was not investigated [34]. Second, we observed that participants lost trust in the ECA once they had identified a wrong piece and had to correct it. While a high pointing accuracy of 82.6% was shown in a previous study using a similar ECA with arm-vector pointing [34], this accuracy might not be high enough for an assembly scenario. Even though participants identified most of the 30 pieces correctly, they seemed discouraged after choosing a wrong one, which is also reflected in the confidence participants indicated in the questionnaire and in the ranking results: 7 of 15 participants preferred showing virtual pieces, and 6 of 15 preferred the paper manual over pointing.

#### 5.1.3 Display Form

In the flat display conditions, participants had difficulties interpreting the ECA's pointing targets correctly, which led to a higher target identification error. This might be caused by the lack of depth cues, which helped to detect the correct pointing target in the spherical FTVR display. We observed that some participants moved around the display while the ECA was pointing, which seemed to make piece identification easier (see Figure 9). This is in line with previous research showing that the left and right areas, where observers see the arm pointing away from them, are more prone to misjudgments, while the front area is less difficult to recognize [34]. When, for example, the ECA was pointing to the right side and participants moved towards the pointing target, they might identify the target more accurately because the ECA was then pointing towards them. The same difference in behavior also applies to viewing the virtual model, which was visible in a 360° view when participants moved around.

We were surprised to find that while 11 participants preferred the spherical FTVR display when the ECA was showing pieces, only 6 participants preferred it for pointing. Eight participants reported no difference, although target identification errors were lower with the spherical display. Participants who preferred the spherical FTVR display noted that it felt more accurate and that the pointing targets were easier to interpret, while participants who answered "no difference" had difficulties detecting pointing targets in general and therefore had no preference. That is surprising, as the spherical FTVR display provides more depth cues and led to fewer errors. A possible explanation is that participants who were not confident in identifying pointing targets became discouraged, resulting in feedback like "pointing is inaccurate in general" and a rating of "no difference" between display forms, even though the spherical FTVR display resulted in fewer errors.

There were large differences in how participants behaved when interacting with the spherical FTVR display. Some participants moved around the display more and therefore took more advantage of depth cues and the possibility to get different perspectives of the ECA, the pieces, and the displayed model. Others did not move at all, although all participants received the same instructions at the beginning. Consequently, the difference between the two displays was smaller for participants who stood in the same position during the whole assembly round.

### 5.2 Comparison to Paper Manual

The recorded assembly errors were below one in all conditions, and the differences were therefore not significant, even though piece identification errors were much larger. This shows that participants recognized incorrect pieces when the model state including the previously chosen piece was shown and corrected them, leading to a correctly assembled model. It was surprising that there was no significant difference in assembly errors between conditions, as the paper manual and the flat display did not provide a side or back view of the virtual model, making it "hard to see the other side of the model", as noted by participants. Others mentioned that the "virtual 3D model is always more helpful than paper manual" and "paper manual needs detail attention". Nevertheless, participants were able to assemble the model as correctly with the paper manual as with the ECA. A possible reason could be that the paper manual allowed participants to "[easily] go back multiple steps", which also provided different perspectives of the models and gave participants the possibility to see whether they had made a mistake before. Most participants were observed navigating back and forth through the manual during the assembly process. Another reason mentioned by participants is that they are more "used to a paper manual" and to "reading visual assembly scenarios". This similarity between the paper manual and the ECA is also reflected in the statement rankings for helpfulness and preference (see Table 3).

![01963dfe-8c41-7700-b943-6b2554cf9f14_7_929_149_713_357_0.jpg](images/01963dfe-8c41-7700-b943-6b2554cf9f14_7_929_149_713_357_0.jpg)

Figure 9: The perspective change when moving around the display might help to identify pointing targets more accurately. Here, the observer moved from the front position (left) to the right side (right).

### 5.3 Design Implications

Our study revealed challenges in designing ECAs that point into the real world. While previous research found a high detection accuracy for ECAs pointing into the real world [34], this accuracy might not be high enough for an assembly scenario. We observed that reaching a high target detection accuracy is particularly important to avoid frustration in the assembly process. By pointing, the ECA was able to guide participants' attention to a broader region, which helped them narrow down the number of possible pieces. However, when multiple similar pointing targets were located closely together, participants were not able to identify the correct piece using the pointing cue alone. Thus, we suggest implementing indirect methods for precision tasks, such as displaying a virtual piece. An example of how 3D instructions can improve assembly in combination with viewpoint control was presented by Yamaguchi et al. [36]; this could also be implemented in our spherical FTVR display, which allows for 3D viewing and viewpoint control. Since participants generally liked the ECA using pointing gestures, pointing could be implemented in addition to indirect methods.

## 6 LIMITATIONS AND FUTURE WORK

In the following, we discuss six limitations along with the opportunities they present for future research. First, we only used one construction set, and the pieces were arranged in a fixed layout on the table to increase comparability between participants and conditions. It would therefore be interesting to conduct a similar study using different pieces, such as a real furniture construction set, arranging the pieces in a different layout, or using pieces without any prior arrangement.

Second, we observed that participants behaved very differently when using the spherical FTVR display. While some participants used the additional cues, e.g., by moving around to get an additional perspective of the ECA or the virtual model, others did not move at all. It would therefore be interesting to investigate whether more experience with the pointing ECA improves the ability to identify correct pointing targets. One possibility could be a training phase in which participants are encouraged to move around the display and to identify pointing targets with feedback.

Third, while participants were able to detect pointing targets accurately when they were placed far enough apart or described in enough detail, it was difficult to distinguish closely located parts with broad voice descriptions. Since previous research showed that verbal descriptions should be substituted by gestures where possible instead of implementing both redundantly [4], future studies are first needed to quantify the detection accuracy of arm vector pointing at targets on a horizontal plane and to determine at which distance targets become ambiguous.

Fourth, we only compared the ECA's pointing to showing virtual pieces and a paper manual baseline. The results show a low piece identification error for the showing-pieces ECA, even though the ECA was only holding the virtual pieces up. This raises the question of whether an ECA generally provides an advantage over a virtual model, especially because some participants noted that they felt pressured when using an ECA for the assembly in comparison to the paper manual. In contrast to a 2D paper manual, a 3D visualization in a spherical FTVR display could provide depth cues. Future studies could investigate whether an embodied human-like assistant provides an advantage over a plain 3D visualization of the assembly steps.

Fifth, our ECA only explained the assembly steps using voice and gestures. In an assembly scenario with a human assistant, however, people would not only follow the explanations but also ask questions when they are unsure about an assembly step. Thus, a future step would be to implement a feedback mechanism and to investigate whether giving feedback improves the error rate, assembly time, and interaction experience.

Last, we only implemented deictic pointing gestures. It would also be possible to provide multiple gesture types, as used in human communication. An example was shown in previous research [22]: the presented ECA used deictic pointing in combination with metaphoric gestures to demonstrate how pieces should be placed, for example by crossing fingers to indicate that pieces have to be attached at a 90 degree angle. Future studies could thus investigate whether implementing additional gestures enhances the interaction with ECAs.

## 7 CONCLUSIONS

In this paper, we presented an ECA with the ability to point into the real world, to investigate whether spherical FTVR displays affect the interpretation of the ECA's pointing gestures and to examine the effect of ECAs with pointing gestures in an assembly scenario in general. We conducted a study comparing the pointing ECA in the spherical FTVR display to an ECA holding up virtual pieces, as well as to the same ECAs in a flat display, with a paper manual as baseline. Participants assembled different construction toy models while we measured assembly time and errors and captured the user experience with a questionnaire.

Our results show that the spherical FTVR display had no significant effect on assembly time or errors, while participants preferred it in all ECA conditions and it led to a higher presence rating. The ECA with pointing gestures did not reduce assembly time or errors compared to the ECA showing virtual pieces or the paper manual, though it was rated as helpful in the assembly process. Our findings show that pointing is helpful for guiding attention to a broader region but is not suitable for precise locations. For precise piece identification, indirect methods, such as showing the pieces, are more helpful and could be used in combination with direct methods, such as pointing. These findings can guide the design and development of ECAs that point into the real world, especially in assembly scenarios. As home assistants advance in their interaction possibilities, an ECA that provides gestures is expected to enable more natural human-like interactions and thus blur the boundaries between the virtual and the real world.

+ ## REFERENCES
294
+
295
+ [1] M. Anabuki, H. Kakuta, H. Yamamoto, and H. Tamura. Welbo: An embodied conversational agent living in mixed reality space. In ${CHI}$ '00 Extended Abstracts on Human Factors in Computing Systems, CHI EA '00, p. 10-11. Association for Computing Machinery, New York, NY, USA, 2000. doi: 10.1145/633292.633299
296
+
297
+ [2] Antro. Rin new: Anime-style character for games and vrchat. https://assetstore.unity.com/packages/3d/characters/ humanoids/rin-new-anime-style-character-for-games-and-vrchat-174995, 2021. Accessed: May 25, 2022.
298
+
299
+ [3] J. N. Bailenson, K. Swinth, C. Hoyt, S. Persky, A. Dimov, and J. Blas-covich. The independent and interactive effects of embodied-agent appearance and behavior on self-report, cognitive, and behavioral markers of copresence in immersive virtual environments. Presence: Tele-operators and Virtual Environments, 14(4):379-393, 2005. doi: 10. 1162/105474605774785235
300
+
301
+ [4] A. Bangerter and M. M. Louwerse. Focusing attention with deictic gestures and linguistic expressions. In Proceedings of the Annual Meeting of the Cognitive Science Society, 27(27), 2005.
302
+
303
+ [5] A. Bangerter and D. M. Oppenheimer. Accuracy in detecting referents of pointing gestures unaccompanied by language. Gesture, 6(1):85- 102, 2006. doi: 10.1075/gest.6.1.05ban
304
+
305
+ [6] R.-J. Beun, E. de Vos, and C. Witteman. Embodied conversational agents: Effects on memory performance and anthropomorphisation. pp. 315-319. Springer Berlin Heidelberg, 2003. doi: 10.1007/978-3 $- {540} - {39396} - 2\_ {52}$
306
+
307
+ [7] J. Blattgerste, B. Strenge, P. Renner, T. Pfeiffer, and K. Essig. Comparing conventional and augmented reality instructions for manual assembly tasks. PETRA '17, p. 75-82. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3056540.3056547
308
+
309
+ [8] G. Butterworth. Pointing is the royal road to language for babies. Pointing: Where language, culture, and cognition meet, pp. 9-33, 2003.
310
+
311
+ [9] A. Cafaro, H. H. Vilhjálmsson, T. Bickmore, D. Heylen, K. R. Jóhannsdóttir, and G. S. Valgarsson. First impressions: Users' judgments of virtual agents' personality and interpersonal attitude in first encounters. In International conference on intelligent virtual agents, pp. 67-80. Springer, 2012.
312
+
313
+ [10] J. Cassell. Embodied conversational agents: Representation and intelligence in user interfaces. AI Magazine, 22(4):67, 2001.
314
+
315
+ [11] J. Cassell, T. Bickmore, M. Billinghurst, L. W. Campbell, K. Chang, H. H. Vilhjálmsson, and H. Yan. Embodiment in conversational interfaces: Rea. In Proceedings of the SIGCHI conference on Human factors in computing systems the CHI is the limit - CHI'99. ACM Press, 1999. doi: 10.1145/302979.303150
316
+
317
+ [12] J. Cassell, T. Bickmore, H. H. Vilhjálmsson, and H. Yan. More than just another pretty face: Embodied conversational interface agents. ACM Press, 2000. doi: 10.1145/325737.325781
318
+
319
+ [13] J. Cassell, T. Stocky, T. Bickmore, Y. Gao, Y. Nakano, K. Ryokai, D. Tversky, C. Vaucelle, and H. Vilhjálmsson. Mack: Media lab autonomous conversational kiosk. In Imagina, vol. 2, pp. 12-15, 2002.
320
+
321
+ [14] H. H. Clark and S. E. Brennan. Grounding in communication. In Perspectives on socially shared cognition., pp. 127-149. American Psychological Association, 1991. doi: 10.1037/10096-006
322
+
323
+ [15] H. Cochet and J. Vauclair. Deictic gestures and symbolic gestures produced by adults in an experimental context: Hand shapes and hand preferences. Laterality: Asymmetries of Body, Brain and Cognition, 19(3):278-301, jun 2013. doi: 10.1080/1357650x.2013.804079
324
+
325
+ [16] D. B. Fafard, Q. Zhou, C. Chamberlain, G. Hagemann, S. Fels, and I. Stavness. Design and implementation of a multi-person fish-tank virtual reality display. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology. ACM, nov 2018. doi: 10.1145/3281505.3281540
328
+
329
+ [17] S. G. Hart. NASA-Task Load Index (NASA-TLX); 20 years later. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 50(9):904-908, oct 2006. doi: 10.1177/154193120605000909
332
+
333
+ [18] W. L. Johnson, J. W. Rickel, and J. C. Lester. Animated pedagogical agents: Face-to-face interaction in interactive learning environments. International Journal of Artificial Intelligence in Education, 11(1):47-78, 2000.
334
+
335
+ [19] A. Kendon. Do gestures communicate? A review. Research on Language & Social Interaction, 27(3):175-200, jul 1994. doi: 10.1207/s15327973rlsi2703_2
336
+
337
+ [20] K. Kim, J. Bolton, A. Girouard, J. Cooperstock, and R. Vertegaal. TeleHuman: Effects of 3D perspective on gaze and pose estimation with a life-size cylindrical telepresence pod. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems - CHI '12. ACM Press, 2012. doi: 10.1145/2207676.2208640
338
+
339
+ [21] A. Kobsa, J. Allgayer, C. Reddig, N. Reithinger, D. Schmauks, K. Harbusch, and W. Wahlster. Combining deictic gestures and natural language for referent identification. In Coling 1986 Volume 1: The 11th International Conference on Computational Linguistics, 1986.
340
+
341
+ [22] S. Kopp, B. Jung, N. Lessmann, and I. Wachsmuth. Max - a multimodal assistant in virtual reality construction. KI, 17(4):11, 2003.
342
+
343
+ [23] A. Kranstedt, A. Lücking, T. Pfeiffer, H. Rieser, and M. Staudacher. Measuring and reconstructing pointing in visual contexts. Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue, pp. 82-89, 2006.
344
+
345
+ [24] M. Lohse, R. Rothuis, J. Gallego-Pérez, D. E. Karreman, and V. Evers. Robot gestures make difficult tasks easier: the impact of gestures on perceived workload and task performance. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, apr 2014. doi: 10.1145/2556288.2557274
346
+
347
+ [25] M. Mori, K. F. MacDorman, and N. Kageki. The uncanny valley [from the field]. IEEE Robotics Automation Magazine, 19(2):98-100, 2012. doi: 10.1109/MRA.2012.2192811
348
+
349
+ [26] N. Yee, J. N. Bailenson, and K. Rickertsen. A meta-analysis of the impact of the inclusion and realism of human-like faces on user experiences in interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 2007. doi: 10.1145/1240624.1240626
350
+
351
+ [27] M. Otsuki, K. Maruyama, H. Kuzuoka, and Y. Suzuki. Effects of enhanced gaze presentation on gaze leading in remote collaborative physical tasks. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, p. 1-11. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3173574.3173942
352
+
353
+ [28] E. Schneider, Y. Wang, and S. Yang. Exploring the uncanny valley with japanese video game characters. In DiGRA Conference, 2007.
354
+
355
+ [29] K. Takashima, Y. Omori, Y. Yoshimoto, Y. Itoh, Y. Kitamura, and F. Kishino. Effects of avatar's blinking animation on person impressions. In Proceedings of Graphics Interface 2008. Canadian Information Processing Society, 2008. doi: 10.5555/1375714.1375744
356
+
357
+ [30] L. A. Thompson and D. W. Massaro. Evaluation and integration of speech and pointing gestures during referential understanding. Journal of experimental child psychology, 42(1):144-168, 1986.
358
+
359
+ [31] A. J. Wagemakers, D. B. Fafard, and I. Stavness. Interactive visual calibration of volumetric head-tracked 3D displays. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, may 2017. doi: 10.1145/3025453.3025685
360
+
361
+ [32] I. Wang and J. Ruiz. Examining the use of nonverbal communication in virtual agents. International Journal of Human-Computer Interaction, 37(17):1648-1673, mar 2021. doi: 10.1080/10447318.2021.1898851
362
+
363
+ [33] C. Ware, K. Arthur, and K. S. Booth. Fish tank virtual reality. In Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems, CHI '93, p. 37-42. Association for Computing Machinery, New York, NY, USA, 1993. doi: 10.1145/169059.169066
364
+
365
+ [34] F. Wu, Q. Zhou, K. Seo, T. Kashiwagi, and S. Fels. I got your point: An investigation of pointing cues in a spherical fish tank virtual reality display. pp. 1237-1238. IEEE, Osaka, Japan, 2019. doi: 10.1109/VR.2019.8798063
368
+
369
+ [35] F. Wu, Q. Zhou, I. Stavness, and S. Fels. It's over there: Designing an intelligent virtual agent that can point accurately into the real world. In Graphics Interface 2022, 2022.
370
+
371
+ [36] M. Yamaguchi, S. Mori, P. Mohr, M. Tatzgern, A. Stanescu, H. Saito, and D. Kalkofen. Video-annotated augmented reality assembly tutorials. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, UIST '20, p. 1010-1022. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3379337.3415819
372
+
373
+ [37] B. Yoon, H.-i. Kim, G. A. Lee, M. Billinghurst, and W. Woo. The effect of avatar appearance on social presence in an augmented reality remote collaboration. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, mar 2019. doi: 10.1109/VR.2019.8797719
374
+
375
+ [38] Q. Zhou, G. Hagemann, D. Fafard, I. Stavness, and S. Fels. An evaluation of depth and size perception on a spherical fish tank virtual reality display. IEEE Transactions on Visualization and Computer Graphics, 25:2040-2049, 2019. doi: 10.1109/TVCG.2019.2898742
376
+
377
+ [39] Q. Zhou, G. Miller, K. Wu, D. Correa, and S. Fels. Automatic calibration of a multiple-projector spherical fish tank VR display. pp. 1072-1081. IEEE, Santa Rosa, CA, USA, 2017. doi: 10.1109/WACV.2017.124
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/N0RiLoidWE/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,339 @@
 
1
+ § ATTACH THAT THERE: INVESTIGATING 3D VIRTUAL ASSEMBLY ASSISTANTS THAT POINT INTO THE REAL WORLD
2
+
3
+ Category: Research
4
+
5
+ <graphics>
6
+
7
+ Figure 1: Assembly assistant pointing to real world targets from within a spherical FTVR display.
8
+
9
+ § ABSTRACT
10
+
11
+ Gestures are a fundamental part of human communication. However, commonly used voice assistants do not exploit the advantages of human-like nonverbal communication. We present an Embodied Conversational Agent (ECA) with the ability to explain assembly steps and point to indicate real-world targets. To enable accurate pointing into the real world, we implemented our ECA in a spherical Fish Tank Virtual Reality (FTVR) display. We evaluated the effect of a pointing ECA on the performance and experience in an assembly scenario, as well as investigated whether spherical FTVR displays provide an advantage over 2-dimensional (2D) flat displays. Results show that, while the spherical FTVR was preferred in all conditions, pointing to real pieces did not reduce assembly time or errors compared to showing virtual pieces by holding them up. Based on our findings, we provide design insights and research directions for ECAs with pointing gestures in an assembly scenario.
12
+
13
+ Index Terms: Human-centered computing-Human computer interaction (HCI)-Interaction paradigms-Virtual reality; Human-centered computing-Human computer interaction (HCI)-Interaction paradigms-Pointing
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ Spherical Fish Tank Virtual Reality (FTVR) displays offer unique opportunities for interaction. While conventional Virtual Reality (VR) displays only support interactions in the virtual world, FTVR displays are non-immersive. Thus, they allow pointing from within the display to the real space surrounding it, which makes them particularly suitable for implementing 3D Embodied Conversational Agents (ECAs).
18
+
19
+ Through their embodiment, ECAs can provide additional human-like nonverbal cues, such as gestures [10]. Deictic gestures, which accompany speech, are a common method to indicate objects and guide attention to them by substituting linguistic expressions with a pointing gesture [21]. This is particularly helpful in collaboration scenarios, where establishing a mutual understanding is essential for successful communication [14]. For example, a deictic gesture is used when indicating the position of an object in the room with the answer "it is over there" accompanied by pointing, instead of describing the location of the object in detail.
20
+
21
+ Deictic pointing can not only enhance interaction in reality, it can also improve interaction with ECAs, as it allows users or conversational agents to indicate the objects they are talking about. Previous work has shown that a feature description accompanied by a deictic gesture increases accuracy in identifying a target [4]. Moreover, pointing gestures simplify the spoken dialog by allowing simpler and shorter descriptions, and therefore enable references in situations where descriptions alone would not suffice (e.g., when multiple similar objects are present) [21].
22
+
23
+ We believe that an ECA with the ability to point into the real world would leverage the multi-modality of human communication [32] and therefore enable more natural human-agent interactions. While there are many studies on how humans perceive and use gestures, this knowledge cannot be applied directly to ECAs, as there is a difference between how humans use and interpret pointing gestures [5]. Previous studies found that it is possible to implement an ECA that points into the real world with a similar or higher accuracy than a real person [35]. However, the effect of an ECA pointing into the real world accompanied by verbal cues on the interaction experience has not been studied yet.
24
+
25
+ In this study, we investigate how ECAs with pointing gestures influence the interaction experience and performance in an assembly scenario. For this purpose, we implemented an ECA with the ability to guide users through assembly steps using voice instructions and gestures. To enable pointing from within virtuality into the real world, we use a spherical FTVR display for our ECA. Our spherical FTVR display adapts content to the user's viewpoint by rendering perspective-corrected vision and providing motion parallax as well as stereoscopic cues to improve depth and size perception [20].
26
+
27
+ Contributions: 1) We created a novel virtual assistant with the ability to point into the real world, which can be modified and used in other pointing related AR/VR/XR scenarios. 2) We evaluated assembly time, errors and user preference of different display forms and ECA gestures (see Section 4). Our results show that a spherical FTVR display is preferred over a flat 2D display for an ECA. 3) Based on our study, we provide design insights and future research directions for designing ECAs with pointing gestures in assembly scenarios.
28
+
29
+ § 2 RELATED WORK
30
+
31
+ While deictic gestures are one of the most commonly used forms of non-verbal communication, there are some challenges when implementing them for ECAs. In the following, we first provide an overview of deictic pointing in human communication. Afterwards, we discuss work on how deictic gestures can be implemented in ECAs and the advantages spherical FTVR displays provide.
32
+
33
+ § 2.1 DEICTIC POINTING GESTURES
34
+
35
+ In their everyday life, humans use deictic pointing gestures when they indicate proximal objects by extending their arm and index finger towards a pointing target. Deictic gestures are fundamental when communicating to establish a mutual understanding and help to direct attention to people or objects, especially when the use of speech only is ambiguous [30, 32]. Thus, deictic gestures are particularly suitable in an assembly scenario, where spatial deixis is important, since they can substitute certain spatial linguistic expressions and indicate objects [15].
36
+
37
+ How human gestures are interpreted is a key issue in gesture research [19]. Pointing gestures can be distinguished into proximal and distal pointing [38]. Proximal pointing occurs when the pointer touches the target, while distal pointing occurs when the target is situated too far away, and the goal is to locate the target's position in a shared environment [8]. We will focus on distant pointing, as the goal is to implement an ECA that assists in an assembly process by pointing at distant pieces. The major challenge of distant pointing is detection accuracy, which quantifies how successfully observers can identify pointing targets. Bangerter et al. [5] showed that bias in pointing target detection was small for both vertical and horizontal pointing, while detection accuracy was lower for peripheral targets than for central ones.
38
+
39
+ § 2.2 EMBODIED CONVERSATIONAL AGENTS (ECAS) WITH POINTING
40
+
41
+ ECAs are virtual agents that exhibit conversational behaviors and are human-like in the way they use their bodies in conversations [12]. Cassell [12] defined ECAs as having the ability to recognize, generate and respond to verbal and non-verbal input, deal with conversational functions, give signals that indicate the state of the conversation, and contribute new propositions.
42
+
43
+ Previous research showed that the presence of an ECA can improve the interaction between the user and the agent and has a positive effect on the retainability of information, independent of the realism of the embodiment [6]. Yee et al. [26] found that agents with a visual representation lead to more positive social interactions compared to agents without a visual representation. Furthermore, they confirm previous findings that the degree of realism may matter very little: animated highly realistic faces might appear unnatural or disturbing, which confirms Mori et al.'s [25] uncanny valley effect.
44
+
45
+ In the same way as humans use gestures, gestures can also be implemented in ECAs to enable new interaction possibilities. Previous research showed that the integration of gestures in ECAs influences the ECA's personality [9], helps to achieve a sense of co-presence [3], and improves user perceptions of friendliness and trust [32]. Research in human-robot interaction found that robots using gestures increased the user performance while decreasing perceived workload for challenging tasks [24].
46
+
47
+ § 2.3 ECAS IN DIFFERENT VR/XR PLATFORMS
48
+
49
+ While considerable research has been done targeting how gestures in the virtual world are perceived and influence the interaction experience with ECAs, there is only little research on ECAs pointing into the real world. Wu et al. [34] investigated different pointing cues for ECAs pointing into the real world using a spherical FTVR display. Results show that a combination of head and hand cues yielded the best accuracy with 82.6% for fine pointing (15°) compared to hand-only or head-only cues. In a second study, Wu et al. demonstrated that an ECA using arm vector pointing can point to a physical location with comparable or even better accuracy than a real person [35]. Unlike humans, who use an eye-fingertip alignment for pointing, which yields a perceptual bias [4], ECAs can be implemented using arm vector pointing to improve detection accuracy [35]. Since previous work already showed high pointing accuracy for ECAs in spherical FTVR displays, we are interested in how pointing gestures in combination with verbal cues can help to establish joint attention in a real-world assembly scenario.
50
+
51
+ An early example of how pointing can be implemented in ECAs is Rea [11], a real estate agent using iconic, metaphoric, and deictic gestures. Rea uses pointing to indicate or emphasize objects in its virtual environment, such as features of homes, either complementary to speech or fully redundant [10]. Kopp et al. [22] designed Max, a human-size agent for cooperative construction tasks in a CAVE environment. The agent employs speech, gaze, facial expressions, and gesture to guide the user through construction tasks.
52
+
53
+ While previous examples point in the virtual world only, MACK is an example of an ECA in mixed reality. The agent gives location directions and answers questions by using a combination of speech, gestures, and pointing into the real world, to the paper map in front of the user or to its surroundings to support voice directions [13]. Another example of an agent pointing in MR was presented by Anabuki et al. [1]. They created Welbo, a human-like robot agent that helps users in an MR living room. In the living room, users can interact with objects and simulate virtual furniture in the physical space. Welbo has the ability to have conversations with users and react to their instructions by moving furniture and guiding users with pointing gestures. These examples show the promise that ECAs have for pointing in MR spaces. However, additional research is needed to examine if pointing in MR space improves the interaction with ECAs. In our user study, we investigate the interaction with an ECA pointing from within virtuality to reality in an assembly scenario.
54
+
55
+ A novel approach to guide attention towards distant objects by using gaze was presented by Otsuki et al. [27]. To support remote collaborative tasks, they created "ThirdEye", a hemispherical display that shows the tracked eye movement of remote participants. In a user study, Otsuki et al. [27] showed that ThirdEye can lead the observer's attention to objects faster compared to only showing the image of the remote participant's face. The results underline the importance of additional gaze cues for leading attention in remote collaborative tasks. Following this result, we include head orientation as a gaze cue in addition to pointing cues.
56
+
57
+ All these examples show how ECAs, much like humans, are able to use gestures and gaze to enable more natural interactions and help in completing tasks. With respect to user testing, previous work already showed good pointing accuracy of ECAs in spherical FTVR displays. In reality, people do not rely on pointing gestures exclusively [5]. Thus, we evaluated the interaction experience of an ECA with pointing gestures combined with voice in a real-world assembly scenario.
58
+
59
+ § 2.4 ASSEMBLY INSTRUCTIONS
60
+
61
+ The most commonly used method for assembly instructions is the traditional paper manual. While a paper manual can show explanations in combination with pictures of the model status for each step, it does not help in identifying similar pieces or viewing the steps in 3D or from different viewpoints. Previous work has already presented different approaches for improving assembly instructions through technology. Two AR approaches were presented by Blattgerste et al. [7]. The first approach is to display the 2D images of the paper instructions in the user's field of view. The second approach uses in-situ instructions to overlay a marker for piece identification as well as a virtual model of the piece at the correct assembly position using AR glasses or smartphones. Their study results suggest a combination of in-situ feedback for picking the correct piece and pictorial feedback for assembly. Instead of using in-situ feedback for piece identification, our ECA points to the pieces. Based on the shown importance of pictorial feedback for assembly, we include a 3D model as an assembly aid.
62
+
63
+ To enable more helpful visual instructions for assembly, Yamaguchi et al. [36] presented a novel approach for generating and visualizing 3D AR tutorials with viewpoint control at runtime. The instructions are shown in an AR "magic mirror" display, which aligns the user's viewpoint of the physical object with the virtual 3D instructions. While the results of their user study did not show significant differences in task completion time and number of errors compared to traditional video tutorials, the AR mirror system led to significantly less mental effort. Subjective results also demonstrated the advantages of the system.
64
+
65
+ Another possibility to guide people through an assembly process is by using an agent. As described above, an example is Max, an ECA using pointing and other gestures to indicate virtual pieces and collaborate with users in the assembly of a virtual model in a CAVE [22]. In contrast to Max, our ECA points from within a FTVR display to real pieces with the goal of guiding users through a real-world assembly process. We use an assembly task to investigate our ECA with pointing gestures, since assembly tasks require piece identification, for which pointing is especially helpful. We focus on using pointing gestures for piece identification, but provide a virtual 3D model in front of the avatar as an assembly aid, since pictorial feedback was shown to be most helpful for assembly [7].
66
+
67
+ § 3 DESIGN FACTORS
68
+
69
+ This section provides a description of the key aspects of our ECA design and implementation, including display form, appearance, speech, gestures and virtual model.
70
+
71
+ § 3.1 DISPLAY FORM
72
+
73
+ We chose to use a spherical FTVR display for our ECA. Since FTVR displays are situated in MR space, in contrast to immersive Virtual Reality (VR) displays, they can enable pointing from within the display to real objects surrounding it. FTVR displays, introduced by Ware et al. [33], have been shown to increase the perceived three-dimensionality of virtual objects. Motion parallax and stereoscopic cues are essential for interpreting pointing gestures, and therefore FTVR displays, which provide these cues and create spatial 3D effects by rendering perspective-corrected vision, are particularly suitable for pointing [20]. Spherical FTVR displays improve depth and size perception compared to flat FTVR displays and are hence a more suitable form of FTVR for interpreting pointing targets [38]. Previous research already compared spherical FTVR displays to flat FTVR displays to illustrate the improved performance. Thus, we compare the spherical FTVR display to a traditional flat 2D screen, as used in current state-of-the-art home assistants.
74
+
75
+ Table 1: Example voice instruction for both the showing pieces and pointing ECA for both steps, indicating a piece and explaining the assembly.
76
+
77
+                    Showing Pieces                                                      Pointing
+ Indicate a Piece   "Take this blue screw"                                              "Take that blue screw"
+ Explain Assembly   "Use it to attach the 2 black connectors to the left yellow tube"   "Use it to attach the 2 black connectors to that yellow tube"
88
+
89
+ § 3.2 ECA APPEARANCE
90
+
91
+ Human-like representations of ECAs are subject to the uncanny valley effect, which occurs when ECAs mimic human features in too much detail without fully succeeding, so that they appear unnatural, with an even bigger effect when movement is added [25]. Therefore, we decided to use a female Japanese cartoon character with human-like traits while keeping a non-human appearance, as suggested by Schneider et al. [28]. The ECA we used has non-human proportions with big eyes and a small nose and mouth. Considering the limited display size and the fact that our assembly task only requires seeing the upper body, we followed the suggestion of Yoon et al. [37] to use a half-body avatar. We scaled our upper-body ECA as big as possible to improve gesture perception while still allowing the arm to be extended completely for pointing in both displays. This is in accordance with the use case of ECAs since, even though prototypes for life-size displays exist in research [20], in practice display sizes of home assistants are relatively small.
92
+
93
+ We implemented an idle animation state that is played in an infinite loop and consists of subtle arm and upper body movements, to make the ECA appear more active and alive [18]. To make our ECA feel more vivid, we added a blinking animation with random blinks at intervals of 3-5 s, following the findings of Takashima et al. [29].
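+
+ As an illustration, a minimal Unity C# sketch of such a blinking behavior could look as follows (the blend-shape index and timing constants are assumptions for illustration, not the exact values of the implementation):
+
+ ```csharp
+ using System.Collections;
+ using UnityEngine;
+
+ // Sketch: random blinking at 3-5 s intervals by driving an "eyes closed"
+ // blend shape on the avatar's face mesh.
+ public class RandomBlink : MonoBehaviour
+ {
+     public SkinnedMeshRenderer face;
+     public int blinkShapeIndex = 0;     // assumed index of the eyelid blend shape
+     public float blinkDuration = 0.15f; // assumed duration of one blink
+
+     void Start() { StartCoroutine(BlinkLoop()); }
+
+     IEnumerator BlinkLoop()
+     {
+         while (true)
+         {
+             yield return new WaitForSeconds(Random.Range(3f, 5f));
+             // Close and reopen the eyes over blinkDuration.
+             for (float t = 0f; t < blinkDuration; t += Time.deltaTime)
+             {
+                 float w = Mathf.PingPong(2f * t / blinkDuration, 1f) * 100f;
+                 face.SetBlendShapeWeight(blinkShapeIndex, w);
+                 yield return null;
+             }
+             face.SetBlendShapeWeight(blinkShapeIndex, 0f);
+         }
+     }
+ }
+ ```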
94
+
95
+ § 3.3 SPEECH
96
+
97
+ For the ECA's speech we used IBM's Watson Text-to-Speech (TTS) to generate verbal instructions for each assembly step from written text. We used the Oculus LipSync asset to match lip movement with spoken utterances. The asset uses blend shapes included in the avatar model to animate lip movement accordingly.
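+
+ For illustration, a request to the Watson TTS HTTP endpoint from Unity could look roughly like the sketch below. The service URL, API key, and voice are placeholders, and the actual implementation may differ (e.g., by using the Watson Unity SDK instead of raw HTTP):
+
+ ```csharp
+ using System.Collections;
+ using System.Text;
+ using UnityEngine;
+ using UnityEngine.Networking;
+
+ // Sketch: synthesize one spoken instruction via the Watson TTS HTTP API and
+ // play it on an AudioSource; Oculus LipSync then animates the avatar's
+ // blend shapes from the playing audio.
+ public class InstructionSpeaker : MonoBehaviour
+ {
+     public AudioSource audioSource;
+     public string serviceUrl = "https://<your-instance>/v1/synthesize?voice=en-US_AllisonV3Voice"; // placeholder
+     public string apiKey = "<apikey>"; // placeholder
+
+     public IEnumerator Speak(string text)
+     {
+         byte[] body = Encoding.UTF8.GetBytes("{\"text\": \"" + text + "\"}");
+         using (var request = new UnityWebRequest(serviceUrl, UnityWebRequest.kHttpVerbPOST))
+         {
+             request.uploadHandler = new UploadHandlerRaw(body);
+             request.downloadHandler = new DownloadHandlerAudioClip(serviceUrl, AudioType.WAV);
+             request.SetRequestHeader("Content-Type", "application/json");
+             request.SetRequestHeader("Accept", "audio/wav");
+             request.SetRequestHeader("Authorization", "Basic " + System.Convert.ToBase64String(
+                 Encoding.UTF8.GetBytes("apikey:" + apiKey)));
+             yield return request.SendWebRequest();
+             if (request.result == UnityWebRequest.Result.Success) // Unity 2020.2+ API
+             {
+                 audioSource.clip = DownloadHandlerAudioClip.GetContent(request);
+                 audioSource.Play();
+             }
+         }
+     }
+ }
+ ```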
98
+
99
+ In every assembly step, the ECA first indicates which piece is needed, followed by a description of where the piece has to be attached. In the first step, the ECA broadly describes a piece, accompanied by either a pointing gesture or showing a virtual piece by holding it up. In the second step, the ECA either gives broad verbal cues while pointing to the target position or gives more detailed voice instructions explaining where to attach the piece. The voice-only version describes where the parts need to go in more detail than the voice in the pointing version, to help the user complete the task with a similar level of aid. This follows the substitution hypothesis of Bangerter et al. [4]. Voice instructions for an example assembly step for both the pointing and showing-pieces ECA are shown in Table 1.
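+
+ In code, each scripted step can then simply carry both utterance variants plus the data the active gesture needs. A hypothetical sketch (field names are ours, not necessarily those of the implementation):
+
+ ```csharp
+ using UnityEngine;
+
+ // Sketch: one scripted assembly step for both conditions. The indication
+ // utterance differs per condition ("this" + holding up vs. "that" + pointing).
+ public enum GestureMode { Pointing, ShowingPiece }
+
+ [System.Serializable]
+ public class AssemblyStep
+ {
+     [TextArea] public string indicateShowing;   // e.g. "Take this blue screw"
+     [TextArea] public string indicatePointing;  // e.g. "Take that blue screw"
+     [TextArea] public string explainAssembly;   // where to attach the piece
+     public Vector3 pointingTarget;              // real-world piece position (pointing condition)
+     public GameObject virtualPiece;             // piece prefab to hold up (showing condition)
+     public GameObject modelState;               // model shown after the indication step
+ }
+ ```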
100
+
101
+ § 3.4 GESTURES
102
+
103
+ To examine an assembly assistant pointing into the real world, two different gestures, pointing and showing pieces by holding them up, were implemented. Both gestures accompany a voice instruction and substitute a spatial location expression of a piece.
104
+
105
+ <graphics>
106
+
107
+ Figure 2: ECA showing a virtual piece (left) and pointing (right) in the spherical FTVR display. The virtual model state is displayed in front of the ECA.
108
+
109
+ § POINTING
110
+
111
+ Previous research suggests that hand gestures combined with head rotation provide the highest accuracy and naturalness compared to hand-only or head-only cues, especially for fine pointing [34]. Thus, we implemented hand as well as head animations to facilitate distinguishing between close pointing targets. Humans point by aligning their fingertip with the gaze of their dominant eye, while the observer interprets the pointing gesture by referring to the pointer's arm vector [5]. This might lead to ambiguity because the target the observer infers from the arm vector differs from the target the pointer actually intends along the eye-fingertip line. Wu et al. [35] showed that arm vector pointing for virtual avatars provides comparable and in some cases better accuracy than the pointing of a real person. Therefore, our ECA uses arm vector pointing by outstretching the arm and index finger as well as rotating the head towards the target, without eye-fingertip alignment (see Figure 2).
112
+
113
+ We implemented the pointing animation in Unity3D using inverse kinematics (IK) to enable the ECA to adapt the pointing animation to variable pointing targets at runtime. This allows a natural-looking arm-raise animation with a variable end position in which the ECA's arm is outstretched, by building a vector from the shoulder through the index finger towards the distant target. Instead of using object recognition, we decided to run a Wizard of Oz experiment to avoid recognition errors. Eye movement was not included, since testing revealed no recognizable difference due to the big cartoon-style eyes, which always appeared to face the target once the head was rotated towards it.
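+
+ The core of this can be sketched with Unity's built-in humanoid IK pass as follows (a minimal sketch; the component and field names are hypothetical, and the arm length is an assumed constant rather than a value from the implementation):
+
+ ```csharp
+ using UnityEngine;
+
+ // Sketch: humanoid IK drives the right arm along the shoulder-to-target
+ // vector and rotates the head toward the same target. Requires "IK Pass"
+ // enabled on the Animator layer.
+ [RequireComponent(typeof(Animator))]
+ public class ArmVectorPointing : MonoBehaviour
+ {
+     public Transform pointingTarget;          // set at runtime (Wizard of Oz operator)
+     [Range(0f, 1f)] public float weight = 1f; // blended in/out during the arm raise
+
+     private Animator animator;
+
+     void Awake() { animator = GetComponent<Animator>(); }
+
+     void OnAnimatorIK(int layerIndex)
+     {
+         if (pointingTarget == null) return;
+
+         // Place the hand on the shoulder-to-target ray so the outstretched
+         // arm vector points at the distant real-world target.
+         Vector3 shoulder = animator.GetBoneTransform(HumanBodyBones.RightUpperArm).position;
+         Vector3 dir = (pointingTarget.position - shoulder).normalized;
+         const float armLength = 0.55f; // assumed arm length of the avatar in meters
+         animator.SetIKPosition(AvatarIKGoal.RightHand, shoulder + dir * armLength);
+         animator.SetIKPositionWeight(AvatarIKGoal.RightHand, weight);
+
+         // Head rotation toward the target as an additional pointing cue.
+         animator.SetLookAtPosition(pointingTarget.position);
+         animator.SetLookAtWeight(weight, 0.1f, 0.9f);
+     }
+ }
+ ```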
114
+
115
+ § SHOWING PIECES
116
+
117
+ As a comparison to the pointing ECA, we also implemented a showing animation, where the ECA holds up virtual pieces instead of pointing to physical pieces in the real world (see Figure 2). The virtual pieces were created by measuring the physical Brio Builder pieces and modeling a virtual representation of them in Blender. The main animation was created using a video of a person holding a piece up as a reference and adding keyframes to reconstruct the motion for the avatar.
118
+
119
+ § 3.5 VIRTUAL MODEL
120
+
121
+ In front of the ECA, we displayed the model state after each assembly step on a small table floating in front of the avatar (see Figure 2). In a small pilot trial, we first tested the system without an additional visual representation of the model. The trial showed that it is very difficult to complete an assembly task while relying on voice instructions and gestures only, especially because humans are used to relying on visual aids, like paper manuals, for assembly tasks. Thus, we decided to provide a virtual representation of the model state, allowing participants to verify that they picked the right piece, as well as giving an additional visual aid for the assembly. In order to prevent participants from picking a piece based on the virtual model instead of the pointing or showing cue, which would have a confounding influence on the study results, we displayed the model state only after the piece indication step, while the ECA explains the assembly (see Table 1).
122
+
123
+ Table 2: Overview of the five conditions of the user study.
124
+
125
+                       Showing Pieces   Pointing
+ Flat (2D)             X                X
+ Spherical FTVR (3D)   X                X
+ Paper Manual          -                -
139
+
140
+ § 4 EXPERIMENT
141
+
142
+ The goal of our experiment is to investigate the effect of our ECA with pointing gestures in an assembly scenario. We compare our pointing ECA in a spherical FTVR display to the same ECA in a traditional flat display. To provide a fairer comparison for the flat 2D display we decided to include a condition that is more optimized for the flat display: an ECA holding up virtual pieces in front of its body. With a paper manual as baseline, we measured assembly task completion time, errors and the interaction experience. The five conditions are shown in Table 2.
143
+
144
+ § 4.1 PARTICIPANTS
145
+
146
+ Fifteen paid participants (7 male and 8 female) aged between 18 and 45 were recruited from a local university and compensated with $10. All participants had normal or corrected-to-normal vision. None of them had used Brio Builder construction sets before.
147
+
148
+ § 4.2 APPARATUS
149
+
150
+ We used a 30 cm diameter spherical FTVR and a flat display to conduct the experiment. To create a 360° image, four Optoma GT750ST stereo projectors with a 1024 × 768 pixel resolution and a frame rate of 120 Hz rear-project onto the spherical surface, making a total NVIDIA Mosaic resolution of 4096 × 768 at 34.58 ppi [16]. A computer equipped with an NVIDIA Quadro K5200 graphics card runs the Unity application and sends the rendering content to all four projectors. We adopted an automated camera-based multi-projector calibration technique [39] to enable a seamless image with 1-2 millimeter accuracy. NVIDIA Mosaic synchronizes all screens in resolution and frame rate for stereo rendering and enables synchronization of XPand RF shutter glasses to generate stereo images at 60 Hz for each eye. The total latency lies between 10-20 ms [16]. The OptiTrack optical tracking system was used for head tracking by attaching passive markers to the shutter glasses. To adapt the viewpoint to each participant, we used a pattern-based viewpoint calibration [31] with an average error of less than 1°. The spherical FTVR provides depth cues such as stereoscopic cues and motion parallax.
151
+
152
+ For the flat display condition, we also used an Optoma GT750ST projector with the same 1024 × 768 pixel resolution and 120 Hz frame rate to rear-project onto a flat screen, to minimize differences between the flat and spherical display. The flat display's physical screen size is 36 cm × 27 cm, which results in a similar screen area as the spherical screen with its 30 cm diameter. In contrast to the spherical screen, the flat screen does not provide motion parallax, stereo rendering, or perspective-corrected images.
153
+
154
+ <graphics>
155
+
156
+ Figure 3: Top view of the table used in the experiment where the pieces were laid out. The free space was used for the assembly.
157
+
158
+ <graphics>
159
+
160
+ Figure 4: Extract of the paper manual used in the study, showing four assembly steps.
161
+
162
+ The physical Brio Builder pieces were laid out on a table within a marked area of 83 × 76 cm, as close to the ECA as possible, since the detection accuracy of pointing gestures decreases with distance [23]. All pieces were laid out in the same layout for all conditions and participants to minimize differences. The same table was used for all conditions. The study setup with all pieces laid out on the table is shown in Figure 3. In front of the pieces, there was free space where participants assembled the model. Both the spherical and the flat display were placed so that the perceived size and distance of the avatar were similar.
163
+
164
+ We developed a Unity3D application for the experiment to animate and render the ECA and record task completion time. Our ECA was based on [2], and the virtual Brio Builder pieces used in the application were modeled using Blender. For the paper manual, we used the same models as shown virtually in the ECA conditions. The paper manual was color printed single-sided on large (11x17") paper (see Figure 4).
165
+
166
+ § 4.3 DESIGN
167
+
168
+ The experiment was conducted using a 2 × 2 within-subjects factorial design with a baseline paper manual condition:
169
+
170
+ * C1 Display Form: spherical FTVR display (3D) or flat display (2D).
171
+
172
+ * C2 Gesture: pointing (P) or holding a piece up (H).
173
+
174
+ For every condition, we used a different model, resulting in 5 models used throughout the experiment, each consisting of 30 pieces (see Figure 5). The combination of display form/gesture and model, as well as the sequence of conditions, was fully counterbalanced using Latin squares. For quantitative analysis, we measured task completion time and errors. We collected subjective data about the interaction experience through a questionnaire. Furthermore, we measured the perceived workload using the raw NASA-TLX [17].
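+
+ A Latin square ordering of this kind can be generated with the standard construction sketched below (illustrative only; for odd condition counts such as our five, the mirrored orders are appended to also balance first-order carryover effects):
+
+ ```csharp
+ using System;
+ using System.Collections.Generic;
+
+ // Sketch: standard balanced Latin square construction for counterbalancing
+ // condition order. Row p is the order [p, p+1, p-1, p+2, p-2, ...] mod n.
+ public static class LatinSquare
+ {
+     public static List<int[]> Balanced(int n)
+     {
+         var rows = new List<int[]>();
+         for (int p = 0; p < n; p++)
+         {
+             var row = new int[n];
+             for (int i = 0; i < n; i++)
+                 row[i] = (i % 2 == 1) ? (p + (i + 1) / 2) % n
+                                       : (p + n - i / 2) % n;
+             rows.Add(row);
+         }
+         if (n % 2 == 1) // odd n: append the reversed rows for carryover balance
+         {
+             int count = rows.Count;
+             for (int p = 0; p < count; p++)
+             {
+                 var rev = (int[])rows[p].Clone();
+                 Array.Reverse(rev);
+                 rows.Add(rev);
+             }
+         }
+         return rows;
+     }
+ }
+ ```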
175
+
176
+ § 4.4 PROCEDURE
177
+
178
+ First, we asked participants to sign a consent form and fill in a demographic questionnaire. We then explained the procedure of the study and guided them through a viewpoint calibration. Each participant performed every condition once: two different display form factors combined with two different gesture types and a paper manual as a baseline, resulting in five assembly rounds per participant. Participants were asked to stand in front of the table with the laid-out pieces. In the paper manual condition, participants were instructed to follow the assembly steps shown in the images. They were allowed to navigate through the manual at their own pace and, if needed, jump back to previous pages, as they would naturally use a paper manual by themselves.
179
+
180
+ <graphics>
181
+
182
+ Figure 5: Photos of the five physical models that were assembled in the study.
183
+
184
+ In the assembly assistant conditions, participants were instructed to follow the instructions given by the ECA. Participants were instructed to always pick a piece after the indication step and were allowed to change the piece in the next step if they later noticed that they had picked a wrong one. They were also allowed to move freely around the table during the assembly process. Each assembly step started with the ECA showing a piece or pointing at a piece required for the following assembly step, accompanied by a verbal cue. Once participants decided on a piece, the avatar either only explained the next step or explained it and pointed at the assembly position. At the same time, the model state was shown in front of the avatar as seen in Figure 2. Once the ECA received a verbal response, the next assembly step started. It took about 5-10 minutes to complete one model assembly.
185
+
186
+ At the end of each assembly round, we presented twelve five-level Likert scale questions to participants and asked them to rate each in the range between "strongly disagree" and "strongly agree". The questions addressed character behavior, presence, and perception as well as general questions about the experience. After the paper manual round, participants were only asked to answer the four general experience questions.
187
+
188
+ Once participants completed the entire experiment, they filled out an overall questionnaire. They were asked to rate and explain which display form they prefer for both the showing pieces condition and the pointing condition. Additionally, they were asked to rank the instruction modes: paper manual, showing pieces and pointing and specify reasons for their preference. The entire experiment took about 60 minutes.
189
+
190
+ § 4.5 RESULTS
191
+
192
+ In the following section, we describe the findings of our user study regarding workload, assembly completion time, errors, and user experience.
193
+
194
+ § 4.5.1 WORKLOAD
195
+
196
+ First, we analyzed the raw TLX score over the different rounds to determine if potential workload or fatigue effects had to be considered in the further analysis. The mean raw TLX score was M = 27.0 (SD = 14.5) after the first, M = 33.9 (SD = 18.9) after the second, M = 25.9 (SD = 14.0) after the third, M = 26.3 (SD = 12.1) after the fourth, and M = 24.4 (SD = 13.5) after the last assembly round. A RM-ANOVA was conducted to reveal whether the order significantly influenced the workload. The analysis did not reveal a significant effect of assembly round on workload (F(4, 56) = 1.848, p = .133). Therefore, we assume that effects on the assembly performance caused by workload or fatigue are negligible.
197
+
198
+ <graphics>
199
+
200
+ Figure 6: Recorded piece identification errors for all four ECA conditions with medians and 95% CIs. Significant values are reported in brackets for p < .05 (*).
201
+
202
+ We also analyzed all sub-categories of the raw TLX using a RM-ANOVA. The only category for which a significant difference between conditions was found is frustration (F(4, 56) = 4.054, p < .01). A two-tailed t-test revealed that the ECA pointing in the flat display (M = 42.0, SD = 26.9) led to a significantly higher frustration rating than the ECA that was holding up pieces in the spherical display (t = -2.598, p < .05), as well as the paper manual (t = 2.327, p < .05). There were no significant differences across the remaining conditions.
203
+
204
+ § 4.5.2 TIME
205
+
206
+ We measured task completion time for every condition. We performed a RM-ANOVA and found that instruction mode had no significant effect on assembly time (F(4, 56) = 0.816, p = 0.521).
207
+
208
+ § 4.5.3 PIECE IDENTIFICATION ERRORS
209
+
210
+ During the assembly process, errors were recorded and categorized into piece identification errors (finding the right piece) and assembly errors. The piece identification error includes wrongly picked pieces after the ECA referred to them by showing a piece or pointing, including pieces that were corrected in the next assembly step. Since participants were able to see the model state right away in the paper manual condition and there was no separate piece identification step, the paper manual is not included in the piece identification statistics.
211
+
212
+ Results of the RM-ANOVA show a significant difference between conditions (F(3, 38) = 4.174, p < .05). A two-tailed t-test revealed that the piece identification error was significantly lower (t = -3.057, p < .01) when the ECA was holding up pieces in the spherical display (M = 1.6, SD = 1.3) compared to when the ECA was pointing in the flat display (M = 3.6, SD = 2.1). There was no significant difference across the remaining conditions.
213
+
214
+ § 4.5.4 ASSEMBLY ERRORS
215
+
216
+ Assembly errors were calculated by counting each incorrectly chosen and uncorrected piece as well as wrongly attached pieces (e.g., pieces attached to a wrong hole or incorrectly rotated). The RM-ANOVA did not show a significant difference in assembly errors between conditions (F(4, 52) = 0.640, p = .636).
217
+
218
+ § 4.5.5 TIME AND PIECE IDENTIFICATION ERROR CORRELATION
219
+
220
+ A Pearson correlation coefficient test was conducted and found a moderate positive correlation between assembly completion time and the number of incorrectly identified pieces (r(54) = .523, p < .001). A visualization of the correlation can be found in Figure 7. As there was no separate identification step in the paper manual condition, target identification errors were only analyzed for the ECA conditions, and therefore paper manual times are not included in the correlation analysis.
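+
+ For reference, the reported coefficient is the standard sample Pearson r; a minimal sketch of its computation follows (the actual analysis would normally be run in a statistics package):
+
+ ```csharp
+ using System;
+
+ // Sketch: sample Pearson correlation coefficient,
+ // r = Σ((x-x̄)(y-ȳ)) / sqrt(Σ(x-x̄)² · Σ(y-ȳ)²)
+ public static class Stats
+ {
+     public static double PearsonR(double[] x, double[] y)
+     {
+         if (x.Length != y.Length || x.Length < 2)
+             throw new ArgumentException("x and y must have the same length >= 2");
+         double mx = 0, my = 0;
+         for (int i = 0; i < x.Length; i++) { mx += x[i]; my += y[i]; }
+         mx /= x.Length; my /= y.Length;
+         double sxy = 0, sxx = 0, syy = 0;
+         for (int i = 0; i < x.Length; i++)
+         {
+             double dx = x[i] - mx, dy = y[i] - my;
+             sxy += dx * dy; sxx += dx * dx; syy += dy * dy;
+         }
+         return sxy / Math.Sqrt(sxx * syy);
+     }
+ }
+ ```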
221
+
222
+ <graphics>
223
+
224
+ Figure 7: Correlation between number of incorrectly chosen pieces in each ECA assembly round and assembly task completion time in seconds.
225
+
226
+                                           Flat (2D)              Spherical (3D)
+ Statements                    PM          H          P           H          P
+ Felt like ECA was present     -           2.4 (1.2)  2.1 (0.9)   3.0 (1.2)  2.8 (0.8)
+ Correct piece identification  -           3.5 (1.2)  1.8 (0.9)   3.9 (1.2)  2.2 (0.9)
+ Enjoyed display form          -           3.3 (0.9)  2.9 (1.0)   4.2 (0.8)  3.8 (0.9)
+ ECA / manual was helpful      4.2 (0.8)   3.9 (0.9)  3.0 (0.9)   4.1 (0.8)  4.0 (0.7)
+ Easy to follow steps          3.7 (1.1)   4.1 (0.6)  2.1 (1.1)   4.4 (0.9)  2.7 (0.9)
+ Liked gesture / manual        3.9 (1.1)   4.1 (0.7)  2.8 (1.1)   4.3 (1.1)  3.2 (0.8)
252
+
253
+ Table 3: Mean and standard deviation of significant questionnaire responses for all five conditions: paper manual (PM), holding pieces up (H) and pointing (P) for the spherical and flat display. Higher scores indicate stronger agreement ranging from 1 (strongly disagree) to 5 (strongly agree).
254
+
255
+ § 4.5.6 SUBJECTIVE RATINGS
256
+
257
+ A Friedman ranked sum test was performed on all twelve five-level Likert scale questions. Participants rated each in the range between 1 ("strongly disagree") and 5 ("strongly agree"). The first eight questions which addressed character behavior, character presence and perception were only asked after the four ECA conditions. For all significant statements, mean and standard deviation values are shown in Table 3.
258
+
259
+ Realism of Gestures The Friedman ranked sum test did not reveal a significant difference between conditions for realism of gestures (χ²(3) = 3.44, p = .329), speech (χ²(3) = 5.64, p = .130), or fidelity (χ²(3) = 3.660, p = .301). There was also no difference between conditions for the statements that "gestures made ECA seem more realistic" (χ²(3) = 3.74, p = .291) and that "gestures strengthen the connection" between the ECA and themselves (χ²(3) = 5.640, p = .130).
260
+
261
+ ECA Presence The statement "I felt like ECA was present in the real world" was rated significantly differently between conditions, as shown by the Friedman ranked sum test (χ²(3) = 8.060, p < .05). Post-hoc analysis with Wilcoxon signed-rank tests for multiple comparisons resulted in a significantly higher presence rating for both spherical conditions, H (z = 2.223, p < .05) and P (z = 1.988, p < .05), compared to 2D-P. No significant differences were found for the remaining pairs.
262
+
263
+ Target Identification Confidence Level A Friedman ranked sum test revealed a significant difference across conditions regarding the confidence level for target identification (χ²(3) = 21.460, p < .001). Post-hoc analysis with Wilcoxon signed-rank tests shows a significant effect between 3D-H and 3D-P (z = 3.076, p < .01) or 2D-P (z = 3.180, p < .01). It also revealed a significant effect between 2D-H and 3D-P (z = 2.667, p < .01) or 2D-P (z = 3.040, p < .01). No significant difference was found for the remaining pairs.
264
+
265
+ Assembly Confidence Level The participants' confidence level of correct assembly did not show a significant difference between conditions (χ²(4) = 7.747, p = .101).
266
+
267
+ Enjoyment of Display Form The enjoyment was rated significantly differently, as revealed by a Friedman ranked sum test (χ²(3) = 11.340, p < .05). Post-hoc analysis with Wilcoxon signed-rank tests indicated higher enjoyment of 3D-H compared to 2D-H (z = 2.934, p < .01) and 2D-P (z = 2.497, p < .05). There was no significant effect between the remaining conditions.
268
+
269
+ Helpfulness The Friedman ranked sum test revealed a significant difference for helpfulness (χ²(4) = 11.800, p < .05). Results of the post-hoc analysis with Wilcoxon signed-rank tests revealed that 2D-P was rated significantly less helpful than the other four instruction modes: 3D-H (z = 2.548, p < .05), 3D-P (z = 2.623, p < .01), 2D-H (z = 2.578, p = .01), and PM (z = 2.785, p = .01). Results showed no significant differences between the remaining conditions.
270
+
271
+ Easy to Follow For the statement "It was easy to follow the assembly steps", the Friedman ranked sum test revealed a significant difference (χ²(4) = 30.160, p < .001). Post-hoc Wilcoxon signed-rank tests indicated that it was significantly more difficult for 3D-P than for 3D-H (z = 3.076, p < .01), 2D-H (z = 2.934, p < .01), and PM (z = 2.192, p < .05). It was also significantly harder to follow the instructions for 2D-P compared to 3D-H (z = 3.408, p < .001), 2D-H (z = 3.296, p < .001), and PM (z = 2.803, p < .01). No significant difference was found across the remaining conditions.
272
+
273
+ General Preference For participants' preference, the Friedman ranked sum test revealed a significant difference (χ²(4) = 20.680, p < .001). Post-hoc Wilcoxon signed-rank tests showed that 3D-H was liked significantly more than 3D-P (z = 2.934, p < .01) and 2D-P (z = 3.060, p < .01). 2D-H was also liked significantly more than 2D-P (z = 2.934, p < .01). Moreover, PM was liked significantly more than 2D-P (z = 2.079, p < .05). There were no significant differences between the remaining conditions.
274
+
275
+ § 4.5.7 OVERALL RATINGS
276
+
277
+ For the holding-pieces-up conditions, most participants preferred the spherical FTVR display (73.3%) over the flat 2D display; 26.7% of participants preferred the 2D display. For pointing, the spherical FTVR display was preferred by 40.0%, while 6.67% preferred 2D. 53.3% of participants indicated that there was no difference between displays. The display rating results are shown in Figure 8.
278
+
279
+ In the overall rating of instruction modes, the ECA that was holding pieces up was ranked first by 53.3% of participants and second by 46.67% of participants. The paper manual was ranked first by 40.0%, second by 40.0%, and third by 20.0% of participants. 6.67% of participants ranked the pointing ECA first, 13.3% second, and 80.0% third.
280
+
281
+ <graphics>
282
+
283
+ Figure 8: Overall rating of the display forms flat and spherical FTVR for the conditions pointing and showing pieces.
284
+
285
+ § 5 DISCUSSION
286
+
287
+ In this section, we discuss the three design factors display form, ECA appearance, speech and gesture to provide interpretations of our findings.
288
+
289
+ § 5.1 REFLECTION ON DESIGN FACTORS
290
+
291
+ § 5.1.1 ECA APPEARANCE
292
+
293
+ The subjective feedback regarding the ECA's appearance showed no significant differences between conditions, which is not surprising as the displayed ECA was the same in all four conditions. Nevertheless, the results show how the ECA design was rated overall. Participants rated the realism of the ECA's movement and speech as neutral. Even though we chose a not too human-like avatar to prevent the uncanny valley effect [25], it is particularly difficult to animate movements and implement speech without causing an unnatural appearance. Participants did not express comments regarding the character model and its gestures in the overall questionnaire.
294
+
295
+ § 5.1.2 GESTURES
296
+
297
+ In our study, we compared an ECA that is showing virtual pieces to a pointing ECA. Results show that instruction mode did not significantly affect assembly time, which is composed of listening to a speech instruction, choosing a piece, and attaching the piece. The listening time was similar for each participant, while the duration of choosing and attaching a piece differed. Thus, piece identification errors led to longer task completion times when participants had to replace a wrong piece with the correct one. This is in accordance with the correlation between assembly task completion time and the number of incorrectly chosen pieces (see Figure 7). It was observed that when the ECA was pointing at pieces that were unambiguous to identify, participants were much faster in picking the piece compared to other conditions, as they did not have to search the whole table. But since pointing led to more piece identification errors, the faster piece identification for some pieces did not lead to a shorter assembly completion time for pointing compared to the other conditions.
298
+
299
+ The error data shows a significantly higher piece identification accuracy for showing virtual pieces in the spherical FTVR display than for pointing in the flat display. Although no significant difference was found between the remaining conditions, the results hint that showing the virtual pieces resulted in a lower piece identification error rate than pointing towards them (see Figure 6). It was observed that many participants had difficulties finding the right pointing targets when voice alone was ambiguous and similar pieces were placed side by side. This might be caused by multiple factors. First, the pointing targets were laid out closely together and in multiple lines in front of the ECA. This could have caused worse detection results than in previous studies, where the near-far dimension was not investigated [34]. Second, we observed that participants lost trust in the ECA once they identified a wrong piece and had to correct it. While a high pointing accuracy of 82.6% was shown in a previous study conducted with a similar ECA using arm-vector pointing [34], this accuracy might not be high enough for an assembly scenario. Even though participants identified most of the 30 pieces correctly, they seemed discouraged after they chose a wrong one, which is also reflected in the confidence participants indicated in the questionnaire and in the ranking results: 7 of 15 participants preferred showing virtual pieces, and 6 of 15 participants preferred the paper manual over pointing.
300
+
301
+ § 5.1.3 DISPLAY FORM
302
+
303
+ In the flat display conditions, participants had difficulties interpreting the correct pointing targets of the ECA, which led to a higher target identification error. This might be caused by the lack of depth cues, which helped to detect the correct pointing target in the spherical FTVR display. It was observed that some participants were moving around the display while the ECA was pointing, which seemed to make the piece identification easier (see Figure 9). This is in line with previous research, which showed that left and right areas, where participants see the arm pointing away from them, are more prone to misjudgments, while the front area was less difficult to recognize [34]. When the ECA was, for example, pointing to the right side and participants moved towards the pointing target, they might have been able to identify the target more accurately because the ECA was then pointing towards them. The same difference in behavior also applies to viewing the virtual model, which was visible in 360° view when participants moved around.
304
+
305
+ We were surprised to find that while 11 participants preferred the spherical FTVR display when the ECA was showing pieces, only 6 participants preferred the spherical FTVR display for pointing. Eight participants reported no difference, although target identification errors were lower with the spherical display. Participants who preferred the spherical FTVR display noted that they felt it was more accurate and easier to interpret the pointing targets, while participants who answered with "no difference" had difficulties detecting pointing targets in general and therefore had no preference. This is surprising, as the spherical FTVR display provides more depth cues and led to fewer errors. A possible explanation is that participants who were not confident in identifying pointing targets got discouraged, resulting in feedback like "pointing is inaccurate in general" and the rating "no difference" between display forms, even though the spherical FTVR resulted in fewer errors.
306
+
307
+ There was a large difference between participants in their behavior when interacting with the spherical FTVR display. Some participants moved around the display more and therefore took more advantage of the depth cues and the possibility to get different perspectives of the ECA, the pieces, and the displayed model. Others did not move at all, although all participants received the same instructions in the beginning. Therefore, there was a smaller difference between both displays for participants who stood at the same position during the whole assembly round.
308
+
309
+ § 5.2 COMPARISON TO PAPER MANUAL
310
+
311
+ The recorded assembly errors were below one in all conditions, and therefore the differences were not significant, even though piece identification errors were much larger. This shows that participants recognized incorrect pieces once the model state, including the previously chosen piece, was shown, and corrected them, leading to a correctly assembled model. It was surprising to find no significant difference in assembly error between conditions, as the paper manual and the flat display did not provide a side or back view of the virtual model, making it "hard to see the other side of the model", as noted by participants. Others mentioned that the "virtual 3D model is always more helpful than paper manual" and "paper manual needs detail attention". Nevertheless, participants were able to assemble the model as correctly with the paper manual as with the ECA. A possible reason could be that the paper manual allowed participants to "[easily] go back multiple steps", which also provided different perspectives of the models and gave participants the possibility to see whether they had made a mistake earlier. Most participants were observed navigating back and forth through the manual during the assembly process. Another reason mentioned by participants is that they are more "used to a paper manual" and to "reading visual assembly scenarios". This similarity between paper manual and ECA is also reflected in the statement rankings for helpfulness and preference (see Table 3).
312
+
313
+ < g r a p h i c s >
314
+
315
+ Figure 9: The perspective change when moving around the display might help to identify pointing targets more accurately. Here, the observer was moving from the front position (left) to the right side (right).
316
+
317
+ § 5.3 DESIGN IMPLICATIONS
318
+
319
+ Our study revealed challenges in designing ECAs that point into the real world. While previous research found a high detection accuracy for ECAs pointing into the real world [34], this accuracy might not be high enough in an assembly scenario. We observed that reaching a high target detection accuracy is particularly important to avoid frustration in the assembly process. Through pointing, the ECA was able to guide participants' attention to a broader region, which helped them narrow down the number of possible pieces. However, when multiple similar pointing targets were located close together, participants were not able to identify the correct piece using the pointing cue alone. Thus, we suggest implementing indirect methods for precision tasks, such as displaying a virtual piece. An example of how 3D instructions can improve assembly in combination with viewpoint control was presented by Yamaguchi et al. [36]. This could also be implemented in our spherical FTVR display, which allows for a 3D view and viewpoint control. Since participants generally liked the ECA using pointing gestures, pointing could be implemented in addition to indirect methods.
320
+
321
+ § 6 LIMITATIONS AND FUTURE WORK
322
+
323
+ In the following, we discuss six limitations along with the opportunities they present for future research. First, in our study we used only one construction set, and pieces were arranged in a fixed layout on the table to increase comparability between participants and conditions. Thus, it would be interesting to conduct a similar study using different pieces, such as a real furniture construction set, arranging pieces in a different layout, or using pieces without prior arrangement.
324
+
325
+ Second, we observed that participants behaved very differently when using the spherical FTVR display. While some participants used the additional cues, e.g., by moving around to get an additional perspective of the ECA or the virtual model, others did not move at all. It would therefore be interesting to investigate whether more experience with the pointing ECA would improve the ability to identify correct pointing targets. One possibility would be to include a training phase in which participants are encouraged to move around the display and identify pointing targets with feedback.
326
+
327
+ Third, while participants were able to detect pointing targets accurately when they were placed far enough apart or described in enough detail, it was difficult to distinguish closely located parts given only broad voice descriptions. Since previous research showed that verbal descriptions should be substituted by gesture where possible instead of implementing both redundantly [4], future studies are first needed to quantify the detection accuracy of arm-vector pointing to targets on a horizontal plane and to determine at which distance targets become ambiguous.
328
+
329
+ Fourth, in our study we only compared the ECA's pointing to showing virtual pieces and a paper manual baseline. Results show a low piece identification error for the piece-showing ECA, even though the ECA was only holding the virtual pieces up. This raises the question of whether an ECA generally provides an advantage over a virtual model, especially because some participants noted that they felt pressured when using an ECA for the assembly compared to the paper manual. In contrast to a 2D paper manual, a 3D visualization displayed in a spherical FTVR display could provide depth cues. Future studies could investigate whether an embodied human-like assistant provides an advantage over a 3D visualization of the assembly steps.
330
+
331
+ Fifth, our ECA only explained the assembly steps using voice and gestures. However, in an assembly scenario with a human assistant, people would not only follow the explanations but also ask questions when they are unsure about an assembly step. Thus, a future step would be to implement a feedback mechanism and conduct further research to investigate whether giving feedback would improve the error rate, assembly time, and interaction experience.
332
+
333
+ Last, we only implemented deictic pointing gestures. Additionally, it would be possible to provide multiple gesture types, as they are used in human communication. An example was shown in previous research [22]: the presented ECA used deictic pointing in combination with metaphoric gestures to demonstrate how pieces should be placed, for example by crossing fingers to indicate that pieces have to be attached together at a 90-degree angle. Thus, future studies could investigate whether the implementation of additional gestures enhances the interaction with ECAs.
334
+
335
+ § 7 CONCLUSIONS
336
+
337
+ In this paper, we presented an ECA with the ability to point into the real world to investigate whether spherical FTVR displays affect the interpretation of the ECA's pointing gestures, as well as to examine the effect of ECAs with pointing gestures in an assembly scenario in general. We conducted a study comparing the pointing ECA in the spherical FTVR display to an ECA holding up virtual pieces, as well as to the same ECAs in a flat display, with a paper manual as baseline. Participants assembled different construction toy models while we measured assembly time and errors and assessed user experience with a questionnaire.
338
+
339
+ Our results show that the spherical FTVR display had no significant effect on assembly time or errors, while it was preferred by participants in all ECA conditions and was shown to lead to a higher presence rating. The ECA with pointing gestures could not reduce assembly time or errors compared to the ECA that showed virtual pieces or the paper manual, though it was rated as helpful in the assembly process. Our findings show that pointing is helpful for guiding attention to a broader region but is not suitable for precise locations. For precise piece identification, indirect methods, such as showing the pieces, are more helpful and could be used in combination with direct methods like pointing. These findings can guide the design and development of ECAs that point into the real world, especially for assembly scenarios. Since home assistants are advancing in interaction possibilities, an ECA that provides gestures is expected to offer more natural, human-like interactions and thus blur the boundaries between the virtual and real worlds.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/U8p66V2PeEa/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,533 @@
1
+ # Peek-At-You: An Awareness, Navigation, and View Sharing System for Remote Collaborative Content Creation
2
+
3
+ ![01963e07-504a-7a8f-bdfb-a0f5ea1919fe_0_227_350_1367_516_0.jpg](images/01963e07-504a-7a8f-bdfb-a0f5ea1919fe_0_227_350_1367_516_0.jpg)
4
+
5
+ Figure 1. Overview of our research process.
6
+
7
+ ## Abstract
8
+
9
+ Remote work plays a critical and growing role in modern workplaces. A particular challenge for remote workers is mixed-focus collaboration, which involves frequent switching between individual and group tasks while maintaining awareness of others' activities. Mixed-focus collaboration is important in content creation, as it can benefit from the greater perspective, larger skill set, and reduced bias of a group, but this work is difficult to do remotely because existing systems only provide information about collaborators passively or through cumbersome interactions. In this paper, we present Peek-at-You, a system of collaborative features leveraging integration between collaboration and communication software, including conversational position indicators, speaker's view peeking, and view pushing. Our evaluation shows these features help support awareness, understanding, and working-state transitions. Finally, we discuss adapting the features to manage distractions and support various work artifacts.
10
+
11
+ Keywords: Groupwork, Remote Collaboration, Content Creation
12
+
13
+ Index Terms: Human-centered computing-Collaborative and social computing-Collaborative and social computing theory, concepts and paradigms-Computer supported cooperative work
14
+
15
+ ## 1 INTRODUCTION
16
+
17
+ Distributed teams require successful technology-based collaboration to function effectively [1]. Collaborative work, in which actions are "influenced by the presence of, knowledge of, or the activities of another person" [63:145], is necessary for distributed teams and can be conducted asynchronously or synchronously. We focus on synchronous collaboration, which has become rarer among remote workers due to barriers not solved by existing systems [43]. Recent shifts to remote work due to COVID-19 are associated with a decrease in synchronous communication [86]. Ellis et al.'s time-space matrix refers to this type of collaboration as "same time, different place" [15].
18
+
19
+ Synchronous collaboration comes in various forms, such as parallel work on separate tasks or collective work on one task. However, for remote teams, the more complex mixed-focus collaboration presents challenges [31]. This type of collaboration involves moving back and forth between individual tasks and shared work with other group members while maintaining awareness of their whereabouts and activities [30]. Quick and fluid transitions between individual and shared work are key to successful mixed-focus collaboration [35]. Poor support for transitions can add significant friction to collaboration.
20
+
21
+ Collaborative content creation, an instance of mixed-focus collaboration, may be undertaken for many reasons, e.g., distributing tasks, leveraging different expertise, avoiding bias, and gaining multiple perspectives [70]. It rates highly on the shared task and shared environment dimensions of collaboration [15] as people aim to create a cohesive final artifact. In contrast to collaborative writing, which received attention in early groupware systems (e.g., [15,22,59]), we focus on the broader term "content creation", as formatting and graphical capabilities in modern groupware go beyond simple text and we design for steps other than writing, such as researching and decision making.
22
+
23
+ Ishii et al. described two types of collaborative spaces that tools can create: shared workspaces (spaces that allow sharing information, pointing to specific items, marking, etc.) and interpersonal space (spaces that allow verbal and nonverbal communication, eye contact, etc.) [40]. Synchronous remote collaboration requires multiple tools, such as audio and video calls for interpersonal space and real-time groupware for shared workspaces [68]. However, combining these tools can result in a disparate and unoptimized toolset. For example, screensharing allows everyone to see the same thing but does not allow everyone to work on it, and transitioning between different users' sharing is tedious. Further, awareness widgets (e.g., Mural's presence icons or mini-map; Figure 2 bottom) are passive and may not provide accurate awareness of an ongoing conversation, and transitioning from individual to subgroup work requires searching a list of users or decoding colors to find out what part of an artifact others are working on. Lastly, the combination of signals from the shared workspace and the interpersonal space may lead to conflicting signals or inaccurate perceptions. For example, in video chat a collaborator's face becomes visible when they speak, giving the impression that they share a perspective on the workspace, even if they are seeing different parts of an artifact.
24
+
25
+ To address these challenges, we propose Peek-at-You, a system that integrates communication and collaborative software with elements that respond to users' activities and conversations. Peek-At-You includes Conversation-Based Location Indicators that overlay icons on users' video feeds to show which part of an artifact they are looking at (Figure 1D); a popup over the shared artifact to indicate the current speaker's position (Figure 1C); Speaker's View Peeking, which allows users to preview the active speaker's view without leaving their current location (Figure 1A); and View Pushing (Figure 1E), which streams a user's view as an overlay in the collaborative application, enabling quick transitions between individual and shared work. These features illustrate the potential of integrated systems for remote mixed-focus content creation.
26
+
27
+ This work makes three main contributions: 1) a series of formative observations using existing tools that informed our system's development; 2) Peek-At-You, a system and prototype implementation extending Google Docs; and 3) findings from our evaluation that show how integrated systems can foster the awareness, understanding, and transitions critical to enacting mixed-focus collaboration. We further discuss design iterations to minimize distractions and ways to adapt the system to a variety of artifacts. By expanding the understanding of, and system support for, mixed-focus content creation, our work advances the ability of systems to support synchronous collaboration for remote workers.
28
+
29
+ ## 2 RELATED WORK
30
+
31
+ Our work builds upon three areas: 1) systems for synchronous remote collaboration; 2) tools for supporting fundamentals of remote collaboration; and 3) specific characteristics and requirements of mixed-focus collaboration.
32
+
33
+ ### 2.1 Synchronous remote collaboration
34
+
35
+ Distributed teams "work together on a mutual goal or work assignment, interact from different locations, and therefore communicate and cooperate by means of information and communication technology" [18:459-460]. Remote collaboration plays a growing role [6,52] in many types of work [3,4,19,41].
36
+
37
+ Video-mediated communication (i.e., VMC or video chat) is a useful tool for supporting remote teams, work, and learning [27,41,73]. VMC apps often allow text chat and screensharing [76], but are not aware of interactions in other collaborative apps.
38
+
39
+ Another important tool for remote work is real-time groupware, which lets "distance-separated people work on a shared task in real time" [68:66] (e.g., editing documents, slides, or interface prototypes). Traditionally, real-time groupware has not integrated communication (e.g., [2,38,67]), even relying on speaking over room dividers in studies (e.g., [37,83]). Many modern apps also act as "silos" with little integration (e.g., Slack and Zoom focus on communication while Figma and Microsoft Word focus on content). Increasingly, VMC is being integrated with groupware (e.g., video calls in Google Docs [11] or whiteboards [58] and third-party apps [57] in Microsoft Teams calls), but conversational data such as the current speaker is not used to support collaboration.
40
+
41
+ Some research projects have sought to integrate the shared workspace with the interpersonal space: Ishii et al. did so for pairs by overlaying drawing atop a remote partner's video feed [40], and Grønbæk et al. did so for groups by allowing both spaces to become semi-transparent and overlay each other [26]. However, these systems have limitations, including restrictions on the number of users, the need to manually manage positions and opacities, and interfaces that differ significantly from single-user apps. Peek-At-You uses a logical integration: it surfaces workspace data in the interpersonal space and vice versa, but does not overlay the two; this builds on the familiarity and scalability of traditional groupware and VMC.
42
+
43
+ Collaborative content creation tasks involve working with others to generate content; writing is a popularly studied example (e.g., [45,46,72,47,8]), but other examples include presentations and interface designs. Research suggests remote content generation is associated with less communication, more focus on organization of work, and less focus on feedback or content [8]. One particular aspect that collaborators negotiate is territories, i.e., portions of an artifact that are primarily edited or controlled by one person. These can be either explicit or implicit and vary in duration [46]. Collaborators also negotiate transitions between tools (e.g., a document editor, notepad, and LaTeX editor) depending on their current needs [47].
44
+
45
+ ### 2.2 Views, awareness, and gestures
46
+
47
+ Screensharing is a longstanding paradigm for making applications "collaborative" [20]. Screensharing is asymmetrical: typically, one person shares at a time, decides what is shown, and manipulates the interface. "Remote control" may allow another user to move the cursor, but still does not support multiple real-time collaborators [20]. Screensharing provides WYSIWIS ("what you see is what I see") collaboration, but real-time groupware can go beyond this limitation as people use their own instance of the software. Real-time groupware can be WYSIWIS [17] or relaxed-WYSIWIS: viewports, representations, and formatting can vary per user [21,72]. This increases the independence of users [66]. While tools that spatially integrate interpersonal space and workspace use screensharing and WYSIWIS views to mix content and VMC [26,88], Peek-at-You benefits from the independence and shared control of integrated relaxed-WYSIWIS groupware.
48
+
49
+ In relaxed-WYSIWIS groupware, it can be difficult to maintain workspace awareness, i.e., "the up-to-the-moment understanding of another person's interaction with the shared workspace" [31:417]. Awareness includes multiple elements: who (presence, identity, authorship), what (action, intention, artifact), and where (location, gaze, view, reach) [31]. Gutwin et al. [32] devised several awareness supports, including Radar Views (a scaled-down overview of the workspace), Multiple-WYSIWIS Views (scaled-down mirrors of others' views), WYSIWID Views (a full-size view of the area around another's cursor), and Teleportals (temporary navigation to someone's viewport). Showing another user's screen or the area around their cursor is helpful, but scales poorly to groups because of limited screen real estate [30]. More generally, existing awareness supports have significant drawbacks: because they are not aware of who the user is communicating with, they cannot optimize screen usage or highlight the most relevant information.
50
+
51
+ Supporting awareness involves a tradeoff with distractions. This can relate to usage of screen real-estate, visual feedback of others' work [30], and collaborators interrupting. For best results, care should be taken before, during, and after interruptions, to ensure that it occurs at an opportune time, the interruption is handled completely, and the original task is resumed easily [55]. There are multiple approaches to this, including immediate interruptions, negotiating when an interruption will occur, or having a mediator or schedule for interruptions [54]. Systems can employ these directly or support people in using them.
52
+
53
+ Like awareness, gestures and references are critical elements of collaboration that are difficult to leverage in remote contexts [75]. Gestures allow people to communicate things that are difficult to verbalize, e.g., where an item is located, and occur very frequently during face-to-face collaboration [28,75]. One common type of gesture, deictic referencing, involves pointing to establish what object a person is referring to as they speak [56]. "Pointing" using a remotely displayed cursor (i.e., a telepointer) is common, but with relaxed-WYSIWIS the content being pointed to may be rendered differently or even lie outside the remote user's viewport [21]. References and gestures also require we-awareness ("the socially recursive inferences that let collaborators know that all are mutually aware of each other's awareness" [23:279]). The first step of gesturing is establishing mutual orientation ("that both parties can see the gesture and the target") [84:1378], so systems must allow collaborators to establish a shared view and also be aware of this state. Our system allows people to quickly establish mutual orientation by pushing a view and seeing the current viewers, or by jumping to others' positions and seeing who is in the same area.
54
+
55
+ ### 2.3 Configurations, transitions, and activities in mixed-focus collaboration
56
+
57
+ Broadly, mixed-focus collaboration involves "individual tasks ... and shared work" [30:207]. More specifically, the Coupling typology characterizes work as Light-weight Interactions, Information Sharing, Coordination, Collaboration, or Cooperation [60]. Mixed-focus collaboration occurs at the more tightly coupled levels, which are rarely done remotely [43]. Another way to characterize group work is subgroupings. Informally, these may include parallel (individual), pair/small-group, and group work [81]; formally, subgrouping can be described in more detail [61]. A third way to characterize group work is content focus. For example, one such categorization includes discussion, view engaged, sharing of the same view, same information but different views, same specific problem, same general problem, different problems, and disengaged [39]. These characterizations raise key concepts (coupling, groupings, and shared views) that we use to define important configurations for our system to support.
58
+
59
+ In mixed-focus collaboration, transitions between working configurations are key to success [35]. Transitions facilitate the three typical phases of collaboration: pre-process, in-process, and post-process [10]. Further, transitions facilitate various activities while in-process (e.g., creating content, presenting results, comparing results, and sharing content) [44]. Several research projects seek to support transitions. For classrooms, one allows teachers to plan and make planned or fluid transitions between individual, small-group, and whole-group phases [64]. For in-person collaboration, shape-changing furniture can aid transitions [25] or an extra shared device can aid moves from individual to group work [5]. For remote work between pairs, continuous screen sharing using a second monitor supports transitions [14]. For other remote work, the TeamWave system uses a room metaphor to ease transitions [24]. The Peek-At-You system supports transitions for fully remote groups, with a design intended for a variety of artifacts.
60
+
61
+ ### 2.4 Support for collaborative content creation
62
+
63
+ Creating content collaboratively requires both planning (defining the goals for the content, discussing the resources of each collaborator, defining the forms of collaboration to occur, and allocating tasks) and production (sketching, composing, and reviewing content at an individual and group level) [8]. The collaborators must communicate, coordinate, cooperate, and maintain awareness [60].
64
+
65
+ Commercial and research systems have worked to advance support for these key elements. Video chat supports conversations and awareness [12], which is important for planning (e.g., discussing how to distribute tasks) and production (e.g., reviewing others' individual work through discussion or speaking about how to compose individual work into a cohesive whole). Relaxed-WYSIWIS groupware allows people to do individual sketching or composition work (by taking on their own views) as well as group composing and reviewing work (because the task space is shared) [68]. Within groupware, awareness tools provide support for monitoring and understanding what others in the group are doing [32]. However, research suggests that existing tools still require remote groups to spend a large amount of time organizing their work, limiting their ability to focus on planning and discussing the content itself [8]. Research testing non-traditional spatial interfaces has shown that combining communication and collaborative tools has the potential to further support communication, organization, and awareness [26,40]. To best support collaborative content creation, we consider a non-spatial approach to integrating task and interpersonal space, seeking to support communication, awareness, and group work processes while maintaining the familiar interfaces of productivity and communication tools.
66
+
67
+ Integrating and sharing data between the task and interpersonal spaces may offer many benefits. First, it could reduce the burden of managing windows [62] and help avoid a sense of impoliteness related to multitasking [53]. Second, since collaborators using VMC spend 5-17% of the time looking at the video feeds [78,79], an integrated system could place awareness widgets near video feeds to make them more consistently visible. Third, awareness indicators on people's video feeds could tie information to easily scannable video feeds, rather than a row of circles that must be searched or interacted with to locate others' positions; signals could also be prioritized based on the current speaker. Fourth, integration could enable unique view sharing tools that reduce difficulties with starting and managing shared views [30], for example, privacy-preserving 'push' and 'pull' view sharing, quick transitions to co-editing, highly visible gesture cursors only when needed, and prioritized access to the current speaker's view. Fifth, an integrated approach could support multiple working styles, for example, using fewer awareness supports when a call is not active. Finally, an integrated approach could automatically respect boundaries (e.g., breakout rooms), avoid inconsistent information, and help everyone in a group call establish a shared workspace.
68
+
69
+ ## 3 FORMATIVE OBSERVATIONS
70
+
71
+ ![01963e07-504a-7a8f-bdfb-a0f5ea1919fe_2_935_1164_690_369_0.jpg](images/01963e07-504a-7a8f-bdfb-a0f5ea1919fe_2_935_1164_690_369_0.jpg)
72
+
73
+ Figure 2. A collaboration setup used in our sessions: Zoom and Mural (people and content are for illustration, not from our data).
74
+
75
+ Previous research suggests that tightly coupled remote work is difficult, even with video chat [41]. To build on this understanding in the case of collaborative content creation, we conducted two formative sessions in which groups of five (7 men, 3 women; participants were all office workers employed within a research unit; all were working remotely at the time of the study) collaborated with existing tools (see Figure 2).
76
+
77
+ The sessions used Zoom and two real-time collaborative apps: Microsoft Word online and Mural (a digital whiteboard). The task was to create a business plan based on one of two prompts: "A stall on a tropical beach full of tourists" and "A kiosk in a busy mall". Group members were assigned a role, Product Developer (three people) or Writer (two people), and worked on the following activities: (1) create a name for the business [All Roles], (2) create 10 products, each with a name and image [Product Developers], (3) write a paragraph explaining why people should come to the new business [Writers], and (4) agree on prices for the products [All Roles].
78
+
79
+ Each group completed the task twice, once using Microsoft Word and once using Mural (the order was switched between groups). For each tool, participants received an overview of available collaborative functionality (Microsoft Word: list of editors, jump to others' cursor; Mural: list of editors, shared selections, telecursors, mini-map, jump to or follow others' locations), then collaborated for 12 minutes. The prompt and roles differed for each tool.
80
+
81
+ The collected data included participants' screens, audio and video, and questionnaires after each task (NASA-TLX [36] and questions about who they worked with most, what parts of the task they worked on most, and any issues noticed while collaborating). A final questionnaire asked about preference between Word and Mural. Lastly, a semi-structured interview explored the group's organization, feeling of connectedness, and awareness of others.
82
+
83
+ ### 3.1 Observations
84
+
85
+ We reviewed the recordings and survey data to identify issues participants encountered while collaborating remotely.
86
+
87
+ Audio channel limits small-group work. Our observations suggested that during the middle phase (the role-specific tasks) the product developers tended to occupy the audio channel. Annotating the recorded calls showed that product developers spent a total of 17.73 minutes speaking while writers spent 11.04 minutes speaking (total across both groups and tasks). The cause for this disparity may simply be the larger number of product developers, a reluctance to break into conversation on the part of the smaller subgroup, or the fact that writing work does not facilitate multitasking and discussion. This finding suggests that multiple subgroups may not benefit equally from a shared audio channel.
88
+
89
+ Written content can be more difficult to get feedback on. In one group, a writer asked for others to check over their paragraph, but no one did. In the other group, a writer said they were not happy with their paragraph and others should take a look, but again no one did. In contrast, ideas for products or names, which could be raised verbally, generally received quick feedback from others.
90
+
91
+ Misunderstandings and duplicated work were common and often unnoticed. In several instances, multiple people added the same product or created a heading and area for the same section. In several other cases, recordings showed two people simultaneously searching for images of the same product; this lack of coordination was not revealed until they returned to the workspace to see an image already added. While duplicated work can be desirable in some circumstances (e.g., brainstorming), the duplicated work we observed was silently discarded, not considered as an improvement.
92
+
93
+ Collaboration tools were infrequently used. Recordings revealed that participants did not use jump and follow. While recordings cannot show with certainty whether participants looked at Mural's mini-map, none interacted with it, and several participants were unaware of changes outside their viewport (which the mini-map displays). The infrequent usage may relate to the session length, task requirements not calling for such interactions, or friction when using these tools.
94
+
95
+ ## 4 DESIGN CONSIDERATIONS FOR THE PEEK-AT-YOU SYSTEM
96
+
97
+ In addition to our formative observations, we based our design on five team configurations and four design goals.
98
+
99
+ ### 4.1 Team configurations to support
100
+
101
+ Researchers have developed frameworks for describing mixed-focus collaboration [39,61,81], but not for fully remote synchronous collaboration. Guided by these frameworks and our formative observations, we focus on five key group configurations. To describe these configurations, we use concepts identified in previous work [61,81] that we define as follows: a team is the collection of individuals collaborating in a call; a subgroup is a unit of two or more people collaborating; and a main group is a special subgroup that is maintaining the conversational floor. We introduce the "main group" unit because of poor support for parallel conversations in video calls [82] (e.g., this was observed with the product developer subgroups in our formative observations).
102
+
103
+ Considering previous work [61,81] and the peculiarities of video calls (e.g., breakout rooms, limited parallel conversations), we define five key configurations to support (see Table 1):
104
+
105
+ 1. Individual Work: each person works alone
106
+
107
+ 2. Individual + Subgroups Work: some people work alone, while others work together in pairs or small groups
108
+
109
+ 3. Subgroups Work: all people work in pairs or small groups
110
+
111
+ 4. Splintered Team Work: most people work together in a main group, while a few work individually
112
+
113
+ 5. Team Work: all people are working together in a main group
+
+ To define "subgroup" more precisely than "people collaborating", we considered states identified in co-located work [39] and adapted them to a remote and task-agnostic context by recognizing two key concepts: sharing a view and discussing. These concepts have been used in coding remote [77] and hybrid [61] collaboration, and they help formalize the differences between conversational and visual feedback seen in our formative observations. Therefore, we consider four subgroup states: not existing, discussing, working on the same content, and working on the same content and discussing.
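+
+ To make these states concrete, the following minimal TypeScript sketch models them as plain types. All names are our own illustration of this paper's concepts, not identifiers from any system described here.
+
+ ```ts
+ // Illustrative data model for the team configurations and subgroup states
+ // defined above. Hypothetical names; not from the Peek-At-You codebase.
+
+ type IndividualState = "individual" | "subgroup" | "mainGroup";
+
+ type SubgroupState =
+   | "notExisting"
+   | "discussing"
+   | "sharedContent"
+   | "sharedContentAndDiscussing";
+
+ interface Subgroup {
+   members: string[];     // user ids
+   state: SubgroupState;
+   isMainGroup: boolean;  // holds the conversational floor
+ }
+
+ interface TeamConfiguration {
+   name:
+     | "Individual Work"
+     | "Individual + Subgroups Work"
+     | "Subgroups Work"
+     | "Splintered Team Work"
+     | "Team Work";
+   subgroups: Subgroup[]; // people in no subgroup are working individually
+ }
+ ```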
114
+
115
+ Table 1. Our five team configurations illustrated for a team of eight.
116
+
117
+ (Table body not reproducible: each cell contained an illustration of an eight-person team in the corresponding configuration. Rows listed the five group states: individual work, individual + subgroups work, subgroups work, splintered team work, and team work; columns listed the three individual states: working individually, working with a subgroup, and working with the main group.)
118
+
119
+ The heart of mixed-focus collaboration is fluid transitions between states [35]. Therefore, it is important to consider not only the configurations or states that individuals, subgroups, and teams can take on, but also the variety of transitions that can occur between them (e.g., from Individual Work to Subgroup Work). The transitions from individual to subgroup or team work involve particular challenges: unlike face-to-face collaboration, physical movements and reconfigurations of the workspace that can support transitions [25] are not possible. People must quickly and accurately understand what others are working on to assess when a transition is appropriate and whether it has succeeded. On a subgroup level, transitions between discussing and not discussing are mainly constrained by the availability of the audio channel. Transitions from not sharing a view to sharing a view can be more difficult to do quickly without system support.
120
+
121
+ ### 4.2 Design goals
122
+
123
+ Based on existing literature, formative observations, and our configurations to support, we address blockers to remote mixed-focus content creation with a system designed around four goals:
124
+
125
+ DG1. Build awareness when and where needed. Systems for mixed-focus collaboration should actively build awareness of collaborators' actions and positions, rather than relying on passive indicators. Previous approaches to this goal include detecting references to documents in conversation to surface relevant files [34,65], detecting periods of inattention and using highlighting, motion traces, or replays to catch up [29], or manually configuring avatars that can notify users of certain actions by others [16]; we focus on automatically and continuously supporting awareness.
126
+
127
+ DG2. Support understanding of conversation. Conversations can be difficult to understand when views differ between collaborators, and existing solutions for maintaining awareness are passive in these situations. Previous approaches to this goal include awareness widgets like mini-maps [30,32], detecting content references in text messages and determining the probability of misunderstandings based on gaze detection [9], or sharing gaze positions with other users [45]. We focus on using integration with the interpersonal space to go beyond traditional awareness indicators without requiring gaze detection hardware.
128
+
129
+ DG3. Allow fast and simple transitions between collaborative states. Understanding what others can see and establishing shared views should be quick and easy, supporting transitions. We focus on supporting lightweight transitions that occur without changing the communication medium or work artifact. Previous approaches have also enforced additional structure. One structured approach is turn-taking of control (e.g., driver and viewer roles for editing documents [48] or music live coding [85]); however, research suggests that verbal and non-verbal communication can obviate the need for rigid turn-taking protocols [13], so we focus on more flexible state transitions in our video-chat-based system. A second structured approach is handoff of information (e.g., using a specialized visualization for collaborative sensemaking [87]); however, for collaborative content creation, we focus on leveraging views and positions in the existing artifact to support transitions.
130
+
131
+ DG4. Means for lightweight feedback. Assistance and feedback should be easy to provide. Visual communication should be supported for gestures, referencing, and times when the audio channel is occupied. The primary previous approaches for lightweight feedback include telecursors [21] and screensharing [20]. We focus on simpler and more transient view-sharing and enabling gesture-friendly cursors within these views.
132
+
133
+ ## 5 PEEK-AT-YOU: NEW COLLABORATIVE SYSTEM LEVERAGING INTEGRATION
134
+
135
+ Based on these design goals, we created Peek-At-You, a set of collaborative features implemented as a Chrome extension that extends Google Docs. The design reflects the specific case of document editing, which is the focus of our evaluation, but the features are designed to apply to various artifacts (e.g., slides, digital whiteboards, 3D models, or interface designs). Our system focuses on allowing a tightly coupled group to successfully leverage the rich communication and awareness possible within a single video call; therefore, we did not include other established methods of managing communication that split up conversations (e.g., text chat [69], breakout rooms, or spatial video chat [88-91]).
136
+
137
+ ### 5.1 Conversation-based position indicators
138
+
139
+ Position indicators help collaborators understand where in the document others are working (DG2). Because our system integrates video chat with collaborative software, this information can be surfaced where and when we expected it to be most useful (DG1). First, icons are shown in the corner of each user's video feed (see Figure 3, A), indicating the collaborator's current page and whether they are in the same place, above/below, or in another tab. This is based on our observation that users who are speaking might erroneously assume they are looking at the same thing. Clicking the icon scrolls to the collaborator's position (DG3). Placing awareness supports on others' video feeds is a unique approach that does not use screen space within the task area. Second, an active speaker popup is shown at the bottom of the work area when a collaborator is speaking (see Figure 3, B), containing the same icons as in the speaker's video feed and a description of their state (e.g., "below you (page 6) in the document" or "in another tab"). For users focused on the shared workspace, this actively surfaces relevant information from the interpersonal space without the need to scan the indicators in the video chat. Conversation-based position indicators on video feeds provide many components of awareness: presence and identity via video feeds, location and view via position indicators, and action and artifact via jumping. Further, understanding others' viewpoints and navigating to them afford the fundamentals of we-awareness [23], a key requirement for discussing content. This is important as effective communication and organization could allow more content-focused work time [8].
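+
+ As a concrete illustration of the indicator logic, the TypeScript sketch below derives a label from collaborators' reported viewport states. It is a minimal sketch under our own assumptions (all names are hypothetical); only the >66% overlap threshold is taken from the implementation details in Section 5.4.
+
+ ```ts
+ // Hedged sketch: deriving a position-indicator label from reported state.
+
+ interface CollaboratorState {
+   userId: string;
+   page: number;            // page currently in view
+   viewportTop: number;     // document-space scroll offsets
+   viewportBottom: number;
+   onArtifactTab: boolean;  // false if the user switched to another tab
+ }
+
+ type PositionLabel = "same place" | "above you" | "below you" | "in another tab";
+
+ function positionLabel(me: CollaboratorState, other: CollaboratorState): PositionLabel {
+   if (!other.onArtifactTab) return "in another tab";
+   const overlap =
+     Math.min(me.viewportBottom, other.viewportBottom) -
+     Math.max(me.viewportTop, other.viewportTop);
+   const myHeight = me.viewportBottom - me.viewportTop;
+   // Treat >66% viewport overlap as "same place" (threshold from Section 5.4).
+   if (overlap / myHeight > 0.66) return "same place";
+   return other.viewportTop < me.viewportTop ? "above you" : "below you";
+ }
+
+ // The active speaker popup could then render, e.g.,
+ // "below you (page 6) in the document" from this label and the page number.
+ ```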
140
+
141
+ ![01963e07-504a-7a8f-bdfb-a0f5ea1919fe_4_929_551_719_469_0.jpg](images/01963e07-504a-7a8f-bdfb-a0f5ea1919fe_4_929_551_719_469_0.jpg)
142
+
143
+ Figure 3. Video call integrated into Google Docs using Peek-at-You. Conversation-Based Position Indicators appear on others' video feeds (A) and at the bottom of the work area (B).
144
+
145
+ ![01963e07-504a-7a8f-bdfb-a0f5ea1919fe_4_929_1147_735_660_0.jpg](images/01963e07-504a-7a8f-bdfb-a0f5ea1919fe_4_929_1147_735_660_0.jpg)
146
+
147
+ Figure 4. Peeking a collaborator's view using our system. Viewers can react (A) and gesture with cursor trails (B)
148
+
149
+ ### 5.2 Speaker's view peeking
150
+
151
+ Quick and fluid transitions between individual and shared work are key to mixed-focus collaboration [35], and getting feedback is an important component of creating content together [8]. Therefore, the active speaker popup described above can be hovered to quickly preview the current speaker's view (see Figure 4). This functionality is inspired by our observation that content such as writing can be difficult to get feedback on, and that people may be hesitant to leave their position to see what someone else is talking about (DG2, DG3). When peeking at someone else's view, viewers can react using a set of five reactions: thumbs up, thumbs down, eyes, ok, and thinking (DG4). Viewers can also use cursor trails, colored dots that temporarily appear as their mouse cursors move on the preview; these specialized telecursors are well suited to gesturing [33] (DG2, DG4).
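+
+ The cursor trails admit a very small local implementation. The TypeScript sketch below is illustrative only (element handling, dot size, and fade duration are our assumptions, not the system's actual values); a full system would presumably also broadcast the positions so the sharer and other viewers can see the gesture.
+
+ ```ts
+ // Sketch: transient "cursor trail" dots over a peeked view.
+ // Assumes previewEl has CSS position: relative.
+
+ const TRAIL_LIFETIME_MS = 800;
+
+ function attachCursorTrail(previewEl: HTMLElement, color: string): void {
+   previewEl.addEventListener("mousemove", (e: MouseEvent) => {
+     const rect = previewEl.getBoundingClientRect();
+     const dot = document.createElement("div");
+     dot.style.cssText =
+       `position: absolute; left: ${e.clientX - rect.left}px; ` +
+       `top: ${e.clientY - rect.top}px; width: 8px; height: 8px; ` +
+       `border-radius: 50%; background: ${color}; pointer-events: none; ` +
+       `transition: opacity ${TRAIL_LIFETIME_MS}ms;`;
+     previewEl.appendChild(dot);
+     // Start the fade on the next frame, then clean up the dot.
+     requestAnimationFrame(() => { dot.style.opacity = "0"; });
+     setTimeout(() => dot.remove(), TRAIL_LIFETIME_MS);
+   });
+ }
+ ```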
152
+
153
+ ### 5.3 View pushing
154
+
155
+ In addition to peeking at others' views, the system allows participants to quickly share their own view with everyone by clicking a "share view" button (DG3, see Figure 5). Like view peeking, this feature is based on the need to easily get feedback and transition between working styles; it particularly supports transitions to full-group work. Other collaborators can dismiss the shared view if it is not relevant to them, and the sharer sees a list of current viewers (DG1). Viewers can use the same reactions and cursor trails that are available when peeking (DG4). For quick transitions, pushing ends any existing view push (DG3). By offering both View Peeking and View Pushing, the system provides robust support for passive and active maintenance of the action, artifact, and view components of workspace awareness. Additionally, by enabling a shared context for conversation, these features may allow collaborators to discuss more nascent aspects of workspace awareness such as intention.
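+
+ To make the push semantics concrete, here is a minimal relay-side sketch in TypeScript, assuming a WebSocket-style sync server like the one described in Section 5.4; the message shapes and function names are our own illustration.
+
+ ```ts
+ // Sketch: server-side bookkeeping for view pushing.
+
+ interface PushState {
+   sharerId: string | null;
+   viewers: Set<string>;
+ }
+
+ const push: PushState = { sharerId: null, viewers: new Set() };
+
+ function startPush(userId: string, broadcast: (msg: object) => void): void {
+   // Starting a new push ends any existing one, keeping transitions quick.
+   if (push.sharerId !== null) {
+     broadcast({ type: "push-ended", sharerId: push.sharerId });
+   }
+   push.sharerId = userId;
+   push.viewers = new Set();
+   broadcast({ type: "push-started", sharerId: userId });
+ }
+
+ function acceptPush(viewerId: string, notifySharer: (msg: object) => void): void {
+   push.viewers.add(viewerId);
+   // The sharer sees the list of current viewers.
+   notifySharer({ type: "viewers-changed", viewers: [...push.viewers] });
+ }
+
+ function dismissPush(viewerId: string, notifySharer: (msg: object) => void): void {
+   // Collaborators can dismiss a pushed view that is not relevant to them.
+   push.viewers.delete(viewerId);
+   notifySharer({ type: "viewers-changed", viewers: [...push.viewers] });
+ }
+ ```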
156
+
157
+ ![01963e07-504a-7a8f-bdfb-a0f5ea1919fe_5_157_959_708_512_0.jpg](images/01963e07-504a-7a8f-bdfb-a0f5ea1919fe_5_157_959_708_512_0.jpg)
158
+
159
+ Figure 5. Pushing a view to collaborators. Pushing is started or stopped with one click (A), lists the current viewers (B), and shows reactions on the content area (B) and video feeds (C).
160
+
161
+ ### 5.4 Prototype implementation details
162
+
163
+ Our prototype extends an existing groupware system, Google Docs. Although our formative observations were done using Microsoft Word, we chose Google Docs, as its HTML was easier to extend. We integrated video chat into the Google Docs page via a sidebar on the right side, where the active speaker is highlighted with a green outline, and added the previously described features.
164
+
165
+ The prototype extends Google Docs using a Google Chrome extension. React and TypeScript are used to inject the system's interface, capture camera and tab feeds, and track viewports. The video chat uses WebRTC, with Kurento Media Server [50] for server-side recording and hark.js (https://github.com/otalk/hark) for active speaker detection. A NodeJS server and WebSockets are used to sync collaborators' states (position, active tab, view shares, etc.). Position icons show whether collaborators' scroll positions are the same as the user's (>66% viewport overlap) or above/below, and whether others are sharing views or in another tab. Users are asked to share the tab when joining the call; this stream is always transmitted for recording and forwarded to others as needed for view sharing.
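+
+ As a rough illustration of this glue code, the TypeScript sketch below reports viewport and speaking state over a WebSocket. The server URL and message format are placeholders of our own, but the 'speaking'/'stopped_speaking' events are hark.js's actual event names.
+
+ ```ts
+ // Sketch: client-side state reporting. Assumes the socket is already open.
+ import hark from "hark"; // active speaker detection (https://github.com/otalk/hark)
+
+ const socket = new WebSocket("wss://example.invalid/sync"); // placeholder URL
+
+ function reportState(partial: object): void {
+   socket.send(JSON.stringify({ type: "state", ...partial }));
+ }
+
+ // Flag the local user as the active speaker from their microphone stream.
+ async function watchSpeaking(): Promise<void> {
+   const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
+   const events = hark(stream, {});
+   events.on("speaking", () => reportState({ speaking: true }));
+   events.on("stopped_speaking", () => reportState({ speaking: false }));
+ }
+
+ // Report scroll position so collaborators' position icons stay current.
+ window.addEventListener("scroll", () => {
+   reportState({
+     viewportTop: window.scrollY,
+     viewportBottom: window.scrollY + window.innerHeight,
+   });
+ });
+ ```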
166
+
167
+ ## 6 SYSTEM EVALUATION
168
+
169
+ We studied six groups of five people in a mixed-focus content creation task to gather initial feedback about our Peek-at-You system. The study was approved by the institutional ethics board.
170
+
171
+ ### 6.1 Task
172
+
173
+ The study task involved creating a plan for a hypothetical business merger. It differs from the business plan task used for our formative observations in two ways. First, the task begins with an existing document, ensuring sufficient content to create a need for positional awareness. Second, an editor role was added to increase the variety of working configurations and transitions. Participants were assigned one of three roles: Writer (two participants), Marketer (two participants), or Editor (one participant). Groups received a document containing information about a fictional company and three candidate companies for the merger. Their task was to select the best candidate and plan for the new company:
174
+
175
+ - (All Roles) Review background info on the company and three merger options, then choose a merger option
176
+
177
+ - (Writers) Write one or two paragraphs for investors about why the merger will help the company grow
178
+
179
+ - (Marketers) Create a New Company Name, New Hero Offering, and Marketing Plan for the company
180
+
181
+ - (Editor) Help out the others as needed and check all new content for quality/consistency
182
+
183
+ The evaluation included two conditions (Video Chat Only and Peek-At-You) in a within-groups design, so two merger planning documents were created, allowing groups to perform the task twice. The first document described a bakery chain choosing between dessert bakery, deli, and smoothie chains. The second document described a sportswear retail chain choosing between local-focused, sporting equipment, and yoga clothing chains. Each document included background information, a product summary, and a SWOT analysis for the company (1.5 pg.); an investor statement placeholder (0.5 pg.); an overview, strengths, and weaknesses for each merger option (3 pg.); a merger decision placeholder (1 pg.); new name and hero offering placeholders (1 pg.); and a marketing plan placeholder (1 pg.). Placeholders included a reminder of what to add and scratch space for ideas or notes.
184
+
185
+ ### 6.2 Procedure
186
+
187
+ The study was conducted remotely via Zoom, except for the collaborative work, which used the video chat integrated in our prototype system; sessions lasted 75 minutes. After giving informed consent, participants completed the task twice, once with only the video chat enabled (Video Chat Only) and once with all features enabled (Peek-At-You). Each time, they were given instructions and roles to collaborate using Google Docs. In the Video Chat Only condition, the instructions pointed to the collaborative functionality of Google Docs, such as the editor list at the top of the page. In the Peek-At-You condition, the instructions were a brief interactive tutorial. Next, participants opened a copy of the task instructions in a separate tab for reference and spent 15 minutes collaborating. After finishing, participants completed a survey about the experience. The order of conditions and roles was counterbalanced between groups and tasks.
188
+
189
+ ### 6.3 Measures
190
+
191
+ After each condition, a questionnaire was used regarding:
+
+ - NASA-TLX [36]
+
+ - Collaborative Experience: three items rating participants' Distractedness ("I was frequently distracted as I tried to work"), Awareness ("I had a good sense of what other people were working on at all times"), and Understanding of Discussion ("It was easy to follow the ongoing discussion") on a 7-point Likert scale (strongly disagree to strongly agree).
+
+ - Feedback: open feedback about the system or experience.
196
+
197
+ After completing both conditions, a final survey asked which condition participants preferred and why. Finally, a brief (10-minute) semi-structured group interview was conducted regarding the ability to get feedback from others, the ability to understand others, the desire and ability to maintain awareness, and reasons for using or not using Peek-At-You's features. Participants' screens and video calls were recorded during the tasks, and log data was collected.
198
+
199
+ ### 6.4 Participants
200
+
201
+ Participants were recruited from within our institution using email and Slack channels and compensated through an internal award program (approximate value 70 USD pre-tax). Participants were recruited in six groups of five people; due to a last-minute cancellation, one group completed the study with four members rather than five, which was accommodated by omitting the editor role. In total, 29 participants completed the study (17 female, 12 male; age: mean=29.4, SD=9.3). Professions of the participants were varied (Software Developer/Engineer=11, Design/UX=3, Analyst=5, Researcher=3, Management/Supervision=2, Community Support=2, Marketing=2, Legal=1), but all were experienced with remote work. Most participants were unacquainted, and none were coworkers. While our sample size and the complex dynamics within a five-person group did not allow us to account for acquaintance within our analysis, we expect the fixed task, randomly assigned roles, and within-groups study design minimized any potential effects of these differences, and we did not make observations related to acquaintance.
202
+
203
+ ### 6.5 Evaluation findings
204
+
205
+ Our findings focus on understanding how much participants used Peek-At-You's features, their preferences for our system or the Video Chat Only condition, and their feedback on each condition.
206
+
207
+ #### 6.5.1 System usage
208
+
209
+ ![01963e07-504a-7a8f-bdfb-a0f5ea1919fe_6_160_1315_711_376_0.jpg](images/01963e07-504a-7a8f-bdfb-a0f5ea1919fe_6_160_1315_711_376_0.jpg)
210
+
211
+ Figure 6. Collaborative feature usage per-participant. Averages are provided across all participants and per-role.
212
+
213
+ Usage of our system's features was analyzed using log data. In the 15-minute session, participants used the jump functionality of the video overlay icons an average of 1.72 times, the peek functionality an average of 3.21 times, and the push functionality an average of 0.45 times (see Figure 6 for details). While push was used less than peek on a per-participant basis, it is worth noting that pushes affect the entire group whereas peeks are displayed only to the local user. Another important caveat to these usage numbers is that they do not capture how often participants looked at the Conversation-Based Position Indicators, usage which is better captured through survey and interview responses.
214
+
215
+ #### 6.5.2 Survey responses
216
+
217
+ Participants' responses to the NASA-TLX were similar in the Video Chat Only and Peek-At-You conditions (see Table 2).
218
+
219
+ Table 2. NASA-TLX responses. Values are mean (SD).
220
+
221
+ | | Video Chat Only | Peek-At-You | Wilcoxon Signed-Ranks |
+ |---|---|---|---|
+ | Mental Demand | 6.48 (2.11) | 6.48 (1.45) | Z=-0.06; p=.95 |
+ | Physical Demand | 2.48 (2.52) | 2.03 (2.18) | Z=-1.47; p=.14 |
+ | Temporal Demand | 6.07 (2.12) | 6.52 (2.13) | Z=-0.76; p=.45 |
+ | Performance | 5.24 (2.71) | 4.59 (2.21) | Z=-0.85; p=.40 |
+ | Effort | 5.93 (1.98) | 5.76 (2.08) | Z=-0.17; p=.87 |
+ | Frustration | 4.86 (2.67) | 4.69 (2.04) | Z=-0.38; p=.70 |
222
+
223
+ Responses to the Collaborative Experience questions show some differences between the Video Chat Only and Peek-At-You conditions (see Figure 7). Participants expressed greater agreement regarding their understanding of the conversation in the Peek-At-You condition, but this difference was not significant in a Wilcoxon Signed-Ranks test (Z = -1.101; p = .267). Participants rated their awareness of collaborators higher in the Peek-At-You condition (median = Somewhat agree) than in the Video Chat Only condition (median = Somewhat disagree); the difference was significant in a Wilcoxon Signed-Ranks test (Z = -2.15; p = .03). Participants did not rate their level of distraction significantly differently in the two conditions (Wilcoxon Signed-Ranks: Z = -0.26; p = .80).
224
+
225
+ ![01963e07-504a-7a8f-bdfb-a0f5ea1919fe_6_926_983_715_334_0.jpg](images/01963e07-504a-7a8f-bdfb-a0f5ea1919fe_6_926_983_715_334_0.jpg)
226
+
227
+ (Legend: Strongly disagree, Disagree, Somewhat disagree, Neutral, Somewhat agree, Agree, Strongly agree)
228
+
229
+ Figure 7. Responses to Collaborative Experience questions (*p<.05).
230
+
231
+ A majority of participants preferred the Peek-At-You condition (n=21); the remainder preferred the Video Chat Only condition (n=8). Among participants preferring Video Chat Only, roles in the Video Chat Only condition were Editor (N=2), Writer (N=4), and Marketer (N=2), while their roles in the Peek-At-You condition were Writer (N=3) and Marketer (N=5). Participants provided open-ended feedback regarding the reasoning for their preferences. Among the participants who preferred the Video Chat Only condition, four did not feel the new features were needed to maintain awareness, or felt that a high degree of awareness was not needed in this task. The other four found the features distracting due to rapid visual changes. Five of these eight participants experienced Peek-At-You with the Marketer role; while the sample is not large enough to test for significance, it is possible that the marketing role in particular was well suited to verbal discussion and required less in-artifact coordination. Among the participants who preferred the Peek-At-You condition, reasons for the preference were varied but related to usefulness in supporting awareness and understanding. A qualitative analysis of participants' feedback was performed to provide greater insight into these perceptions.
232
+
233
+ #### 6.5.3 Participant feedback
234
+
235
+ To analyse participants' experiences and feedback we used an open coding approach: two authors separately coded transcripts from two groups until no new codes appeared, then reviewed each other's coding for agreement. The first author coded the remaining data and identified eight themes, which were merged into six themes after discussion between the two authors.
236
+
237
+ Peek-At-You aids awareness. Participants found the Peek-At-You system to be interactive and helpful in maintaining awareness of collaborators' locations, roles, thought processes, and task progress. The conversation-based position indicators helped participants stay aware of others' locations and focus on what they wanted to share. Displaying collaborators' positions helped communicate roles, as participants could see which areas of the document everyone was working on. Tracking indicators over time can also reveal thought processes, such as referencing one part of the document to help with writing elsewhere; P11 explained the system "definitely helped us understand, like, who was working on what and what they were, what their thought process was." In addition to process, position indicators can communicate progress on a task: "even, I think, something as simple as whether we've finished reading, and that was easy to understand in the [Peek-At-You condition]" (P26).
238
+
239
+ Peek-At-You supports conversational understanding. Participants found it was easier to understand what others were speaking about with our system, with P13 stating that "it was easier to know what someone else was talking about or referring to". This suggests that the additional awareness of others' positions, thought processes, task progress, and roles provides context that makes following the conversation easier.
240
+
241
+ Participants specifically appreciated the popup showing the active speaker's location, as it aided with following the conversation. For P26, "it was really helpful to see the speaker's view and be notified when I was not on their view." P11 found the popup "was a little bit distracting sometimes, but it definitely was helpful." This suggests both roles of the popup (i.e., warning when the listener is not seeing the same part of the document and view peeking) are valuable for conversational understanding.
242
+
243
+ Peek-At-You aids transitions. Participants reported that Peek-At-You's features were helpful for transitioning to mixed-focus collaboration. For example, P21 found it "easier to track others, share progress, find one another". Position icons aided in grouping up; in one instance P25 explained "I couldn't find the section where we were supposed to be writing and I was able to jump up to where P27 was, was taking a look. So yeah, I found it helpful." Jumping to others' video feeds also helped with temporary transitions, e.g., "I was doing the marketing stuff, so I was like, looking at what, P10 and P11 were like adding just so that I could like, know what was happening like on the other part, and yeah I was just like jumping to their pages with the little, little icon on the video." More generally, P23 found "the features allowed me to quickly hop back and forth between where other people were looking and working."
244
+
245
+ Pushing a view was a quick way to ensure everyone was looking at the same thing. As P4 explained, "I was able to share my screen on the merger page and everyone else could pop-up on my screen, so they didn't have to scroll all the way back up." P17 noted that Peek-At-You's features "made it easier to share views and get input without having to completely leave the work you were doing." In contrast, P6, P8, and P11 described challenges transitioning to group work in the Video Chat Only condition.
246
+
247
+ Audio channel aids awareness but is difficult to share. Some participants mentioned that the audio channel helps maintain awareness. However, many participants reported that sharing the audio channel was challenging. For example, at times "others had to take a pause until the main conversation was over or find another way to speak without disruption" (P3). This was especially apparent when participants were working in small-group configurations. When P1 and P2 worked together, P1 found "it was hard to coordinate with P2 because we didn't want to, like, talk over like P3 and P4 talking." P16 likewise found that "it was annoying trying to have a discussion with just part of the team while other[s] were, are having a conversation."
248
+
249
+ While breakout rooms or selective muting are possible solutions, these approaches are also likely to reduce awareness within a group. Collaborative features like the ones in our system may attenuate the need for breakout rooms by reducing verbal articulation work (the work of working together [71,74]).
250
+
251
+ Collaborative features can be distracting. Although helpful with awareness and understanding, some participants found certain aspects of our system distracting, such as rapid visual changes and shared views taking up too much screen space. To manage these distractions, participants suggested using collaborative features only during certain phases or being able to turn them on and off as needed. For example, one participant felt the Peek-At-You feature was only important initially, during brainstorming and discussion, while another suggested having the feature be toggle-able.
252
+
253
+ Collaborative features may be more useful with experience or in other tasks. Participants also explained that because the features were new, they may not have fully learned or thought to use all of them during the study. P8 explained that "since the UI was new, we were getting distracted because of that", but "the more we use this tool, the more efficient ways we will find to make the most of it." P23 felt similarly: they "didn't use some of the features consciously due to familiarity. With more exposure to the extension and conscious effort it will become more natural."
254
+
255
+ Beyond the question of gaining experience, participants saw the system as useful in other scenarios, particularly collaborative work or presentations with multiple slides, as it would reduce the amount of scrolling (P6). They highlighted the usefulness of sharing through push and peek, as well as the preview feature for keeping track of others' progress without interrupting their own work.
256
+
257
+ ## 7 Discussion
258
+
259
+ We discuss how our system supports fluid working configurations, why existing applications should enable extensibility to support functionality like Peek-At-You's, and how our system could be adapted to reduce distractions.
260
+
261
+ ### 7.1 How in-the-moment indicators support transitions
262
+
263
+ Our evaluation shows that Peek-At-You supports smooth transitions in mixed-focus collaboration by increasing awareness of co-editors' positions, roles, thought processes, and task progress. Survey data confirmed that our system supports this type of awareness, which is important for identifying opportune times to interrupt the current working state of the group and for understanding when transitions into subgroups or a main group succeed [31]. For example, transitioning from individual to subgroup work may involve identifying others with the same role. Similarly, transitioning to teamwork may involve identifying when everyone has first made sufficient progress on their individual work.
264
+
265
+ Sharing views can also support transitions. While sharing or viewing private information is simple when face-to-face [49], we show that one-click and conversational interactions can make view sharing equally easy in a remote context. View pushing and peeking further allow users to jump to the shared location for a complete editing experience. Unlike spatial video chat systems that allow users to share views via screensharing and move participant videos around on top of the shared view to group up around a particular element [88], our system supports full content control after jumping.
266
+
267
+ Using awareness of others' positions and actions is a quick and lightweight way to transition into different working configurations while maintaining awareness of the rest of the group. In contrast, traditional approaches, such as breakout rooms or position-based audio muting [89-91], provide stronger separations between groups. While enabling focused work, this limits awareness of other subgroups, leading to challenges such as not knowing what a breakout group is working on or when to interrupt.
268
+
269
+ ### 7.2 Peek-At-You vs. our formative observations
270
+
271
+ We return to the four themes identified in our formative observations to compare the findings to our evaluation study.
272
+
273
+ "Audio channel limits small-group work." Our system's awareness features reduced the need for verbal articulation work, which may ease the experience of sharing an audio channel. However, sharing an audio channel was still difficult at times, and other solutions such as selective muting, subgrouping, or breakout rooms, are needed scale to arbitrary group sizes.
274
+
275
+ "Written content can be more difficult to get feedback on". Participants found view pushing and peeking useful to quickly establish a shared view. Grouping up around a shared view is an effective way to gather feedback on writing, as it does not require reading the text aloud or losing one's position in the document.
276
+
277
+ "Misunderstandings and duplicated work were common and often unnoticed". Participants noted that our system supported conversational understanding, with position indicators being helpful for tracking discussions. While we could not make direct comparisons with our formative observations, participants indicated that position indicators aided them in assessing what others were working on, helping to avoid duplications.
278
+
279
+ "Collaboration tools infrequently used". Participants use Peek-at-you over 5 times on average, which compares favorably to the formative observations, where collaboration tools (e.g., jump/ follow) were not used. It's worth noting that the longer content in the evaluation task makes a direct comparison difficult. However, placing collaboration tools on video feeds may have also made them easier to access, therefore contributing to increased usage.
280
+
281
+ ### 7.3 Managing distractions
282
+
283
+ Mixed-focus collaboration involves processing a lot of information, including video/audio communications, real-time artifact changes, and awareness widgets. Our system supplies real-time information, which some participants found distracting due to rapidly changing icons or overlays taking up screen space. However, our questionnaire did not show an overall increase in distractedness when using Peek-At-You, possibly because distractions inherent to real-time collaboration overshadowed distractions related to our system. Alternatively, increased distractions from the system may have been balanced by a decrease in other distractions, such as improved conversational articulation or better leverage of interruption strategies [54].
284
+
285
+ Though a degree of distraction is inherent to mixed-focus collaboration, the design of collaborative systems involves tradeoffs between maintaining awareness and avoiding distractions [30]; the desired balance may depend on many factors, including group size, task, artifact type, and roles. Because some participants in our evaluation cited distraction as a drawback, we suggest four design iterations that could reduce distractions. First, position indicators could use "calm design" [7] by displaying only a binary red/green status light until hovered, and by varying the active speaker notification [51] based on speaking and working activity (Figure 8, left). Second, shared views could be sized more precisely to manage screen space. Currently, our prototype sizes shared views based on the window aspect ratio of the sharer and viewer, but this may result in a larger-than-intended preview in some cases. Third, view pushing could incorporate a "consent" mechanism where shared views are small but expand if hovered (Figure 8, right). This approach may offer some of the benefits of continuous gestures like moving and resizing elements in spatial video chat systems [26,88-91], while still being compatible with a standard scrolling document interface. Fourth, a focus mode could be added, which would hide collaborative features, selectively present audio using roles or proximity, or even hide others' edits. Video overlay icons could signal which collaborators are in focus mode. This may also make the system more inclusive (multiple participants cited ADHD as a particular motivator for minimizing distractions) and support hybrid work that includes loosely coupled phases [60].
286
+
287
+ ![01963e07-504a-7a8f-bdfb-a0f5ea1919fe_8_941_431_684_264_0.jpg](images/01963e07-504a-7a8f-bdfb-a0f5ea1919fe_8_941_431_684_264_0.jpg)
288
+
289
+ Figure 8. Potential design iterations: (left) a calm design that uses binary status lights instead of icons and a color-coded outline instead of the active speaker popup; (right) a pushed view that uses a consent mechanism before appearing full size.
290
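+ 
+ As a rough illustration of the third iteration, a pushed view could start as a small thumbnail and only claim full size once the recipient hovers over it. The sketch below is a minimal DOM implementation of that consent mechanism under assumed sizes and placement, not code from our prototype.
+ 
+ ```ts
+ // Minimal sketch of a hover-to-expand "consent" mechanism for pushed views
+ // (illustrative; sizes, placement, and names are assumptions).
+ function showPushedView(videoStream: MediaStream): HTMLVideoElement {
+   const view = document.createElement("video");
+   view.srcObject = videoStream;
+   view.autoplay = true;
+   view.muted = true;
+ 
+   // Start as a small, unobtrusive thumbnail in a corner.
+   Object.assign(view.style, {
+     position: "fixed",
+     right: "16px",
+     bottom: "16px",
+     width: "160px",
+     transition: "width 0.2s ease",
+     zIndex: "1000",
+   });
+ 
+   // Hovering signals consent: expand to a full-size preview.
+   view.addEventListener("mouseenter", () => { view.style.width = "640px"; });
+   // Leaving shrinks the view back, returning screen space to the user.
+   view.addEventListener("mouseleave", () => { view.style.width = "160px"; });
+ 
+   document.body.appendChild(view);
+   return view;
+ }
+ ```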
+
291
+ ### 7.4 Comparing methods of sharing views
292
+
293
+ For peeking/pushing views, we implemented view sharing through video streaming of the user's view, with the option to navigate to their view by clicking the position icon. We relied on a video stream because the tab video was already being streamed for recording, and deep integration is difficult with a closed-source application (Google Docs). However, using local rendering for view sharing in collaborative software would provide several benefits, such as bandwidth and quality improvements, increased accessibility, and making the shared views editable. Regardless of rendering approach, integrating shared views can preserve privacy compared to general-purpose screen sharing, as it only shares content that collaborators already have access to [76].
294
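+ 
+ To make the streaming approach concrete, the sketch below shows one way to capture and send a tab's view in a browser, assuming an already-negotiated RTCPeerConnection; it uses the standard getDisplayMedia API rather than our prototype's exact plumbing.
+ 
+ ```ts
+ // Minimal sketch of view sharing via video streaming, assuming signaling
+ // has already established the peer connection `pc`.
+ async function shareTabView(pc: RTCPeerConnection): Promise<void> {
+   // Prompt the user to capture a tab or window; the browser enforces consent.
+   const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
+ 
+   // Send the captured view to peers over the existing connection.
+   for (const track of stream.getVideoTracks()) {
+     pc.addTrack(track, stream);
+   }
+ 
+   // Stop sending cleanly when the user ends the capture.
+   stream.getVideoTracks()[0].addEventListener("ended", () => {
+     pc.getSenders()
+       .filter(sender => sender.track?.kind === "video")
+       .forEach(sender => pc.removeTrack(sender));
+   });
+ }
+ ```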
+
295
+ Jumping to someone's view offers an alternative to temporary view sharing, but it can cause context loss for the person jumping. A possible solution is to blend jumping and peeking, as Gutwin et al. [32] did by holding the right mouse button to jump to a collaborator's view and releasing it to jump back. A "back" button could also be shown to aid within-document navigation.
296
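+ 
+ A minimal version of such a "back" affordance only needs to remember the local scroll offset before each jump; the sketch below assumes a scrolling document container and is illustrative rather than taken from our prototype.
+ 
+ ```ts
+ // Sketch of jump-with-return: save the local viewport before jumping to a
+ // collaborator's position, and restore it on "back" (names are assumptions).
+ const viewStack: number[] = [];
+ 
+ function jumpTo(container: HTMLElement, collaboratorOffset: number): void {
+   viewStack.push(container.scrollTop); // remember where we were
+   container.scrollTo({ top: collaboratorOffset, behavior: "smooth" });
+ }
+ 
+ function jumpBack(container: HTMLElement): void {
+   const previous = viewStack.pop();
+   if (previous !== undefined) {
+     container.scrollTo({ top: previous, behavior: "smooth" });
+   }
+ }
+ ```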
+
297
+ ### 7.5 Supporting integration of group calls and collaborative apps
298
+
299
+ Commercial apps are increasingly replacing traditional screensharing with embedded collaborative apps in group calls. For example, Google Docs now integrates video calls, and Zoom allows third-party apps to integrate with the shared stage. To enable consistency between collaboration and communication apps, we argue that APIs for UI extensibility and data access are needed, e.g., for assigning an icon to be displayed on top of a participant's video feed or receiving notifications about the current speaker.
300
+
301
+ These APIs would allow for features like those in Peek-At-You and would support other uses (e.g., selecting video feeds to show based on viewport proximity, call recordings linked to artifact edit histories, or displaying icons to help people understand others' emotions).
302
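+ 
+ The sketch below illustrates the shape such an API might take; every name in it is hypothetical and does not correspond to any current product's API.
+ 
+ ```ts
+ // Hypothetical surface for a video-call extensibility API; these types do
+ // not exist in current products and only illustrate the shape we argue for.
+ interface CallExtensionApi {
+   // UI extensibility: overlay a badge or icon on a participant's video feed.
+   setParticipantOverlay(participantId: string, iconUrl: string): void;
+ 
+   // Data access: be notified when the active speaker changes.
+   onActiveSpeakerChanged(handler: (participantId: string) => void): void;
+ 
+   // Data access: list who is in the call, for mapping to artifact state.
+   listParticipants(): Promise<{ id: string; displayName: string }[]>;
+ }
+ 
+ // Example use: mirror document positions onto video feeds, as Peek-At-You's
+ // position indicators do (positionIconFor is an assumed lookup).
+ function linkPositionsToFeeds(api: CallExtensionApi,
+                               positionIconFor: (id: string) => string): void {
+   api.listParticipants().then(people => {
+     for (const person of people) {
+       api.setParticipantOverlay(person.id, positionIconFor(person.id));
+     }
+   });
+ }
+ ```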
+
303
+ ### 7.6 Generalizing to other tasks, groups, and artifacts
304
+
305
+ We designed with relaxed-WYSIWIS systems in mind, but focused on content creation using a document editor for prototyping and evaluation. Different types of systems would require adaptations to represent positional indicators. For example, a digital whiteboard with 2D navigation may need to represent "up, left, and zoomed out", while a presentation or interface design application may need to represent that a collaborator is on a different slide or screen. 3D applications may present even more challenges, but could leverage arrows [80] or a ViewCube [42].
306
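+ 
+ As an illustration of the 2D case, an indicator could be derived by comparing two viewport rectangles; the sketch below (using an assumed viewport representation of our own) reduces a collaborator's viewport to a compass direction plus relative zoom.
+ 
+ ```ts
+ // Sketch of a 2D positional indicator: reduce a collaborator's viewport to
+ // a direction plus relative zoom (the Viewport shape is an assumption).
+ interface Viewport { x: number; y: number; width: number; height: number; }
+ 
+ function describeRelativePosition(mine: Viewport, theirs: Viewport): string {
+   // Compare viewport centers to get the direction toward the collaborator.
+   const dx = (theirs.x + theirs.width / 2) - (mine.x + mine.width / 2);
+   const dy = (theirs.y + theirs.height / 2) - (mine.y + mine.height / 2);
+ 
+   const parts: string[] = [];
+   if (Math.abs(dy) > mine.height / 4) parts.push(dy < 0 ? "up" : "down");
+   if (Math.abs(dx) > mine.width / 4) parts.push(dx < 0 ? "left" : "right");
+ 
+   // A wider viewport means the collaborator is zoomed further out.
+   const zoom = theirs.width / mine.width;
+   if (zoom > 1.25) parts.push("zoomed out");
+   else if (zoom < 0.8) parts.push("zoomed in");
+ 
+   return parts.length > 0 ? parts.join(", ") : "same area";
+ }
+ ```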
+
307
+ Relaxed-WYSIWIS groupware may allow users to have different object formatting and representations [21], which makes establishing a shared view challenging for two reasons: "jumping" to another person's view incurs a significant loss of context and determining whether people are currently sharing a view may be difficult (e.g., if two people see the same table in a spreadsheet but have applied different data filters). Our view peeking and pushing features preserve context and guarantee identical object representation, which may be particularly helpful in these contexts.
308
+
309
+ Our system's collaborative features were designed to support a variety of tasks within the content creation process, spanning individual (reading), team-level (choosing a merger target), and small-group activities (generating investor statements and marketing materials). While other tasks may require different configurations, our design does not impose a specific ordering or structure for collaboration; therefore, while not yet tested, our system may be useful for other mixed-focus collaboration tasks such as brainstorming, decision making, or reviewing.
310
+
311
+ Our system could scale to larger groups, but stricter approaches for supporting subgroups may be needed (e.g., breakout rooms or audio filtering based on spatial positioning [89-91]). The integration of communication and collaboration leveraged by Peek-At-You could be helpful in these cases, such as using collaborators' proximity within a document or other artifact to select which video feeds or audio feeds to present, providing the most relevant awareness information.
312
+
313
+ ## 8 LIMITATIONS & FUTURE WORK
314
+
315
+ The proposed system in this work is tailored to a specific context and may require adaptations for other contexts. While our experimental setup allowed us to recruit groups of a non-trivial size (29 participants in six groups), include a Video Chat Only condition, and recruit participants familiar with remote work, studying a single group size, task, and artifact type limits our ability to draw strong conclusions about the generalizability of our system. Future research should test the system in various contexts to evaluate its generalizability and effectiveness. Additionally, longer-term deployments of the system can help to understand how it can support sustained collaboration over time. Future work should also consider how experience affects system usage, as some participants found that our study's duration limited their ability to learn and leverage all the features. Finally, future work should further study how integrated tools can support hybrid asynchronous-synchronous collaboration.
316
+
317
+ ## 9 CONCLUSION
318
+
319
+ In summary, we contribute to research in mixed-focus content creation in multiple ways. First, we build on existing understandings of mixed-focus collaboration and our formative observations of fully-remote collaboration. Second, we design Peek-At-You, a system of collaborative features that leverages understanding of conversation and collaborative actions to increase awareness, facilitate understanding, and support the transitions needed in mixed-focus collaboration. Finally, we evaluate the system in groups of five collaborators, demonstrating that it can foster the knowledge and actions we intended to support. By enhancing remote collaboration, we contribute to making the benefits of collaboration available for remote content creation.
320
+
321
+ ## REFERENCES
322
+
323
+ [1] Ahmad Alaiad, Yazan Alnsour, and Mohammad Alsharo. 2019. Virtual Teams: Thematic Taxonomy, Constructs Model, and Future Research Directions. IEEE Transactions on Professional Communication 62, 3 (September 2019), 211-238. DOI:https://doi.org/10.1109/TPC.2019.2929370
324
+
325
+ [2] Brian de Alwis, Carl Gutwin, and Saul Greenberg. 2009. GT/SD: performance and simplicity in a groupware toolkit. In Proceedings of the 1st ACM SIGCHI symposium on Engineering interactive computing systems (EICS '09), Association for Computing Machinery, New York, NY, USA, 265-274. DOI:https://doi.org/10.1145/1570433.1570483
326
+
327
+ [3] Ivo Benke, Michael Thomas Knierim, and Alexander Maedche. 2020. Chatbot-based Emotion Management for Distributed Teams: A Participatory Design Study. Proc. ACM Hum.-Comput. Interact. 4, CSCW2 (October 2020), 118:1-118:30. DOI:https://doi.org/10.1145/3415189
328
+
329
+ [4] Muhammad Wasim Bhatti and Ali Ahsan. 2016. Global software development: an exploratory study of challenges of globalization, HRM practices and process improvement. Rev Manag Sci 10, 4 (October 2016), 649-682. DOI:https://doi.org/10.1007/s11846-015-0171-y
330
+
331
+ [5] Frederik Brudy, Joshua Kevin Budiman, Steven Houben, and Nicolai Marquardt. 2018. Investigating the Role of an Overview Device in Multi-Device Collaboration. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 1-13. Retrieved September 3, 2021 from https://doi.org/10.1145/3173574.3173874
332
+
333
+ [6] Kevin G. Byrnes, Patrick A. Kiely, Colum P. Dunne, Kieran W. McDermott, and John Calvin Coffey. 2021. Communication, collaboration and contagion: "Virtualisation" of anatomy during COVID-19. Clinical Anatomy 34, 1 (2021), 82-89. DOI:https://doi.org/10.1002/ca.23649
336
+
337
+ [7] Amber Case. 2015. Calm Technology: Principles and Patterns for Non-Intrusive Design. O'Reilly Media, Inc.
338
+
339
+ [8] Teresa Cerratto Pargman. 2003. Collaborating with writing tools. Interacting with Computers 15, 6 (December 2003), 737-757. DOI:https://doi.org/10.1016/j.intcom.2003.09.003
342
+
343
+ [9] Mauro Cherubini, Marc-Antoine Nüssli, and Pierre Dillenbourg. 2008. Deixis and gaze in collaborative work at a distance (over a shared map): a computational model to detect misunderstandings. In Proceedings of the 2008 symposium on Eye tracking research & applications - ETRA '08, ACM Press, Savannah, Georgia, 173. DOI:https://doi.org/10.1145/1344471.1344515
344
+
345
+ [10] César A. Collazos, Luis A. Guerrero, José A. Pino, Stefano Renzi, Jane Klobas, Manuel Ortega, Miguel A. Redondo, and Crescencio Bravo. 2007. Evaluating Collaborative Learning Processes using System-based Measurement. Journal of Educational Technology & Society 10, 3 (2007), 257-274.
346
+
347
+ [11] Barry Collins. 2021. Google Meet Trumps Zoom With Video Calls Inside Docs. Forbes. Retrieved September 3, 2021 from https://www.forbes.com/sites/barrycollins/2021/05/18/google-meet-trumps-zoom-with-video-calls-inside-docs/
350
+
351
+ [12] Owen Daly-Jones, Andrew Monk, and Leon Watts. 1998. Some advantages of video conferencing over high-quality audio conferencing: fluency and awareness of attentional focus. International Journal of Human-Computer Studies 49, 1 (July 1998), 21-58. DOI:https://doi.org/10.1006/ijhc.1998.0195
354
+
355
+ [13] Debaleena Chattopadhyay. 2018. Shared Document Control in Multi-Device Classrooms. University of Illinois at Chicago. Retrieved June 26, 2022 from http://rgdoi.net/10.13140/RG.2.2.18321.68962
356
+
357
+ [14] Prasun Dewan, Puneet Agarwal, Gautam Shroff, and Rajesh Hegde. 2010. Mixed-focus collaboration without compromising individual or group work. In Proceedings of the 2nd ACM SIGCHI symposium on Engineering interactive computing systems (EICS '10), Association for Computing Machinery, New York, NY, USA, 225-234. DOI:https://doi.org/10.1145/1822018.1822054
358
+
359
+ [15] Clarence A. Ellis, Simon J. Gibbs, and Gail Rein. 1991. Groupware: some issues and experiences. Commun. ACM 34, 1 (January 1991), 39-58. DOI:https://doi.org/10.1145/99977.99987
360
+
361
+ [16] Umer Farooq, Con Rodi, John M. Carroll, and Philip Isenhour. 2003. Avatar proxies: configurable informants of collaborative activities. In CHI '03 extended abstracts on Human factors in computing systems - CHI '03, ACM Press, Ft. Lauderdale, Florida, USA, 792. DOI:https://doi.org/10.1145/765891.765994
362
+
363
+ [17] Azadeh Forghani, Gina Venolia, and Kori Inkpen. 2014. Media2gether: Sharing Media during a Call. In Proceedings of the 18th International Conference on Supporting Group Work (GROUP '14), Association for Computing Machinery, New York, NY, USA, 142-151. DOI:https://doi.org/10.1145/2660398.2660417
364
+
365
+ [18] Susanne Geister, Udo Konradt, and Guido Hertel. 2006. Effects of Process Feedback on Motivation, Satisfaction, and Performance in Virtual Teams. Small Group Research 37, 5 (October 2006), 459-489. DOI:https://doi.org/10.1177/1046496406292337
366
+
367
+ [19] Lucy L. Gilson, M. Travis Maynard, Nicole C. Jones Young, Matti Vartiainen, and Marko Hakonen. 2015. Virtual Teams Research: 10 Years, 10 Themes, and 10 Opportunities. Journal of Management 41, 5 (July 2015), 1313-1337. DOI:https://doi.org/10.1177/0149206314559946
368
+
369
+ [20] S. Greenberg. 1990. Sharing views and interactions with single-user applications. SIGOIS Bull. 11, 2-3 (March 1990), 227-237. DOI:https://doi.org/10.1145/91478.91546
370
+
371
+ [21] S. Greenberg, C. Gutwin, and M. Roseman. 1996. Semantic telepointers for groupware. In Proceedings Sixth Australian Conference on Computer-Human Interaction, 54-61. DOI:https://doi.org/10.1109/OZCHI.1996.559988
372
+
373
+ [22] Saul Greenberg. 1996. A fisheye text editor for relaxed-WYSIWIS groupware. In Conference companion on Human factors in computing systems common ground - CHI '96, ACM Press, Vancouver, British Columbia, Canada, 212-213. DOI:https://doi.org/10.1145/257089.257285
374
+
375
+ [23] Saul Greenberg and Carl Gutwin. 2016. Implications of We-Awareness to the Design of Distributed Groupware Tools. Comput Supported Coop Work 25, 4 (October 2016), 279-293. DOI:https://doi.org/10.1007/s10606-016-9244-y
376
+
377
+ [24] Saul Greenberg and Mark Roseman. 2003. Using a room metaphor to ease transitions in groupware. In Sharing expertise: Beyond knowledge management, Mark S. Ackerman, Volkmar Pipek, and Volker Wulf (eds.). MIT Press, Cambridge, MA, 203-256.
382
+
383
+ [25] Jens Emil Grønbæk, Henrik Korsgaard, Marianne Graves Petersen, Morten Henriksen Birk, and Peter Gall Krogh. 2017. Proxemic Transitions: Designing Shape-Changing Furniture for Informal Meetings. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 7029-7041. Retrieved September 3, 2021 from https://doi.org/10.1145/3025453.3025487
384
+
385
+ [26] Jens Emil Grønbæk, Banu Saatçi, Carla F. Griggio, and Clemens Nylandsted Klokmose. 2021. MirrorBlender: Supporting Hybrid Meetings with a Malleable Video-Conferencing System. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, ACM, Yokohama Japan, 1-13. DOI:https://doi.org/10.1145/3411764.3445698
386
+
387
+ [27] Zixiu Guo, John D'Ambra, Tim Turner, and Huiying Zhang. 2009. Improving the Effectiveness of Virtual Teams: A Comparison of Video-Conferencing and Face-to-Face Communication in China. IEEE Transactions on Professional Communication 52, 1 (March 2009), 1-16. DOI:https://doi.org/10.1109/TPC.2008.2012284
388
+
389
+ [28] Carl Gutwin. 2002. Traces: Visualizing the immediate past to support group interaction. In Graphics interface, Citeseer, Calgary, Alberta, Canada, 43-50. DOI:https://doi.org/10.20380/GI2002.06
390
+
391
+ [29] Carl Gutwin, Scott Bateman, Gaurav Arora, and Ashley Coveney. 2017. Looking Away and Catching Up: Dealing with Brief Attentional Disconnection in Synchronous Groupware. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, ACM, Portland Oregon USA, 2221-2235. DOI:https://doi.org/10.1145/2998181.2998226
392
+
393
+ [30] Carl Gutwin and Saul Greenberg. 1998. Design for individuals, design for groups: tradeoffs between power and workspace awareness. In Proceedings of the 1998 ACM conference on Computer supported cooperative work (CSCW '98), Association for Computing Machinery, New York, NY, USA, 207-216. DOI:https://doi.org/10.1145/289444.289495
394
+
395
+ [31] Carl Gutwin and Saul Greenberg. 2002. A Descriptive Framework of Workspace Awareness for Real-Time Groupware. Computer Supported Cooperative Work (CSCW) 11, 3 (September 2002), 411-446. DOI:https://doi.org/10.1023/A:1021271517844
396
+
397
+ [32] Carl Gutwin, Saul Greenberg, and Mark Roseman. 1996. Workspace Awareness in Real-Time Distributed Groupware: Framework, Widgets, and Evaluation. In People and Computers XI, Springer, London, 281-298. DOI:https://doi.org/10.1007/978-1-4471-3588-3_18
398
+
399
+ [33] Carl Gutwin and Reagan Penner. 2002. Improving interpretation of remote gestures with telepointer traces. In Proceedings of the 2002 ACM conference on Computer supported cooperative work (CSCW '02), Association for Computing Machinery, New York, NY, USA, 49-57. DOI:https://doi.org/10.1145/587078.587086
400
+
401
+ [34] Maryam Habibi and Andrei Popescu-Belis. 2015. Keyword Extraction and Clustering for Document Recommendation in Conversations. IEEE/ACM Trans. Audio Speech Lang. Process. 23, 4 (April 2015), 746-759. DOI:https://doi.org/10.1109/TASLP.2015.2405482
402
+
403
+ [35] Mark S. Hancock and Sheelagh Carpendale. 2006. The Complexities of Computer-Supported Collaboration. (February 2006). DOI:https://doi.org/10.11575/PRISM/30519
408
+
409
+ [36] Sandra G. Hart and Lowell E. Staveland. 1988. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. In Advances in Psychology, Peter A. Hancock and Najmedin Meshkati (eds.). North-Holland, 139-183. DOI:https://doi.org/10.1016/S0166-4115(08)62386-9
410
+
411
+ [37] Matthias Heinrich, Franz Lehmann, Thomas Springer, and Martin Gaedke. 2012. Exploiting single-user web applications for shared editing: a generic transformation approach. In Proceedings of the 21st international conference on World Wide Web (WWW '12), Association for Computing Machinery, New York, NY, USA, 1057- 1066. DOI:https://doi.org/10.1145/2187836.2187978
412
+
413
+ [38] Jason Hill and Carl Gutwin. 2003. Awareness support in a groupware widget toolkit. In Proceedings of the 2003 international ACM SIGGROUP conference on Supporting group work (GROUP '03), Association for Computing Machinery, New York, NY, USA, 258-267. DOI:https://doi.org/10.1145/958160.958201
414
+
415
+ [39] Petra Isenberg, Danyel Fisher, Sharoda A. Paul, Meredith Ringel Morris, Kori Inkpen, and Mary Czerwinski. 2012. Co-Located Collaborative Visual Analytics around a Tabletop Display. IEEE Transactions on Visualization and Computer Graphics 18, 5 (May 2012), 689-702. DOI:https://doi.org/10.1109/TVCG.2011.287
416
+
417
+ [40] Hiroshi Ishii, Minoru Kobayashi, and Jonathan Grudin. 1993. Integration of interpersonal space and shared workspace: ClearBoard design and experiments. ACM Trans. Inf. Syst. 11, 4 (October 1993), 349-375. DOI:https://doi.org/10.1145/159764.159762
418
+
419
+ [41] Demetrios Karis, Daniel Wildman, and Amir Mané. 2016. Improving Remote Collaboration With Video Conferencing and Video Portals. Human-Computer Interaction 31, 1 (January 2016), 1-58. DOI:https://doi.org/10.1080/07370024.2014.921506
420
+
421
+ [42] Azam Khan, Igor Mordatch, George Fitzmaurice, Justin Matejka, and Gordon Kurtenbach. 2008. ViewCube: a 3D orientation indicator and controller. In Proceedings of the 2008 symposium on Interactive 3D graphics and games (I3D '08), Association for Computing Machinery, New York, NY, USA, 17-25. DOI:https://doi.org/10.1145/1342250.1342253
422
+
423
+ [43] Heini Korpilahti and Toni Koskinen. 2006. Five Levels of Collaboration-Five Levels of ICT Support? In Cooperative Systems Design, Parina Hassanaly, Thomas Herrmann, Gabriele Kunau, and Manuel Zacklad (eds.). IOS Press, 196-210.
424
+
425
+ [44] Romina Kühn and Thomas Schlegel. 2018. Mixed-focus collaboration activities for designing mobile interactions. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct (MobileHCI '18), Association for Computing Machinery, New York, NY, USA, 71-78. DOI:https://doi.org/10.1145/3236112.3236122
426
+
427
+ [45] Grete Helena Kütt, Kevin Lee, Ethan Hardacre, and Alexandra Papoutsaki. 2019. Eye-Write: Gaze Sharing for Collaborative Writing. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, ACM, Glasgow Scotland Uk, 1-12. DOI:https://doi.org/10.1145/3290605.3300727
428
+
429
+ [46] Ida Larsen-Ledet and Henrik Korsgaard. 2019. Territorial Functioning in Collaborative Writing: Fragmented Exchanges and Common Outcomes. Comput Supported Coop Work 28, 3-4 (June 2019), 391-433. DOI:https://doi.org/10.1007/s10606-019-09359-8
434
+
435
+ [47] Ida Larsen-Ledet, Henrik Korsgaard, and Susanne Bødker. 2020. Collaborative Writing Across Multiple Artifact Ecologies. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, ACM, Honolulu HI USA, 1-14. DOI:https://doi.org/10.1145/3313831.3376422
436
+
437
+ [48] Mankyung Lee, Joohee Kim, Kwangjae Lee, and Jundong Cho. 2016. CIRCLE ROUND; Flexible Communication using Multiple Access at Face-to-Face Meeting. In Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion - CSCW '16 Companion, ACM Press, San Francisco, California, USA, 65-68. DOI:https://doi.org/10.1145/2818052.2874326
438
+
439
+ [49] Roman Lissermann, Jochen Huber, Martin Schmitz, Jürgen Steimle, and Max Mühlhäuser. 2014. Permulin: mixed-focus collaboration on multi-view tabletops. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14), Association for Computing Machinery, New York, NY, USA, 3191-3200. DOI:https://doi.org/10.1145/2556288.2557405
440
+
441
+ [50] Luis López, Miguel París, Santiago Carot, Boni García, Micael Gallego, Francisco Gortázar, Raul Benítez, Jose A. Santos, David Fernández, Radu Tom Vlad, Iván Gracia, and Francisco Javier López. 2016. Kurento: The WebRTC Modular Media Server. In Proceedings of the 24th ACM international conference on Multimedia (MM '16), Association for Computing Machinery, New York, NY, USA, 1187-1191. DOI:https://doi.org/10.1145/2964284.2973798
442
+
443
+ [51] Hugo Lopez-Tovar, Andreas Charalambous, and John Dowell. 2015. Managing Smartphone Interruptions through Adaptive Modes and Modulation of Notifications. In Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI '15), Association for Computing Machinery, New York, NY, USA, 296-299. DOI:https://doi.org/10.1145/2678025.2701390
444
+
445
+ [52] Dennis Mancl and Steven D. Fraser. 2020. COVID-19's Influence on the Future of Agile. In Agile Processes in Software Engineering and Extreme Programming - Workshops (Lecture Notes in Business Information Processing), Springer International Publishing, Cham, 309-316. DOI:https://doi.org/10.1007/978-3-030-58858-8_32
446
+
447
+ [53] Jennifer Marlow, Eveline van Everdingen, and Daniel Avrahami. 2016. Taking Notes or Playing Games?: Understanding Multitasking in Video Communication. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, ACM, San Francisco California USA, 1726-1737. DOI:https://doi.org/10.1145/2818048.2819975
448
+
449
+ [54] Daniel C. McFarlane. 2002. Comparison of Four Primary Methods for Coordinating the Interruption of People in Human-Computer Interaction. Human-Computer Interaction 17, 1 (March 2002), 63-139. DOI:https://doi.org/10.1207/S15327051HCI1701_2
450
+
451
+ [55] Daniel C. McFarlane and Kara A. Latorella. 2002. The Scope and Importance of Human Interruption in Human-Computer Interaction Design. Human-Computer Interaction 17, 1 (March 2002), 1-61. DOI:https://doi.org/10.1207/S15327051HCI1701_1
452
+
453
+ [56] David McNeill. 1992. Hand and mind: What gestures reveal about thought. University of Chicago Press, Chicago, IL, US.
454
+
455
+ [57] Microsoft. 2021. Microsoft Teams announces new developer features | Build 2021. Retrieved September 3, 2021 from https://techcommunity.microsoft.com/t5/microsoft-teams-blog/microsoft-teams-announces-new-developer-features-build-2021/ba-p/2352558
458
+
459
+ [58] Microsoft. Use Whiteboard in Microsoft Teams. Retrieved September 3, 2021 from https://support.microsoft.com/en-us/office/use-whiteboard-in-microsoft-teams-7a6e7218-e9dc-4ccc-89aa-b1a0bb9c31ee?ui=en-US&rs=en-US&ad=US
460
+
461
+ [59] A. Mitchell and R. M. Baecker. 1996. The Calliope multiuser shared editor. Department of Computer Science, University of Toronto, Toronto, Canada.
462
+
463
+ [60] Dennis C. Neale, John M Carroll, and Mary Beth Rosson. 2004. Evaluating computer-supported cooperative work: models and frameworks. In Proceedings of the 2004 ACM conference on Computer supported cooperative work (CSCW '04), Association for Computing Machinery, New York, NY, USA, 112-121. DOI:https://doi.org/10.1145/1031607.1031626
464
+
465
+ [61] Thomas Neumayr, Hans-Christian Jetter, Mirjam Augstein, Judith Friedl, and Thomas Luger. 2018. Domino: A Descriptive Framework for Hybrid Collaboration and Coupling Styles in Partially Distributed Teams. Proc. ACM Hum.-Comput. Interact. 2, CSCW (November 2018), 128:1-128:24. DOI:https://doi.org/10.1145/3274397
466
+
467
+ [62] Carman Neustaedter and Saul Greenberg. 2012. Intimacy in long-distance relationships over video chat. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, Austin Texas USA, 753- 762. DOI:https://doi.org/10.1145/2207676.2207785
468
+
469
+ [63] Christine M. Neuwirth, David S. Kaufer, Ravinder Chandhok, and James H. Morris. 1994. Computer support for distributed collaborative writing: defining parameters of interaction. In Proceedings of the 1994 ACM conference on Computer supported cooperative work (CSCW '94), Association for Computing Machinery, New York, NY, USA, 145-152. DOI:https://doi.org/10.1145/192844.192893
470
+
471
+ [64] Jennifer K. Olsen, Nikol Rummel, and Vincent Aleven. 2021. Designing for the co-Orchestration of Social Transitions between Individual, Small-Group and Whole-Class Learning in the Classroom. Int J Artif Intell Educ 31, 1 (March 2021), 24-56. DOI:https://doi.org/10.1007/s40593-020-00228-w
472
+
473
+ [65] Andrei Popescu-Belis, Erik Boertjes, Jonathan Kilgour, Peter Poller, Sandro Castronovo, Theresa Wilson, Alejandro Jaimes, and Jean Carletta. 2008. The AMIDA Automatic Content Linking Device: Just-in-Time Document Retrieval in Meetings. In Machine Learning for Multimodal Interaction, Andrei Popescu-Belis and Rainer Stiefelhagen (eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 272-283. DOI:https://doi.org/10.1007/978-3-540-85853-9_25
474
+
475
+ [66] Irene Rae, Gina Venolia, John C. Tang, and David Molnar. 2015. A Framework for Understanding and Designing Telepresence. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW '15), Association for Computing Machinery, New York, NY, USA, 1552-1566. DOI:https://doi.org/10.1145/2675133.2675141
478
+
479
+ [67] Mark Roseman and Saul Greenberg. 1995. GROUPKIT: A Groupware Toolkit for Building Real-Time Conferencing Applications. In Readings in Human-Computer Interaction, Ronald M. Baecker, Jonathan Grudin, William A. S. Buxton, and Saul Greenberg (eds.). Morgan Kaufmann, 390-397. DOI:https://doi.org/10.1016/B978-0-08-051574-8.50040-6
480
+
481
+ [68] Mark Roseman and Saul Greenberg. 1996. Building real-time groupware with GroupKit, a groupware toolkit. ACM Trans. Comput.-Hum. Interact. 3, 1 (March 1996), 66-106. DOI:https://doi.org/10.1145/226159.226162
482
+
483
+ [69] Advait Sarkar, Sean Rintel, Damian Borowiec, Rachel Bergmann, Sharon Gillett, Danielle Bragg, Nancy Baym, and Abigail Sellen. 2021. The promise and peril of parallel chat in video meetings for work. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, ACM, Yokohama Japan, 1-8. DOI:https://doi.org/10.1145/3411763.3451793
484
+
485
+ [70] Kjeld Schmidt and Liam Bannon. 1992. Taking CSCW seriously: Supporting articulation work. Comput Supported Coop Work 1, 1-2 (March 1992), 7-40. DOI:https://doi.org/10.1007/BF00752449
486
+
487
+ [71] Kjeld Schmidt and Carla Simone. 1996. Coordination mechanisms: Towards a conceptual foundation of CSCW systems design. Comput Supported Coop Work 5, 2 (June 1996), 155-200. DOI:https://doi.org/10.1007/BF00133655
488
+
489
+ [72] M. Stefik, D. G. Bobrow, G. Foster, S. Lanning, and D. Tatar. 1987. WYSIWIS revised: early experiences with multiuser interfaces. ACM Trans. Inf. Syst. 5, 2 (April 1987), 147-167. DOI:https://doi.org/10.1145/27636.28056
490
+
491
+ [73] Anissa R. Stewart, Danielle B. Harlow, and Kim DeBacco. 2011. Students' experience of synchronous learning in distributed environments. Distance Education 32, 3 (November 2011), 357-381. DOI:https://doi.org/10.1080/01587919.2011.610289
492
+
493
+ [74] Anselm Strauss. 1988. The Articulation of Project Work: An Organizational Process. The Sociological Quarterly 29, 2 (June 1988), 163-178. DOI:https://doi.org/10.1111/j.1533-8525.1988.tb01249.x
494
+
495
+ [75] John C. Tang. 1991. Findings from observational studies of collaborative work. International Journal of Man-Machine Studies 34, 2 (February 1991), 143-160. DOI:https://doi.org/10.1016/0020-7373(91)90039-A
496
+
497
+ [76] Kimberly Tee, Saul Greenberg, and Carl Gutwin. 2006. Providing artifact awareness to a distributed group through screen sharing. In Proceedings of the 2006 20th anniversary conference on Computer supported cooperative work (CSCW '06), Association for Computing Machinery, New York, NY, USA, 99-108. DOI:https://doi.org/10.1145/1180875.1180891
498
+
499
+ [77] Philip Tuddenham and Peter Robinson. 2009. Territorial coordination and workspace awareness in remote tabletop collaboration. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 2139-2148. Retrieved September 7, 2021 from https://doi.org/10.1145/1518701.1519026
500
+
501
+ [78] Hana Vrzakova, Mary Jean Amon, McKenzie Rees, Myrthe Faber, and Sidney D'Mello. 2021. Looking for a Deal?: Visual Social Attention during Negotiations via Mixed Media Videoconferencing. Proc. ACM Hum.-Comput. Interact. 4, CSCW3 (January 2021), 1-35. DOI:https://doi.org/10.1145/3434169
504
+
505
+ [79] Hana Vrzakova, Mary Jean Amon, Angela E. B. Stewart, and Sidney K. D'Mello. 2019. Dynamics of Visual Attention in Multiparty Collaborative Problem Solving using Multidimensional Recurrence Quantification Analysis. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, ACM, Glasgow Scotland Uk, 1-14. DOI:https://doi.org/10.1145/3290605.3300572
508
+
509
+ [80] Jan Oliver Wallgrün, Mahda M. Bagher, Pejman Sajjadi, and Alexander Klippel. 2020. A Comparison of Visual Attention Guiding Approaches for 360° Image-Based VR Tours. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 83-91. DOI:https://doi.org/10.1109/VR46266.2020.00026
510
+
511
+ [81] Lauren Westendorf, Orit Shaer, Petra Varsanyi, Hidde van der Meulen, and Andrew L. Kun. 2017. Understanding Collaborative Decision Making Around a Large-Scale Interactive Tabletop. Proc. ACM Hum.-Comput. Interact. 1, CSCW (December 2017), 110:1-110:21. DOI:https://doi.org/10.1145/3134745
512
+
513
+ [82] William A. S. Buxton, Abigail J. Sellen, and Michael C. Sheasby. 1997. Interfaces for Multiparty Videoconferences. In Video Mediated Communication. Erlbaum, Hillsdale, N.J., 385-400.
514
+
515
+ [83] Heather Wiltse and Jeffrey Nichols. 2009. PlayByPlay: collaborative web browsing for desktop and mobile devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 1781-1790. Retrieved September 3, 2021 from https://doi.org/10.1145/1518701.1518975
516
+
517
+ [84] Nelson Wong and Carl Gutwin. 2014. Support for deictic pointing in CVEs: still fragmented after all these years'. In Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing (CSCW '14), Association for Computing Machinery, New York, NY, USA, 1377-1387. DOI:https://doi.org/10.1145/2531602.2531691
520
+
521
+ [85] Anna Xambó, Pratik Shah, Gerard Roma, Jason Freeman, and Brian Magerko. 2017. Turn-Taking and Chatting in Collaborative Music Live Coding. In Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences, ACM, London United Kingdom, 1-5. DOI:https://doi.org/10.1145/3123514.3123519
522
+
523
+ [86] Longqi Yang, David Holtz, Sonia Jaffe, Siddharth Suri, Shilpi Sinha, Jeffrey Weston, Connor Joyce, Neha Shah, Kevin Sherman, Brent Hecht, and Jaime Teevan. 2021. The effects of remote work on collaboration among information workers. Nat Hum Behav (September 2021). DOI:https://doi.org/10.1038/s41562-021-01196-4
524
+
525
+ [87] Jian Zhao, Michael Glueck, Petra Isenberg, Fanny Chevalier, and Azam Khan. 2018. Supporting Handoff in Asynchronous Collaborative Sensemaking Using Knowledge-Transfer Graphs. IEEE Trans. Visual. Comput. Graphics 24, 1 (January 2018), 340-350. DOI:https://doi.org/10.1109/TVCG.2017.2745279
526
+
527
+ [88] ohyay. Retrieved from https://ohyay.co/
528
+
529
+ [89] SpatialChat. Retrieved from https://spatial.chat/
530
+
531
+ [90] Wonder. Retrieved from https://www.wonder.me/
532
+
533
+ [91] Gather. Retrieved from https://www.gather.town/
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/U8p66V2PeEa/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,365 @@
1
+ § PEEK-AT-YOU: AN AWARENESS, NAVIGATION, AND VIEW SHARING SYSTEM FOR REMOTE COLLABORATIVE CONTENT CREATION
2
+
3
4
+
5
+ Figure 1. Overview of our research process.
6
+
7
+ § ABSTRACT
8
+
9
+ Remote work plays a critical and growing role in modern workplaces. A particular challenge for remote workers is mixed-focus collaboration, which involves frequent switching between individual and group tasks while maintaining awareness of others' activities. Mixed-focus collaboration is important in content creation as it can benefit from the greater perspective, larger skill set, and reduced bias of a group, but this work is difficult to do remotely because existing systems only provide information about collaborators passively or through cumbersome interactions. In this paper, we present Peek-At-You, a system of collaborative features leveraging integration between collaboration and communication software, including conversational position indicators, speaker's view peeking, and view pushing. Our evaluation shows these features help support awareness, understanding, and working state transitions. Finally, we discuss adapting the features to manage distractions and support various work artifacts.
10
+
11
+ Keywords: Groupwork, Remote Collaboration, Content Creation
12
+
13
+ Index Terms: Human-centered computing-Collaborative and social computing-Collaborative and social computing theory, concepts and paradigms-Computer supported cooperative work
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ Distributed teams require successful technology-based collaboration to function effectively [1]. Collaborative work, in which actions are "influenced by the presence of, knowledge of, or the activities of another person" [63:145], is necessary for distributed teams and can be conducted asynchronously or synchronously. We focus on synchronous collaboration, which has become rarer among remote workers due to barriers not solved by existing systems [43]. Recent shifts to remote work due to COVID-19 are associated with a decrease in synchronous communication [86]. Ellis et al.'s time-space matrix refers to this type of collaboration as "same time, different place" [15].
18
+
19
+ Synchronous collaboration comes in various forms, such as parallel work on separate tasks or collective work on one task. However, for remote teams, the more complex mixed-focus collaboration presents challenges [31]. This type of collaboration involves moving back and forth between individual tasks and shared work with other group members while maintaining awareness of their whereabouts and activities [30]. Quick and fluid transitions between individual and shared work are key to successful mixed-focus collaboration [35]. Poor support for transitions can add significant friction to collaboration.
20
+
21
+ Collaborative content creation, an instance of mixed-focus collaboration, may be undertaken for many reasons, e.g., distributing tasks, leveraging different expertise, avoiding bias, and gaining multiple perspectives [70]. It rates highly on the shared task and shared environment dimensions of collaboration [15] as people aim to create a cohesive final artifact. In contrast to collaborative writing, which has received attention in early groupware systems (e.g., [15,22,59]), we focus on the broader term "content creation" as formatting and graphical capabilities in modern groupware go beyond simple text and we design for steps other than writing, such as researching and decision making.
22
+
23
+ Ishii et al. described two types of collaborative spaces that tools can create: shared workspaces (spaces that allow sharing information, pointing to specific items, marking, etc.) and interpersonal space (spaces that allow verbal and nonverbal communication, eye contact, etc.) [40]. Synchronous remote collaboration requires multiple tools, such as audio and video calls for interpersonal space and real-time groupware for shared workspaces [68]. However, combining these tools can result in a disparate and unoptimized experience. For example, screensharing allows everyone to see the same thing but does not allow everyone to work on it, and transitioning between different users' sharing is tedious. Further, awareness widgets (e.g., Mural's presence icons or mini-map; Figure 2 bottom) are passive and may not provide accurate awareness of ongoing conversation, and transitioning from individual to subgroup work requires searching a list of users or decoding colors to find out what part of an artifact others are working on. Lastly, the combination of signals from the shared workspace and interpersonal space may lead to conflicting signals or inaccurate perceptions. For example, in video chat a collaborator's face becomes visible when they speak, giving the impression that they share a perspective on the workspace, even if they are seeing different parts of an artifact.
24
+
25
+ To address these challenges, we propose Peek-At-You, a system that integrates communication and collaborative software with elements that respond to users' activities and conversations. Peek-At-You includes Conversation-Based Location Indicators that overlay icons on users' video feeds to show which part of an artifact they are looking at (Figure 1D); a popup over the shared artifact to indicate the current speaker's position (Figure 1C); Speaker's View Peeking, which allows users to preview the active speaker's view without leaving their current location (Figure 1A); and View Pushing (Figure 1E), which streams a user's view as an overlay in the collaborative application, enabling quick transitions between individual and shared work. These features illustrate the potential of integrated systems for remote mixed-focus content creation.
26
+
27
+ This work makes three main contributions: 1) a series of formative observations using existing tools that informed our system's development; 2) Peek-At-You, a system and prototype implementation extending Google Docs; and 3) findings from our evaluation that show how integrated systems can foster the awareness, understanding, and transitions critical to enacting mixed-focus collaboration. We further discuss design iterations to minimize distractions and adapting the system to a variety of artifacts. By expanding the understanding of and system support for mixed-focus content creation, our work advances the ability of systems to support synchronous collaboration for remote workers.
28
+
29
+ § 2 RELATED WORK
30
+
31
+ Our work builds upon three areas: 1) systems for synchronous remote collaboration; 2) tools for supporting fundamentals of remote collaboration; and 3) specific characteristics and requirements of mixed-focus collaboration.
32
+
33
+ § 2.1 SYNCHRONOUS REMOTE COLLABORATION
34
+
35
+ Distributed teams "work together on a mutual goal or work assignment, interact from different locations, and therefore communicate and cooperate by means of information and communication technology" [18:459-460]. Remote collaboration plays a growing role [6,52] in many types of work [3,4,19,41].
36
+
37
+ Video-mediated communication (i.e., VMC or video chat) is a useful tool for supporting remote teams, work, and learning [27,41,73]. VMC apps often allow text chat and screensharing [76], but are not aware of interactions in other collaborative apps.
38
+
39
+ Another important tool for remote work is real-time groupware, which lets "distance-separated people work on a shared task in real time" [68:66] (e.g., editing documents, slides, or interface prototypes). Traditionally, real-time groupware has not integrated communication (e.g., [2,38,67]), even relying on speaking over room dividers for studies (e.g., [37,83]). Many modern apps also act as "silos" with little integration (e.g., Slack and Zoom focus on communication while Figma and Microsoft Word focus on content). Increasingly, VMC is being integrated with groupware (e.g., video calls in Google Docs [11] or whiteboards [58] and third-party apps [57] in Microsoft Teams calls), but conversational data such as the current speaker is not used to support collaboration.
40
+
41
+ Some research projects have sought to integrate the shared workspace with the interpersonal space: Ishii et al. did so for pairs by overlaying drawing atop a remote partner's video feed [40] and Grønbæk et al. did so for groups by allowing both spaces to become semi-transparent and overlay each other [26]. However, these systems have limitations including number of users, manually managing positions and opacities, and interfaces that differ significantly from single-user apps. Peek-At-You uses a logical integration: it surfaces workspace data in the interpersonal space and vice-versa, but does not overlay the two; this builds on the familiarity and scalability of traditional groupware and VMC.
42
+
43
+ Collaborative content creation tasks involve working with others to generate content; writing is a popularly studied example (e.g., [45,46,72,47,8]), but other examples include presentations and interface designs. Research suggests remote content generation is associated with less communication, more focus on organization of work, and less focus on feedback or content [8]. One particular aspect that collaborators negotiate is territories, i.e., portions of an artifact that are primarily edited or controlled by one person. These can be explicit or implicit and vary in duration [46]. Collaborators also negotiate transitions between tools (e.g., a document editor, notepad, and LaTeX editor) depending on their current needs [47].
44
+
45
+ § 2.2 VIEWS, AWARENESS, AND GESTURES
46
+
47
+ Screensharing is a longstanding paradigm for making applications "collaborative" [20]. Screensharing is asymmetrical: typically, one person shares at a time, decides what is shown, and manipulates the interface. "Remote control" may allow another user to move the cursor, but still does not support multiple real-time collaborators [20]. Screensharing provides WYSIWIS ("what you see is what I see") collaboration, but real-time groupware can go beyond this limitation as people use their own instance of the software. Real-time groupware can be WYSIWIS [17] or relaxed-WYSIWIS: viewports, representations, and formatting can vary per-user [21,72]. This increases the independence of users [66]. While tools that spatially integrate interpersonal space and workspace use screensharing and WYSIWIS views to mix content and VMC [26,88], Peek-At-You benefits from independence and shared control via integrated relaxed-WYSIWIS groupware.
48
+
49
+ In relaxed-WYSIWIS groupware, it can be difficult to maintain workspace awareness, i.e., "the up-to-the-moment understanding of another person's interaction with the shared workspace" [31:417]. Awareness includes multiple elements: who (presence, identity, authorship), what (action, intention, artifact), and where (location, gaze, view, reach) [31]. Gutwin et al. [32] devised several awareness supports, including Radar Views (a scaled down overview of the workspace), Multiple-WYSIWIS Views (scaled down mirrors of others' views), WYSIWID Views (a full-size view of the area around another's cursor), and Teleportals (temporary navigation to someone's viewport). Showing another user's screen or the area around their cursor is helpful, but scales poorly to groups because of limited screen real-estate [30]. More generally, existing awareness supports have significant drawbacks: because they are not aware of who the user is communicating with, they cannot optimize screen usage or highlight the most relevant information.
50
+
51
+ Supporting awareness involves a tradeoff with distractions. This can relate to usage of screen real-estate, visual feedback of others' work [30], and collaborators interrupting. For best results, care should be taken before, during, and after interruptions, to ensure that an interruption occurs at an opportune time, is handled completely, and that the original task is resumed easily [55]. There are multiple approaches to this, including immediate interruptions, negotiating when an interruption will occur, or having a mediator or schedule for interruptions [54]. Systems can employ these directly or support people in using them.
52
+
53
+ Like awareness, gestures and references are also critical elements of collaboration that are difficult to leverage in remote contexts [75]. Gestures allow people to communicate things that are difficult to verbalize, e.g., where an item is located, and occur very frequently during face-to-face collaboration [28,75]. One common type of gesture, deictic referencing, involves pointing to establish what object a person is referring to as they speak [56]. "Pointing" using a remotely displayed cursor (i.e., a telepointer) is common, but with relaxed-WYSIWIS the content being pointed to may be rendered differently or even lie outside the remote user's viewport [21]. References and gestures also require we-awareness ("the socially recursive inferences that let collaborators know that all are mutually aware of each other's awareness" [23:279]). The first step of gestures is establishing mutual orientation ("that both parties can see the gesture and the target") [84:1378], so systems must allow collaborators to establish a shared view and also be aware of this state. Our system allows people to quickly establish mutual orientation by pushing a view and seeing the current viewers or by jumping to others' positions and seeing who is in the same area.
54
+
55
+ § 2.3 CONFIGURATIONS, TRANSITIONS, AND ACTIVITIES IN MIXED-FOCUS COLLABORATION
56
+
57
+ Broadly, mixed-focus collaboration involves "individual tasks ... and shared work" [30:207]. To be more specific, the Coupling typology characterizes work as Light-weight Interactions, Information Sharing, Coordination, Collaboration, or Cooperation [60]. Mixed-focus collaboration occurs at the more tightly coupled levels, which are rarely done remotely [43]. Another way to characterize group work is subgroupings. Informally, this may include parallel (individual), pair/small-group, and group work [81]; formally, subgrouping can be described in more detail [61]. A third way to characterize group work is content focus. For example, one such categorization includes discussion, view engaged, sharing of the same view, same information but different views, same specific problem, same general problem, different problems, and disengaged [39]. These characterizations raise key concepts (coupling, groupings, and shared views) that we use to define important configurations for our system to support.
58
+
59
+ In mixed-focus collaboration, transitions between working configurations are key to success [35]. Transitions facilitate the three typical phases of collaboration: pre-process, in-process, and post-process [10]. Further, transitions facilitate various activities while in-process (e.g., creating content, presenting results, comparing results, and sharing content) [44]. Several research projects seek to support transitions. For classrooms, one allows teachers to plan and make planned or fluid transitions between individual, small-group, and whole-group phases [64]. For in-person collaboration, shape-changing furniture can aid transitions [25] or an extra shared device can aid moves from individual to group work [5]. For remote work between pairs, continuous screen sharing using a second monitor supports transitions [14]. For other remote work, the TeamWave system uses a room metaphor to ease transitions [24]. The Peek-At-You system supports transitions for fully remote groups, with a design intended for a variety of artifacts.
60
+
61
+ § 2.4 SUPPORT FOR COLLABORATIVE CONTENT CREATION
62
+
63
+ Creating content collaboratively requires both planning (defining the goals for the content, discussing the resources of each collaborator, defining the forms of collaboration to occur, and allocating tasks) and production (sketching, composing, and reviewing content at an individual and group level) [8]. The collaborators must communicate, coordinate, cooperate, and maintain awareness [60].
64
+
65
+ Commercial and research systems have worked to advance support for these key elements. Video chat supports conversations and awareness [12], which is important for planning (e.g., discussing how to distribute tasks) and production (e.g., reviewing others' individual work through discussion or speaking about how to compose individual work into a cohesive whole). Relaxed-WYSIWIS groupware allows people to do individual sketching or composition work (by taking on their own views) as well as group composing and reviewing work (because the task space is shared) [68]. Within groupware, awareness tools provide support for monitoring and understanding what others in the group are doing [32]. However, research suggests that existing tools still require remote groups to spend a large amount of time organizing their work, limiting their ability to focus on planning and discussing the content itself [8]. Research testing non-traditional spatial interfaces has shown that combining communication and collaborative tools has the potential to further support communication, organization, and awareness [26,40]. To best support collaborative content creation, we consider a non-spatial approach to integrating task and interpersonal space, seeking to support communication, awareness, and group work processes while maintaining the familiar interfaces of productivity and communication tools.
66
+
67
+ Integrating and sharing data between the task and interpersonal spaces may offer many benefits. First, it could reduce the burden of managing windows [62] and help avoid a sense of impoliteness related to multitasking [53]. Second, since collaborators using VMC spend 5-17% of the time looking at the video feeds [78,79], an integrated system could place awareness widgets near video feeds to make them more consistently visible. Third, awareness indicators on peoples' video feeds could tie information to easily scannable video feeds, rather than a row of circles that must be searched or interacted with to locate others' positions. Signals could also be prioritized based on the current speaker. Fourth, integration could enable unique view sharing tools that reduce difficulties with starting and managing shared views [30]: for example, privacy-preserving 'push' and 'pull' view sharing, quick transitions to co-editing, highly visible gesture cursors only when needed, and prioritized access to the current speaker's view. Fifth, an integrated approach could support multiple working styles; for example, using fewer awareness supports when a call is not active. Finally, an integrated approach could automatically respect boundaries (e.g., breakout rooms), avoid inconsistent information, and help everyone in a group call establish a shared workspace.
68
+
69
+ § 3 FORMATIVE OBSERVATIONS
70
+
71
+ < g r a p h i c s >
72
+
73
+ Figure 2. A collaboration setup used in our sessions: Zoom and Mural (people and content are for illustration, not from our data).
74
+
75
+ Previous research suggests that tightly coupled remote work is difficult, even with video chat [41]. To build on this understanding in the case of collaborative content, we conducted two formative sessions in which groups of five (7 men, 3 women; all participants were office workers employed within a research unit; all worked remotely at the time of the study) collaborated with existing tools (see Figure 2).
76
+
77
+ The sessions used Zoom and two real-time collaborative apps: Microsoft Word online and Mural (a digital whiteboard). The task was to create a business plan for two prompts: "A stall on a tropical beach full of tourists" and "A kiosk in a busy mall". Group members were assigned a role, Product Developer (three people) or Writer (two people), and worked on the following activities: (1) create a name for the business [All Roles], (2) create 10 products, each with a name and image [Product Developers], (3) write a paragraph explaining why people should come to the new business [Writers], and (4) agree on prices for the products [All Roles].
78
+
79
+ Each group completed the task twice, once using Microsoft Word and once using Mural (the order was switched between groups). For each tool, participants received an overview of available collaborative functionality (Microsoft Word: list of editors, jump to others' cursor; Mural: list of editors, shared selections, telecursors, mini-map, jump to or follow others' locations), then collaborated for 12 minutes. The prompt and roles differed for each tool.
80
+
81
+ The collected data included participants' screens, audio and video, and questionnaires after each task (NASA-TLX [36] and questions about who they worked with most, what parts of the task they worked on most, and any issues noticed while collaborating). A final questionnaire asked about preference between Word and Mural. Lastly, a semi-structured interview explored the group's organization, feeling of connectedness, and awareness of others.
82
+
83
+ § 3.1 OBSERVATIONS
84
+
85
+ We reviewed the recordings and survey data to identify issues participants encountered while collaborating remotely.
86
+
87
+ Audio channel limits small-group work. Our observations suggested that during the middle phase (the role-specific tasks) the product developers tended to occupy the audio channel. Annotating the recorded calls showed that product developers spent a total of 17.73 minutes speaking while writers spent 11.04 minutes speaking (total across both groups and tasks). The cause for this disparity may simply be the larger number of product developers, a reluctance to break into conversation on the part of the smaller subgroup, or the fact that writing work does not facilitate multitasking and discussion. This finding suggests that multiple subgroups may not benefit equally from a shared audio channel.
88
+
89
+ Written content can be more difficult to get feedback on. In one group, a writer asked for others to check over their paragraph, but no one did. In the other group, a writer said they were not happy with their paragraph and others should take a look, but again no one did. In contrast, ideas for products or names, which could be raised verbally, generally received quick feedback from others.
90
+
91
+ Misunderstandings and duplicated work were common and often unnoticed. In several instances, multiple people added the same product or created a heading and area for the same section. In several other cases, recordings showed two people simultaneously searching for images of the same product; this lack of coordination was not revealed until they returned to the workspace to see an image already added. While duplicated work can be desirable in some circumstances (e.g., brainstorming), the duplicated work we observed was silently discarded, not considered as an improvement.
92
+
93
+ Collaboration tools were infrequently used. Recordings revealed that participants did not use jump and follow. While recordings cannot show with certainty whether participants looked at Mural's mini-map, none interacted with it, and several participants were unaware of changes outside their viewport (which it displays). The infrequent usage may relate to the session length, task requirements not calling for such interactions, or friction when using these tools.
94
+
95
+ § 4 DESIGN CONSIDERATIONS FOR THE PEEK-AT-YOU SYSTEM
96
+
97
+ In addition to our formative observations, we based our design on five team configurations and four design goals.
98
+
99
+ § 4.1 TEAM CONFIGURATIONS TO SUPPORT
100
+
101
+ Researchers have developed frameworks for describing mixed-focus collaboration [39,61,81], but not for fully remote synchronous collaboration. Guided by these frameworks and our formative observations, we focus on five key group configurations. To describe these configurations, we use concepts identified in previous work [61,81] that we define as follows: a team is the collection of individuals collaborating in a call; a subgroup is a unit of two or more people collaborating; and a main group is a special subgroup that is maintaining the conversational floor. We introduce the "main group" unit because of poor support for parallel conversations in video calls [82] (e.g., this was observed with the product developer subgroups in our formative observations).
102
+
103
+ Considering previous work [61,81] and the peculiarities of video calls (e.g., breakout rooms, limited parallel conversations), we define five key configurations to support (see Table 1):
104
+
105
+ 1. Individual Work: each person works alone
106
+
107
+ 2. Individual + Subgroups Work: some people work alone, while others work together in pairs or small groups
108
+
109
+ 3. Subgroups Work: all people work in pairs or small groups
110
+
111
+ 4. Splintered Team Work: most people work together in a main group, while a few work individually
112
+
113
+ 5. Team Work: all people are working together in a main group
+
+ To define subgroup more precisely than "people collaborating", we considered states identified in co-located work [39] and adapted them to a remote and task-agnostic context by recognizing two key concepts: sharing a view and discussing. These concepts have been used in coding remote [77] and hybrid [61] collaboration, and help formalize differences between conversational and visual feedback seen in our formative observations. Therefore, we consider four subgroup states: not existing, discussing, working on the same content, and working on the same content and discussing.
114
+
115
+ Table 1. Our five team configurations illustrated for a team of eight.
116
+
117
+ < g r a p h i c s >
+
+ (Table rows: Individual work; Individual + subgroups work; Subgroups work; Splintered team work; Team work. Columns: Working individually; Working with a subgroup; Working with the main group. Cells contain illustrations of each configuration for a team of eight.)
142
+
143
+ The heart of mixed-focus collaboration is fluid transitions between states [35]. Therefore, it is important to consider not only the configurations or states that individuals, subgroups, and teams can take on, but also the variety of transitions that can occur between them (e.g., from Individual Work to Subgroup Work). The transitions from individual to subgroup or team work involve particular challenges: unlike face-to-face collaboration, physical movements and reconfigurations of the workspace that can support transitions [25] are not possible. People must quickly and accurately understand what others are working on to assess when a transition is appropriate and whether it has succeeded. On a subgroup level, transitions between discussing and not discussing are mainly constrained by the availability of the audio channel. Transitions from not sharing a view to sharing a view can be more difficult to do quickly without system support.
144
+
145
+ § 4.2 DESIGN GOALS
146
+
147
+ Based on existing literature, formative observations, and our configurations to support, we address blockers to remote mixed-focus content creation with a system designed around four goals:
148
+
149
+ DG1. Build awareness when and where needed. Systems for mixed-focus collaboration should actively build awareness of collaborators' actions and positions, rather than relying on passive indicators. Previous approaches to this goal include detecting references to documents in conversation to surface relevant files [34,65], detecting periods of inattention and using highlighting, motion traces, or replays to catch up [29], or manually configuring avatars that can notify users of certain actions by others [16]; we focus on automatically and continuously supporting awareness.
150
+
151
+ DG2. Support understanding of conversation. Conversations can be difficult to understand when views differ between collaborators, and existing solutions for maintaining awareness are passive in these situations. Previous approaches to this goal include awareness widgets like mini-maps [30,32], detecting content references in text messages and determining the probability of misunderstandings based on gaze detection [9], or sharing gaze positions with other users [45]. We focus on using integration with the interpersonal space to go beyond traditional awareness indicators without requiring gaze detection hardware.
152
+
153
+ DG3. Allow fast and simple transitions between collaborative states. Understanding what others can see and establishing shared views should be quick and easy, supporting transitions. We focus on supporting lightweight transitions that occur without changing the communication medium or work artifact. Previous approaches have also enforced additional structure. One structured approach is turn-taking of control (e.g., driver and viewer roles for editing documents [48] or music live coding [85]); however, research suggests that verbal and non-verbal communication can obviate the need for rigid turn-taking protocols [13], so we focus on more flexible state transitions in our video-chat based system. A second structured approach is handoff of information (e.g., using a specialized visualization for collaborative sensemaking [87]); however, for collaborative content creation, we focus on leveraging views and positions in the existing artifact to support transitions.
154
+
155
+ DG4. Provide means for lightweight feedback. Assistance and feedback should be easy to provide. Visual communication should be supported for gestures, referencing, and times when the audio channel is occupied. The primary previous approaches for lightweight feedback include telecursors [21] and screensharing [20]. We focus on simpler and more transient view-sharing and enabling gesture-friendly cursors within these views.
156
+
157
+ § 5 PEEK-AT-YOU: NEW COLLABORATIVE SYSTEM LEVERAGING INTEGRATION
158
+
159
+ Based on our formative observations and design goals, we created Peek-At-You, a set of collaborative features implemented as a Chrome extension that extends Google Docs. The design reflects the specific case of document editing, which is the focus of our evaluation, but the features are designed to apply to various artifacts (e.g., slides, digital whiteboards, 3D models, or interface designs). Our system focuses on allowing a tightly coupled group to successfully leverage the rich communication and awareness possible within a single video call; therefore, we did not include other established methods of managing communication that split up conversations (e.g., text chat [69], breakout rooms, or spatial video chat [88-91]).
160
+
161
+ § 5.1 CONVERSATION-BASED POSITION INDICATORS
162
+
163
+ Position indicators help collaborators understand where in the document others are working (DG2). Because our system integrates video chat with collaborative software, this information can be surfaced where and when we expect it to be most useful (DG1). First, icons are shown in the corner of each user's video feed (see Figure 3, A), indicating the collaborator's current page and whether they are in the same place, above/below, or in another tab. This is based on our observation that users who are speaking might erroneously assume they are looking at the same thing. Clicking the icon scrolls to the collaborator's position (DG3). Placing awareness supports on others' video feeds is a unique approach that does not use screen space within the task area. Second, an active speaker popup is shown at the bottom of the work area when a collaborator is speaking (see Figure 3, B), containing the same icons as in the speaker's video feed and a description of their state (e.g., "below you (page 6) in the document" or "in another tab"). For users focused on the shared workspace, this actively shows relevant information from the interpersonal space without the need to scan the indicators in the video chat. Conversational position indicators on video feeds provide many components of awareness: presence and identity via video feeds, location and view via position indicators, and action and artifact via jumping. Further, understanding others' viewpoints and navigating to them afford the fundamentals of we-awareness [23], a key requirement for discussing content. This is important as effective communication and organization could allow more content-focused work time [8].
164
+
165
+ < g r a p h i c s >
166
+
167
+ Figure 3. Video call integrated into Google Docs using Peek-at-You. Conversation-Based Position Indicators appear on others' video feeds (A) and at the bottom of the work area (B).
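+ To make the indicator logic concrete, the sketch below shows one way the relative-position classification and popup description could be computed. This is our illustration in the prototype's language (TypeScript), not the released code; the type and function names are hypothetical, and the 66% overlap threshold follows the implementation details in § 5.4.
+
+ ```typescript
+ // Hypothetical sketch (not the released prototype code): map a collaborator's
+ // state to a position indicator and an active-speaker popup description.
+ type CollaboratorState = {
+   name: string;
+   inDocumentTab: boolean; // false if they have switched to another tab
+   page: number;           // page at the top of their viewport
+   viewportTop: number;    // scroll offsets in document pixels
+   viewportBottom: number;
+ };
+
+ type Indicator = "same-place" | "above" | "below" | "another-tab";
+
+ function classify(me: CollaboratorState, other: CollaboratorState): Indicator {
+   if (!other.inDocumentTab) return "another-tab";
+   const overlap =
+     Math.min(me.viewportBottom, other.viewportBottom) -
+     Math.max(me.viewportTop, other.viewportTop);
+   const myHeight = me.viewportBottom - me.viewportTop;
+   if (overlap / myHeight > 0.66) return "same-place"; // >66% viewport overlap
+   return other.viewportTop < me.viewportTop ? "above" : "below";
+ }
+
+ // Text for the active speaker popup, e.g., "below you (page 6) in the document".
+ function popupText(me: CollaboratorState, speaker: CollaboratorState): string {
+   switch (classify(me, speaker)) {
+     case "another-tab": return `${speaker.name} is in another tab`;
+     case "same-place":  return `${speaker.name} is in the same place as you`;
+     case "above":       return `${speaker.name} is above you (page ${speaker.page}) in the document`;
+     case "below":       return `${speaker.name} is below you (page ${speaker.page}) in the document`;
+   }
+ }
+ ```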
168
+
169
+ < g r a p h i c s >
170
+
171
+ Figure 4. Peeking a collaborator's view using our system. Viewers can react (A) and gesture with cursor trails (B).
172
+
173
+ § 5.2 SPEAKER'S VIEW PEEKING
174
+
175
+ Quick and fluid transitions between individual and shared work are key to mixed-focus collaboration [35], and getting feedback is an important component of creating content together [8]. Therefore, the active speaker pop-up described above can be hovered to quickly preview the current speaker's view (see Figure 4). This functionality is inspired by our observation that content such as writing can be difficult to get feedback on, and people may be hesitant to leave their position to see what someone else is talking about (DG2, DG3). When peeking someone else's view, viewers can react using a set of five reactions (DG4): thumbs up/down (👍/👎), eyes (👀), ok (👌), and thinking (🤔). Viewers can also use cursor trails (colored dots that temporarily appear as their mouse cursors move on the preview); these specialized telecursors are well suited to gesturing [33] (DG2, DG4).
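+ As an illustration of how cursor trails might be implemented, the sketch below (our assumption, with a hypothetical `send` transport; not the released code) normalizes mouse positions over the preview, shares them with other viewers, and renders dots that fade over a short lifetime:
+
+ ```typescript
+ // Hypothetical cursor-trail sketch: positions are normalized to the preview,
+ // broadcast, and drawn as dots whose opacity decays with age.
+ type TrailDot = { x: number; y: number; bornAt: number };
+
+ const TRAIL_LIFETIME_MS = 800;
+ const dots: TrailDot[] = [];
+
+ function onPreviewMouseMove(e: MouseEvent, preview: HTMLElement,
+                             send: (msg: object) => void): void {
+   const rect = preview.getBoundingClientRect();
+   // Normalize so the dot lands at the same content position for every viewer.
+   send({
+     type: "cursor-trail",
+     x: (e.clientX - rect.left) / rect.width,
+     y: (e.clientY - rect.top) / rect.height,
+   });
+ }
+
+ function onTrailMessage(msg: { x: number; y: number }): void {
+   dots.push({ x: msg.x, y: msg.y, bornAt: performance.now() });
+ }
+
+ // Render loop: call once with the overlay canvas context to start.
+ function drawTrails(ctx: CanvasRenderingContext2D): void {
+   const now = performance.now();
+   ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
+   while (dots.length && now - dots[0].bornAt > TRAIL_LIFETIME_MS) dots.shift();
+   for (const d of dots) {
+     ctx.globalAlpha = 1 - (now - d.bornAt) / TRAIL_LIFETIME_MS; // fade out
+     ctx.beginPath();
+     ctx.arc(d.x * ctx.canvas.width, d.y * ctx.canvas.height, 4, 0, 2 * Math.PI);
+     ctx.fill();
+   }
+   ctx.globalAlpha = 1;
+   requestAnimationFrame(() => drawTrails(ctx));
+ }
+ ```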
176
+
177
+ § 5.3 VIEW PUSHING
178
+
179
+ In addition to peeking others' views, the system allows participants to quickly share their own view with everyone by clicking a "share view" button (DG3, see Figure 5). Like view peeking, this feature is based on the need to easily get feedback and transition between working styles; this feature particularly supports transitions to full-group work. Other collaborators can dismiss the shared view if it is not relevant to them, and the sharer sees a list of current viewers (DG1). Viewers can use the same reactions and cursor trails that are available when peeking (DG4). For quick transitions, pushing ends any existing view push (DG3). By offering both View Peeking and View Pushing, the system provides robust support for passive and active maintenance of the action, artifact, and view components of workspace awareness. Additionally, by enabling a shared context for conversation, these features may allow collaborators to discuss more nascent aspects of workspace awareness such as intention.
180
+
181
+ < g r a p h i c s >
182
+
183
+ Figure 5. Pushing a view to collaborators. Pushing is started or stopped with one click (A), lists the current viewers (B), and shows reactions on the content area (B) and video feeds (C).
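+ The paper does not specify a wire format for view pushing, but one plausible message protocol, consistent with the behavior described above (one active push at a time, dismissable views, a viewer list, and reactions), might look like this sketch:
+
+ ```typescript
+ // Hypothetical message shapes for view pushing over a WebSocket channel.
+ type PushMessage =
+   | { type: "push-start"; sharerId: string }    // replaces any active push
+   | { type: "push-stop"; sharerId: string }
+   | { type: "push-dismiss"; viewerId: string }  // a viewer hides the view
+   | { type: "viewer-list"; sharerId: string; viewerIds: string[] }
+   | { type: "reaction"; viewerId: string; emoji: "👍" | "👎" | "👀" | "👌" | "🤔" };
+
+ // Server-side handling sketch: only one push may be active at a time.
+ let activeSharer: string | null = null;
+
+ function handle(msg: PushMessage, broadcast: (m: PushMessage) => void): void {
+   if (msg.type === "push-start") {
+     if (activeSharer && activeSharer !== msg.sharerId) {
+       broadcast({ type: "push-stop", sharerId: activeSharer }); // end old push
+     }
+     activeSharer = msg.sharerId;
+   } else if (msg.type === "push-stop" && msg.sharerId === activeSharer) {
+     activeSharer = null;
+   }
+   broadcast(msg); // forward to all call participants
+ }
+ ```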
184
+
185
+ § 5.4 PROTOTYPE IMPLEMENTATION DETAILS
186
+
187
+ Our prototype extends an existing groupware system, Google Docs. Although our formative observations were done using Microsoft Word, we chose Google Docs, as its HTML was easier to extend. We integrated video chat into the Google Docs page via a sidebar on the right side, where the active speaker is highlighted with a green outline, and added the previously described features.
188
+
189
+ The prototype extends Google Docs using a Google Chrome extension. React and TypeScript are used to inject the system's interface, capture camera and tab feeds, and track viewports. The video chat uses WebRTC, with Kurento Media Server [50] for server-side recording and hark.js (https://github.com/otalk/hark) for active speaker detection. A NodeJS server and WebSockets are used to sync collaborators' states (position, active tab, view shares, etc.). Position icons show whether collaborators' scroll positions are the same as the user's (>66% viewport overlap) or above/below, and whether others are sharing views or in another tab. Users are asked to share the tab when joining the call; this stream is always transmitted for recording and forwarded to others as needed for view sharing.
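+ A minimal sketch of the state sync described above, assuming a throttled WebSocket broadcast of each client's scroll position and tab visibility (the endpoint URL, message fields, and user id are placeholders, not the prototype's actual protocol):
+
+ ```typescript
+ // Hypothetical client-side state sync for position indicators.
+ type StateUpdate = {
+   userId: string;
+   tabActive: boolean;
+   viewportTop: number;
+   viewportBottom: number;
+ };
+
+ const ws = new WebSocket("wss://example.invalid/peek-sync"); // placeholder URL
+ const latest = new Map<string, StateUpdate>(); // last known state per user
+
+ let pending: StateUpdate | null = null;
+ setInterval(() => { // throttle to ~5 updates per second
+   if (pending && ws.readyState === WebSocket.OPEN) {
+     ws.send(JSON.stringify(pending));
+     pending = null;
+   }
+ }, 200);
+
+ window.addEventListener("scroll", () => {
+   pending = {
+     userId: "me", // hypothetical local id
+     tabActive: document.visibilityState === "visible",
+     viewportTop: window.scrollY,
+     viewportBottom: window.scrollY + window.innerHeight,
+   };
+ });
+
+ ws.addEventListener("message", (e) => {
+   const update: StateUpdate = JSON.parse(e.data);
+   latest.set(update.userId, update);
+   // ...re-render the icons on video feeds from `latest` here.
+ });
+ ```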
190
+
191
+ § 6 SYSTEM EVALUATION
192
+
193
+ We studied six groups of five people in a mixed-focus content creation task to gather initial feedback about our Peek-at-You system. The study was approved by the institutional ethics board.
194
+
195
+ § 6.1 TASK
196
+
197
+ The study task involved creating a plan for a hypothetical business merger. It differs from the business plan task used for our formative observations in two ways. First, the task begins with an existing document, ensuring sufficient content to create a need for positional awareness. Second, an editor role was added to increase the variety of working configurations and transitions. Participants were assigned one of three roles: Writer (two participants), Marketer (two participants), or Editor (one participant). Groups received a document containing information about a fictional company and three candidate companies for the merger. Their task was to select the best candidate and plan for the new company:
198
+
199
+ * (All Roles) Review background info on the company and three merger options, then choose a merger option
200
+
201
+ * (Writers) Write one or two paragraphs for investors about why the merger will help the company grow
202
+
203
+ * (Marketers) Create a New Company Name, New Hero Offering, and Marketing Plan for the company
204
+
205
+ * (Editor) Help out the others as needed and check all new content for quality/consistency
206
+
207
+ The evaluation included two conditions (Video Chat Only and Peek-At-You) in a within-groups design, so two merger planning documents were created, allowing groups to perform the task twice. The first document described a bakery chain choosing between dessert bakery, deli, and smoothie chains. The second document described a sportswear retail chain choosing between local-focused, sporting equipment, and yoga clothing chains. Each document included background information, a product summary, and a SWOT analysis for the company (1.5 pg.); investor statement placeholder (0.5 pg.); overview, strengths, and weaknesses for each merger option (3 pg.); merger decision placeholder (1 pg.); new name and hero offering placeholders (1 pg.); and marketing plan placeholder (1 pg.). Placeholders included a reminder of what to add and scratch space for ideas or notes.
208
+
209
+ § 6.2 PROCEDURE
210
+
211
+ The study lasted 75 minutes and was conducted remotely via Zoom, except for the collaborative work, which used the video chat integrated into our prototype system. After giving informed consent, participants completed the task twice, once with only the video chat enabled (Video Chat Only) and once with all features enabled (Peek-At-You). Each time, they were given instructions and roles to collaborate using Google Docs. In the Video Chat Only condition, the instructions pointed to the collaborative functionality of Google Docs, such as the editor list at the top of the page. In the Peek-At-You condition, the instructions were a brief interactive tutorial. Next, participants opened a copy of the task instructions in a separate tab for reference and spent 15 minutes collaborating. After finishing, participants also completed a survey about the experience. The order of conditions and roles was counterbalanced between groups and tasks.
212
+
213
+ § 6.3 MEASURES
214
+
215
+ After each condition, a questionnaire asked about:
+
+ * NASA-TLX [36]
+
+ * Collaborative Experience: three items rating participants' Distractedness ("I was frequently distracted as I tried to work"), Awareness ("I had a good sense of what other people were working on at all times"), and Understanding of Discussion ("It was easy to follow the ongoing discussion") on a 7-point Likert scale (strongly disagree to strongly agree)
+
+ * Feedback: open feedback about the system or experience
220
+
221
+ After completing both conditions, a final survey asked which condition participants preferred and why. Finally, a brief (10-minute) semi-structured group interview was conducted regarding the ability to get feedback from others, ability to understand others, desire and ability to maintain awareness, and reasons for using or not using Peek-At-You's features. Participants' screens and video calls were recorded during the tasks and log data was collected.
222
+
223
+ § 6.4 PARTICIPANTS
224
+
225
+ Participants were recruited from within our institution using email and Slack channels and compensated through an internal award program (approximate value pre-tax 70 USD). Participants were recruited in six groups of five people; due to a last-minute cancellation, one group completed the study with four members rather than five, which was accommodated by omitting the editor role. In total, 29 participants completed the study (17 Female, 12 Male; Age: mean=29.4, SD=9.3). Professions of the participants were varied (Software Developer/Engineer=11, Design/UX=3, Analyst=5, Researcher=3, Management/Supervision=2, Community Support=2, Marketing=2, Legal=1), but all were experienced with remote work. Most participants were unacquainted, and none were coworkers. While our sample size and the complex dynamics within a five-person group interaction did not allow us to account for these instances within our analysis, we expect the fixed task, randomly assigned roles, and within-group study design minimized any potential effects of these differences, and we did not make observations related to acquaintedness.
226
+
227
+ § 6.5 EVALUATION FINDINGS
228
+
229
+ Our findings focus on understanding how much participants used Peek-At-You's features, their preferences for our system or the Video Chat Only condition, and their feedback on each condition.
230
+
231
+ § 6.5.1 SYSTEM USAGE
232
+
233
+ < g r a p h i c s >
234
+
235
+ Figure 6. Collaborative feature usage per-participant. Averages are provided across all participants and per-role.
236
+
237
+ Usage of our system's features was analyzed using log data. In the 15-minute session, participants used the jump functionality of the video overlay icons an average of 1.72 times, the peek functionality an average of 3.21 times, and the push functionality an average of 0.45 times (see Figure 6 for details). While push was used less than peek on a per-participant basis, it is worth noting that pushes affect the entire group whereas peeks are displayed only to the local user. Another important caveat to these usage numbers is that they do not capture how often participants looked at the Conversation-Based Position Indicators, usage which is better captured through survey and interview responses.
238
+
239
+ § 6.5.2 SURVEY RESPONSES
240
+
241
+ Participants' responses to the NASA-TLX were similar in the Video Chat Only and Peek-At-You conditions (see Table 2).
242
+
243
+ Table 2. NASA-TLX responses. Values are mean (SD).
+
+ | NASA-TLX Scale | Video Chat Only | Peek-At-You | Wilcoxon Signed-Ranks |
+ | --- | --- | --- | --- |
+ | Mental Demand | 6.48 (2.11) | 6.48 (1.45) | Z=-0.06; p=.95 |
+ | Physical Demand | 2.48 (2.52) | 2.03 (2.18) | Z=-1.47; p=.14 |
+ | Temporal Demand | 6.07 (2.12) | 6.52 (2.13) | Z=-0.76; p=.45 |
+ | Performance | 5.24 (2.71) | 4.59 (2.21) | Z=-0.85; p=.40 |
+ | Effort | 5.93 (1.98) | 5.76 (2.08) | Z=-0.17; p=.87 |
+ | Frustration | 4.86 (2.67) | 4.69 (2.04) | Z=-0.38; p=.70 |
268
+
269
+ Responses to the Collaborative Experience questions show some differences between the Video Chat Only and Peek-At-You conditions (see Figure 7). Participants expressed greater agreement regarding their understanding of the conversation in the Peek-At-You condition, but this difference was not significant in a Wilcoxon Signed-Ranks test (Z=-1.101; p=0.267). Participants rated their awareness of collaborators higher in the Peek-At-You condition (median=Somewhat agree) than in the Video Chat Only condition (median=Somewhat disagree); the difference was significant in a Wilcoxon Signed-Ranks test (Z=-2.15; p=0.03). Participants did not rate their level of distraction significantly differently in the two conditions (Wilcoxon Signed-Ranks: Z=-0.26; p=0.80).
270
+
271
+ < g r a p h i c s >
272
+
273
+ "Strongly disagree "Disagree "Somewhat disagree "Neutral "Somewhat agree "Agree "Strongly agree
274
+
275
+ Figure 7. Responses to Collaborative Experience questions (*p<.05).
276
+
277
+ A majority of participants preferred the Peek-At-You condition (n=21). A subset of participants preferred the Video Chat Only condition (n=8). Among participants preferring Video Chat Only, roles in the Video Chat Only condition were Editor (N=2), Writer (N=4), and Marketer (N=2), while their roles in the Peek-At-You condition were Writer (N=3) and Marketer (N=5). Participants provided open-ended feedback regarding the reasoning for their preferences. Among the participants who preferred the Video Chat Only condition, four did not feel the new features were needed to maintain awareness, or felt that a high degree of awareness was not needed in this task. The other four found the features distracting due to rapid visual changes. Five of these eight participants experienced Peek-At-You with the Marketer role; while the sample is not large enough to test for significance, it is possible that the marketing role in particular was well suited to verbal discussion and required less in-artifact coordination. Among the participants who preferred the Peek-At-You condition, reasons for the preference were varied but related to usefulness in supporting awareness and understanding. A qualitative analysis of participants' feedback was performed to provide greater insight into these perceptions.
278
+
279
+ § 6.5.3 PARTICIPANT FEEDBACK
280
+
281
+ To analyze participants' experiences and feedback we used an open coding approach, where two authors separately coded transcripts until no new codes appeared, then reviewed each other's coding for agreement; this covered data from two groups. The first author coded the remaining data and identified eight themes, which were merged into six themes after discussion between the two authors.
282
+
283
+ Peek-At-You aids awareness. Participants found the Peek-At-You system to be interactive and helpful in maintaining awareness of collaborators' locations, roles, thought processes, and task progress. The conversation-based position indicators helped participants stay aware of others' locations and focus on what they wanted to share. Displaying collaborators' positions helped communicate roles by showing which areas of the document each person was working on. Tracking indicators over time can also reveal thought processes, such as referencing one part of the document to help with writing elsewhere; P11 explained the system "definitely helped us understand, like, who was working on what and what they were, what their thought process was." In addition to process, position indicators can communicate progress on a task: "even, I think, something as simple as whether we've finished reading, and that was easy to understand in the [Peek-At-You condition]" (P26).
284
+
285
+ Peek-At-You supports conversational understanding. Participants found it was easier to understand what others were speaking about with our system, with P13 stating that "it was easier to know what someone else was talking about or referring to". This suggests that the additional awareness of others' positions, thought processes, task progress, and roles provides context that makes following the conversation easier.
286
+
287
+ Participants specifically appreciated the popup showing the active speaker's location, as it aided with following the conversation. For P26, "it was really helpful to see the speaker's view and be notified when I was not on their view." P11 found the popup "was a little bit distracting sometimes, but it definitely was helpful." This suggests both roles of the popup (i.e., warning when the listener is not seeing the same part of the document and view peeking) are valuable for conversational understanding.
288
+
289
+ Peek-At-You aids transitions. Participants reported that Peek-At-You's features were helpful for transitioning to mixed-focus collaboration. For example, P21 found it "easier to track others, share progress, find one another". Position icons aided in grouping up; in one instance P25 explained "I couldn't find the section where we were supposed to be writing and I was able to jump up to where P27 was, was taking a look. So yeah, I found it helpful." Jumping via others' video feeds also helped with temporary transitions, e.g., "I was doing the marketing stuff, so I was like, looking at what, P10 and P11 were like adding just so that I could like, know what was happening like on the other part, and yeah I was just like jumping to their pages with the little, little icon on the video." More generally, P23 found "the features allowed me to quickly hop back and forth between where other people were looking and working."
290
+
291
+ Pushing a view was a quick way to ensure everyone was looking at the same thing. As P4 explained, "I was able to share my screen on the merger page and everyone else could pop-up on my screen, so they didn't have to scroll all the way back up." P17 noted that Peek-At-You's features "made it easier to share views and get input without having to completely leave the work you were doing." In contrast, P6, P8, and P11 described challenges in transitioning to group work in the Video Chat Only condition.
292
+
293
+ Audio channel aids awareness but is difficult to share. Some participants mentioned that the audio channel helps maintain awareness. However, many participants reported that sharing the audio channel was challenging. For example, at times "others had to take a pause until the main conversation was over or find another way to speak without disruption" (P3). This was especially apparent when participants were working in small-group configurations. When P1 and P2 worked together, P1 found "it was hard to coordinate with P2 because we didn't want to, like, talk over like P3 and P4 talking." P16 likewise found that "it was annoying trying to have a discussion with just part of the team while other[s] were, are having a conversation."
294
+
295
+ While breakout rooms or selective muting are possible solutions, these approaches are also likely to reduce awareness within a group. Collaborative features like the ones in our system may attenuate the need for breakout rooms by reducing verbal articulation work (the work of working together [71,74]).
296
+
297
+ Collaborative features can be distracting. Although helpful with awareness and understanding, some participants found certain aspects of our system distracting, such as rapid visual changes and shared views taking up too much screen space. To manage these distractions, participants suggested using collaborative features only during certain phases or being able to turn them on and off as needed. For example, one participant felt the Peek-At-You feature was only important initially, during brainstorming and discussion, while another suggested having the feature be toggle-able.
298
+
299
+ Collaborative features may be more useful with experience or in other tasks. Participants also explained that because the features were new, they may not have fully learned or thought to use all of them during the study. P8 explained that "since the UI was new, we were getting distracted because of that", but "the more we use this tool, the more efficient ways we will find to make the most of it." P23 felt similarly: they "didn't use some of the features consciously due to familiarity. With more exposure to the extension and conscious effort it will become more natural."
300
+
301
+ Beyond gaining experience, participants found the system would be useful in other scenarios, particularly those involving collaborative work or presentations with multiple slides, as it would reduce the amount of scrolling (P6). They highlighted the usefulness of sharing through push and peek, as well as the preview feature for keeping track of others' progress without interrupting their own work.
302
+
303
+ § 7 DISCUSSION
304
+
305
+ We discuss how our system supports fluid working configurations, why existing applications should enable extensibility to support the functionality of Peek-At-You, and adapting our system to reduce distractions.
306
+
307
+ § 7.1 HOW IN-THE-MOMENT INDICATORS SUPPORT TRANSITIONS
308
+
309
+ Our evaluation shows that Peek-At-You supports smooth transitions in mixed-focus collaboration by increasing awareness of co-editors' positions, roles, thought processes, and task progress. Survey data confirmed that our system supports this type of awareness, which is important for identifying opportune times to interrupt the current working state of the group during transitions and to understand when transitions into subgroups or a main group succeed. For example, transitioning from individual to subgroup work may involve identifying others with the same role. Similarly, transitioning to teamwork may involve identifying when everyone has made sufficient progress on their individual work first. More generally, transitions are aided by understanding the processes of collaborators and choosing an opportune time to interrupt the current working state of the group [31].
310
+
311
+ Sharing views can also support transitions. While sharing or viewing of private information is simple when face-to-face [49], we show that one-click and conversational interactions can make view sharing equally easy in a remote context. View pushing and peeking further allow users to jump to the shared location for a complete editing experience. Unlike spatial video chat systems that allow users to share views via screensharing and move participant videos around on top of the shared view to group up around a particular element [88], our system supports full content control after jumping.
312
+
313
+ Using awareness of others' positions and actions is a quick and lightweight way to transition into different working configurations while maintaining awareness of the rest of the group. However, traditional approaches, such as breakout rooms or position-based audio muting [89-91], provide stronger separations between groups. While enabling focused work, this limits awareness of other subgroups, leading to challenges, such as unawareness about what a breakout group is working on or when to interrupt.
314
+
315
+ § 7.2 PEEK-AT-YOU VS. OUR FORMATIVE OBSERVATIONS
316
+
317
+ We return to the four themes identified in our formative observations to compare the findings to our evaluation study.
318
+
319
+ "Audio channel limits small-group work." Our system's awareness features reduced the need for verbal articulation work, which may ease the experience of sharing an audio channel. However, sharing an audio channel was still difficult at times, and other solutions such as selective muting, subgrouping, or breakout rooms, are needed scale to arbitrary group sizes.
320
+
321
+ "Written content can be more difficult to get feedback on". Participants found view pushing and peeking useful to quickly establish a shared view. Grouping up around a shared view is an effective way to gather feedback on writing, as it does not require reading the text aloud or losing one's position in the document.
322
+
323
+ "Misunderstandings and duplicated work were common and often unnoticed". Participants noted that our system supported conversational understanding, with position indicators being helpful for tracking discussions. While we could not make direct comparisons with our formative observations, participants indicated that position indicators aided them in assessing what others were working on, helping to avoid duplications.
324
+
325
+ "Collaboration tools infrequently used". Participants use Peek-at-you over 5 times on average, which compares favorably to the formative observations, where collaboration tools (e.g., jump/ follow) were not used. It's worth noting that the longer content in the evaluation task makes a direct comparison difficult. However, placing collaboration tools on video feeds may have also made them easier to access, therefore contributing to increased usage.
326
+
327
+ § 7.3 MANAGING DISTRACTIONS
328
+
329
+ Mixed-focus collaboration involves processing a lot of information, including video/audio communications, real-time artifact changes, and awareness widgets. Our system supplies real-time information, which some participants found distracting due to rapidly changing icons or overlays taking up screen space. However, our questionnaire did not show an overall increase in distractedness when using Peek-at-You, possibly due to distractions inherent to real-time collaboration overshadowing distractions related to our system. Alternatively, increased distractions from the system may have been balanced by a decrease in other distractions, such as improved conversational articulation or better leverage of interruption strategies [54].
330
+
331
+ Though a degree of distraction is inherent to mixed-focus collaboration, the design of collaborative systems involves tradeoffs between maintaining awareness and avoiding distractions [30]; the desired balance may depend on many factors including group size, task, artifact type, and roles. Because some participants in our evaluation cited distraction as a drawback, we suggest four design iterations that could reduce distractions. First, position indicators could use "calm design" [7] by displaying only a binary red/green status light until hovered and varying the active speaker notification [51] based on speaking and working activity (Figure 8, left). Second, shared views could be sized more precisely to manage screen space. Currently, our prototype sizes shared views based on the window aspect ratio of the sharer and viewer, but this may result in a larger than intended preview in some cases. Third, view pushing could incorporate a "consent" mechanism where shared views are small but expand if hovered (Figure 8, right). This approach may offer some of the benefits of continuous gestures like moving and resizing elements in spatial video chat systems [26,88-91], while still being compatible with a standard scrolling document interface. Fourth, a focus mode could be added, which would hide collaborative features, selectively present audio using roles or proximity, or even hide others' edits. Video overlay icons could signal which collaborators are in focus mode. This may also make the system more inclusive (multiple participants cited ADHD as a particular motivator for minimizing distractions) and support hybrid work that includes loosely coupled phases [60].
332
+
333
+ < g r a p h i c s >
334
+
335
+ Figure 8. Potential design iterations: (left) a calm design that uses binary status lights instead of icons and a color-coded outline instead of the active speaker popup; (right) a pushed view that uses a consent mechanism before appearing full size.
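+ As a sketch of the third iteration, a pushed view could arrive as a small thumbnail and expand only while hovered; the class names below are hypothetical and the styling is left to CSS:
+
+ ```typescript
+ // Hypothetical "consent" behavior for a pushed view element.
+ function attachConsentBehavior(sharedView: HTMLElement): void {
+   sharedView.classList.add("peek-thumbnail"); // small by default
+   sharedView.addEventListener("mouseenter", () => {
+     sharedView.classList.replace("peek-thumbnail", "peek-expanded");
+   });
+   sharedView.addEventListener("mouseleave", () => {
+     sharedView.classList.replace("peek-expanded", "peek-thumbnail");
+   });
+ }
+ ```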
336
+
337
+ § 7.4 COMPARING METHODS OF SHARING VIEWS
338
+
339
+ For peeking/pushing views, we implemented view sharing through video streaming of the user's view, with the option to navigate to their view by clicking the position icon. We relied on a video stream because the tab video was already being streamed for recording and deep integration is difficult with a closed-source application (Google Docs). However, using local rendering for view sharing in collaborative software would provide several benefits, such as bandwidth and quality improvements, increased accessibility, and making the multiple views editable. Regardless of rendering approach, integrating shared views can preserve privacy compared to general-purpose screen sharing, as it only shares content that collaborators already have access to [76].
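+ The stream reuse described above could look roughly like the following sketch, assuming standard WebRTC APIs (peer setup and renegotiation are omitted and assumed to exist):
+
+ ```typescript
+ // Sketch: the tab capture that already feeds recording is forwarded to a
+ // peer only while they peek; stopping a peek leaves recording unaffected.
+ async function captureTab(): Promise<MediaStream> {
+   // The user is prompted once, when joining the call, to share the tab.
+   return navigator.mediaDevices.getDisplayMedia({ video: true });
+ }
+
+ function startPeek(tabStream: MediaStream, peer: RTCPeerConnection): RTCRtpSender {
+   // Forward the existing capture; no second capture prompt is needed.
+   return peer.addTrack(tabStream.getVideoTracks()[0], tabStream);
+ }
+
+ function stopPeek(peer: RTCPeerConnection, sender: RTCRtpSender): void {
+   peer.removeTrack(sender);
+ }
+ ```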
340
+
341
+ Jumping to someone's view offers an alternative to temporary view sharing, but it can cause context loss for the person jumping. A possible solution is to blend jumping and peeking, as Gutwin et al. [32] did by holding the right mouse button to jump to a collaborator's view and releasing it to jump back. A "back" button could also be shown to aid within-document navigation.
342
+
343
+ § 7.5 SUPPORTING INTEGRATION OF GROUP CALLS AND COLLABORATIVE APPS
344
+
345
+ Currently, commercial apps are replacing traditional screensharing with embedded collaborative apps in group calls. For example, Google Docs now integrates video calls and Zoom allows third-party apps to integrate with the shared stage. To enable consistency between collaboration and communication apps, we argue that APIs for UI extensibility and data access are needed, e.g., for assigning an icon to be displayed on top of a participant's video feed or receiving notifications about the current speaker.
346
+
347
+ These APIs would allow for features like those in Peek-At-You and support other uses (e.g., selecting video feeds to show based on viewport proximity, call recordings linked to artifact edit histories, or displaying icons to help people understand others' emotions).
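+ As one hypothetical shape for such APIs (no commercial product exposes exactly this interface), consider:
+
+ ```typescript
+ // Hypothetical UI-extensibility and data-access API for a VMC app.
+ interface CallIntegrationAPI {
+   // UI extensibility: overlay a badge on a participant's video feed.
+   setParticipantBadge(participantId: string, icon: string, tooltip?: string): void;
+   clearParticipantBadge(participantId: string): void;
+
+   // Data access: observe conversational state.
+   onActiveSpeakerChanged(cb: (participantId: string | null) => void): void;
+   onParticipantsChanged(cb: (participantIds: string[]) => void): void;
+ }
+
+ // Example use: mirror a collaborator's document position onto their feed.
+ type Indicator = "same-place" | "above" | "below" | "another-tab";
+
+ function showPosition(api: CallIntegrationAPI, userId: string, state: Indicator) {
+   const icons: Record<Indicator, string> =
+     { "same-place": "🟢", "above": "⬆️", "below": "⬇️", "another-tab": "↗️" };
+   api.setParticipantBadge(userId, icons[state]);
+ }
+ ```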
348
+
349
+ § 7.6 GENERALIZING TO OTHER TASKS, GROUPS, AND ARTIFACTS
350
+
351
+ We designed with relaxed-WYSIWIS systems in mind, but focused on content creation using a document editor for prototyping and evaluation. Different types of systems would require adaptations to represent positional indicators. For example, a digital whiteboard with 2D navigation may need to represent "up, left, and zoomed out", while a presentation or interface design application may need to represent that a collaborator is on a different slide or screen. 3D applications may present even more challenges, but could leverage arrows [80] or a Viewcube [42].
352
+
353
+ Relaxed-WYSIWIS groupware may allow users to have different object formatting and representations [21], which makes establishing a shared view challenging for two reasons: "jumping" to another person's view incurs a significant loss of context and determining whether people are currently sharing a view may be difficult (e.g., if two people see the same table in a spreadsheet but have applied different data filters). Our view peeking and pushing features preserve context and guarantee identical object representation, which may be particularly helpful in these contexts.
354
+
355
+ Our system's collaborative features were designed to support a variety of tasks within the content creation process, including individual (reading), team-level (choosing a merger target), and small-group activities (generating investor statements and marketing materials). While other tasks may require different configurations, our design does not impose a specific ordering or structure for collaboration; therefore, while not yet tested, our system may be useful for other mixed-focus collaboration tasks such as brainstorming, decision making, or reviewing.
356
+
357
+ Our system could scale to larger groups, but stricter approaches for supporting subgroups may be needed (e.g., breakout rooms or audio filtering based on spatial positioning [89-91]). The integration of communication and collaboration leveraged by Peek-At-You could be helpful in these cases, such as using collaborators' proximity within a document or other artifact to select which video feeds or audio feeds to present, providing the most relevant awareness information.
358
+
359
+ § 8 LIMITATIONS & FUTURE WORK
360
+
361
+ The proposed system in this work is tailored to a specific context and may require adaptations for other contexts. While our experimental setup allowed us to recruit groups of a non-trivial size (29 participants in six groups), include a Video Chat Only condition, and recruit participants familiar with remote work, studying a single group size, task, and artifact type limits our ability to draw strong conclusions about the generalizability of our system. Future research should test the system in various contexts to evaluate its generalizability and effectiveness. Additionally, longer-term deployments of the system can help to understand how it can support sustained collaboration over time. Future work should also consider how experience affects system usage, as some participants found that our study's duration limited their ability to learn and leverage all the features. Finally, future work should further study how integrated tools can support hybrid asynchronous-synchronous collaboration.
362
+
363
+ § 9 CONCLUSION
364
+
365
+ In summary, we contribute to research in mixed-focus content creation in multiple ways. First, we build on existing understandings of mixed-focus collaboration and our formative observations of fully-remote collaboration. Second, we design Peek-At-You, a system of collaborative features that leverage understanding of conversation and collaborative actions to increase awareness, facilitate understanding, and support the transitions needed in mixed-focus collaboration. Finally, we evaluate the system in groups of five collaborators, demonstrating that it can foster the knowledge and actions we intended to support. By enhancing remote collaboration, we contribute to making the benefits of collaboration available for remote content creation.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/UlIJS3dcMi/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,225 @@
1
+ # A Non-Contact Heart Rate Measurement Method Resistant to Illumination Changes Based on Fast Wavelet Transform and Second-Order Blind Identification in Far-Field Environments
2
+
3
+ Rui Yuan*
+
+ Anhui Province Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China
16
+
17
+ Chao Zhang ${}^{ \dagger }$
18
+
19
+ Anhui Province Key Laboratory of
20
+
21
+ Multimodal Cognitive Computation,
22
+
23
+ School of Computer Science and
24
+
25
+ Technology
26
+
27
+ Anhui University
28
+
29
+ Hefei, China
30
+
31
+ ## Abstract
32
+
33
+ Heart rate (HR) is a key parameter for evaluating a person's physiological condition. In recent years, there has been much research on remote heart rate measurement. However, these methods are mostly designed for close-range scenarios, making them inapplicable in many settings. Remote photoplethysmography (rPPG) provides more possibilities for heart rate measurement in far-field environments. Moreover, the performance of heart rate measurement degrades significantly when the subject moves or the illumination changes. We propose an rPPG framework for heart rate detection that selects a larger region of interest (ROI) using feature point tracking in far-field environments. The combination of fast wavelet transform (FWT) and second-order blind identification (SOBI) is used to resist illumination interference and most of the motion interference, and singular spectrum analysis (SSA) is then used to resist residual motion interference. In addition, we collected a database of illumination changes in far-field environments and tested our framework with it. The results show that our method outperforms the previous methods compared.
34
+
35
+ Index Terms: Heart rate (HR), Far-field environments, Fast wavelet transform (FWT), Second-order blind identification (SOBI).
36
+
37
+ ## 1 INTRODUCTION
38
+
39
+ Heart rate is a key parameter for evaluating a person's physiological condition. Early symptoms of cardiovascular disease are not easy to detect and require specific heart rate monitoring equipment, such as the electrocardiogram (ECG), which must be in contact with the surface of the skin. If continuous long-term monitoring is required, this can cause inconvenience to the user or patient. In addition, ECG devices are often large and expensive, resulting in high measurement costs, which is not conducive to real-time monitoring of the user's physiological and psychological health. People urgently hope to be able to learn their cardiovascular physiological condition through safe and convenient means. Compared with contact-based devices, video-based non-contact heart rate monitoring has obvious advantages: it uses a consumer-level camera to sense the heart rate by capturing body surface videos, so it is very cheap and user friendly. The non-contact method overcomes the disadvantages of contact-based heart rate measurement and is therefore widely used in human-computer interaction, health monitoring, and other fields.
40
+
41
+ rPPG is a non-contact method for measuring heart rate. rPPG technology has broad application prospects, for example for neonates, burn patients, or long-term monitoring, where measurement does not require a far distance, and near-distance rPPG technology has already achieved good results in these scenes. However, in some environments where heart rate measurement needs to be performed at a considerable distance, most current rPPG applications are unable to meet the requirements. For example, in scenarios such as court hearings, live sports events, and online interviews, it is necessary to obtain the heart rate of individuals over a certain distance. Previous research has shown the feasibility of video-based heart rate measurement, but in real-world environments, changes of illumination and human motion can significantly affect measurement results. It is difficult to avoid illumination interference during long-term heart rate monitoring, as changes in illumination include various forms of noise caused by environmental changes, such as flickering indoor lighting or changes in natural outdoor light. Additionally, it is difficult to avoid interference from human motion, which includes both rigid movements such as head tilting and non-rigid movements such as blinking and smiling. In this paper, we propose a framework that can effectively resist these interferences in a far-field environment. Furthermore, we collected a database of illumination changes in far-field environments and used this database to test our algorithm.
42
+
43
+ ## 2 RELATED WORKS
44
+
45
+ rPPG is a non-contact method for measuring heart rate from a distance using a camera. The basic rPPG process involves the following steps: first, capturing a video of the subject's face using a camera with sufficient resolution and frame rate; then, selecting an ROI on the subject's face, typically around the forehead or cheek; after that, extracting the blood volume pulse (BVP) signal from the selected ROI using various signal processing techniques and processing the BVP signal to remove any noise or artifacts; finally, estimating the heart rate from the processed BVP signal.
46
+
47
+ Verkruysse et al. [12] first proposed the use of a regular high-definition camera with rPPG technology to measure heart rate. Under ideal conditions, they used the G-BVP method to estimate heart rate and achieved relatively accurate results. However, in practical scenarios, rPPG technology struggles to extract accurate BVP waveforms due to changes of illumination and significant motion. To address these issues, Poh et al. [9] first proposed a method based on independent component analysis (ICA). They believed that the R, G, and B channel signals from the imaging device were mixed with BVP signals and noise signals and that ICA separation methods could isolate the BVP signals from the three channels. The results showed that the ICA separation method provided more accurate results than using the green channel alone. Lewandowska et al. [7] used principal component analysis (PCA) to select the strongest periodic signals as BVP signals, resulting in accurate heart rate measurements. Li et al. [8] proposed a method based on face tracking and normalized least mean square adaptive filtering to combat the effects of illumination and motion. Cheng et al. [3] used joint blind source separation and ensemble empirical mode decomposition to analyze color signals from multiple facial subregions to resist the effects of illumination changes. In addition, there are also some model-based methods [4] [14], which assume that motion artifacts can be eliminated by linear combinations of the R, G, and B channels. When detecting heart rate based on rPPG, reliable region of interest (ROI) detection and tracking are key steps. By removing areas on the face that are more susceptible to motion or change and using local motion compensation methods [9], the accuracy of heart rate measurement can be ensured.
48
+
49
+ ---
50
+
51
+ *e-mail: E21301281@stu.ahu.edu.cn
52
+
53
+ ${}^{ \dagger }$ e-mail: iiphci_ahu@163.com
54
+
55
+ ---
56
+
57
+ ![01963dff-6687-730a-ac1c-e8dcde8f06c2_1_147_145_723_463_0.jpg](images/01963dff-6687-730a-ac1c-e8dcde8f06c2_1_147_145_723_463_0.jpg)
58
+
59
+ Figure 1: Proposed framework for heart rate measurement.
60
+
61
+ The above methods aim to mitigate the effects of illumination changes and human motion on heart rate measurement as much as possible. However, their experiments were carried out with the camera very close to the subjects, which imposes many limitations on practical application scenarios. Al-Naji et al. [1] used a framework combining video magnification and blind source separation to reduce the impact of illumination changes on heart rate measurement. They increased the distance of heart rate measurement and provided more space for the application of rPPG. Since the facial image is smaller in a far-field environment, they selected the entire face as the ROI. To avoid interference from non-rigid motion on the measurement results, they removed the eye region as accurately as possible. However, their experiments only involved six individuals, which may be insufficient data to demonstrate the effectiveness of the measurement. In addition, the ROI they selected includes some background outside the face, which may introduce non-physiological signals during video signal processing, resulting in inaccurate heart rate measurement. Moreover, the region they removed around the eyes is too large, which further amplifies the disadvantage of the small facial image in the far-field environment, leaving an ROI that is too small for heart rate measurement and producing inaccurate results.
62
+
63
+ ## 3 A FRAMEWORK FOR RESISTING ILLUMINATION AND MOTION INTERFERENCE IN FAR-FIELD ENVIRONMENTS
64
+
65
+ As there is no publicly available database for illumination changes in far-field environments, we collected a database specifically for this scenario. Additionally, we propose a framework that can resist illumination and slight motion interference in far-field environments. Our framework consists of three steps, as shown in Fig. 1. In the first step, we obtain the ROI that contains the raw physiological signal of the person. We use the Viola-Jones (VJ) algorithm [13] to detect the face in the first frame and then use the Kanade-Lucas-Tomasi (KLT) algorithm [11] to track the position of the ROI. In each frame, we convert the ROI image into an RGB three-channel signal by spatial averaging. The purpose of the second step is to reduce the interference caused by changes in illumination. We perform fast wavelet transform (FWT) [15] on the three-channel signal, preprocessing it to remove some of the illumination interference, and then use the SOBI algorithm [2] to process the signal to remove both illumination and motion interference. The purpose of the third step is to filter out residual motion interference. We perform singular spectrum analysis (SSA) [5] on the processed signal, which can resist motion interference to some extent. Then, we estimate the heart rate using the Fourier transform. The details of each step are explained in the following sections.
66
+
67
+ ### 3.1 ROI Detection and Tracking
68
+
69
+ ![01963dff-6687-730a-ac1c-e8dcde8f06c2_1_944_700_683_869_0.jpg](images/01963dff-6687-730a-ac1c-e8dcde8f06c2_1_944_700_683_869_0.jpg)
70
+
71
+ Figure 2: Video frame. (a) A frame from the video, (b) Facial region image, (c) Image with generally selected ROI, (d) Image with feature point tracked ROI.
72
+
73
+ The selection of the face region of interest (ROI) is a crucial step in rPPG heart rate measurement. Figure 2(a) shows a frame from one of the videos in our self-collected dataset, captured under far-field conditions with illumination changes. Fig. 2(b) shows the facial region image without a selected ROI. Typically, as shown in Fig. 2(c), the region below the eyes and above the mouth is selected as the ROI because this area is not susceptible to motion interference and contains dense capillaries that provide the required signal, making it a good choice. However, if a fixed box in the video frame is used to represent the ROI, the box may drift away from the original ROI, losing the region from which physiological signals are obtained. In a far-field environment, since the facial area is relatively small, our goal is to include as much of the facial area as possible in the ROI while excluding the eye region, which produces non-rigid motion and interferes with heart rate measurement. As shown in Fig. 2(d), we used the Viola-Jones face detector [13] to detect the face in the first frame, which provides a rectangular box containing the approximate position of the face. We used a rough facial template to locate the skin areas above and below the eyes and remove the eye region. To remove the background, we detected feature points [10] using the minimum eigenvalue algorithm and selected suitable facial landmarks within the rectangular box to include as much of the facial area as possible in the ROI. Then, we used the Kanade-Lucas-Tomasi (KLT) technique [11] to track the face in each frame of the video. By tracking the feature points across consecutive frames, we adjusted the spatial position and size of the ROI in 2D and obtained the raw RGB signal by spatially averaging the pixel intensity values in each frame's ROI.
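+ To make this pipeline concrete, the following is a minimal sketch of the detect-then-track step using OpenCV. The video path, detector parameters, and the simple bounding-box refit around the tracked points are illustrative assumptions rather than our exact implementation; in particular, the eye-region removal via the facial template is omitted here.
+
+ ```python
+ import cv2
+ import numpy as np
+
+ cap = cv2.VideoCapture("far_field_video.mp4")  # hypothetical input file
+ ok, frame = cap.read()
+ prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
+
+ # Viola-Jones face detection on the first frame only (assumes a face is found)
+ cascade = cv2.CascadeClassifier(
+     cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
+ x, y, w, h = cascade.detectMultiScale(prev_gray, 1.1, 5)[0]
+
+ # Minimum-eigenvalue feature points, restricted to the detected face box
+ mask = np.zeros_like(prev_gray)
+ mask[y:y + h, x:x + w] = 255
+ pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01,
+                               minDistance=5, mask=mask,
+                               useHarrisDetector=False)
+
+ rgb_trace = []  # one spatially averaged pixel triple per frame (OpenCV is BGR)
+ while True:
+     ok, frame = cap.read()
+     if not ok:
+         break
+     gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
+     # KLT: track the feature points from the previous frame into this one
+     new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
+     good = new_pts[status.ravel() == 1].reshape(-1, 2)
+     # Refit the ROI box to the tracked points (crude 2D position/size update)
+     x0, y0 = good.min(axis=0).astype(int)
+     x1, y1 = good.max(axis=0).astype(int)
+     roi = frame[y0:y1, x0:x1]
+     rgb_trace.append(roi.reshape(-1, 3).mean(axis=0))
+     prev_gray, pts = gray, good.reshape(-1, 1, 2)
+ ```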
74
+
75
+ ### 3.2 Illumination Rectification
76
+
77
+ In this section, we aim to remove illumination changes as much as possible. To achieve this, we use wavelet transform to process the RGB signal. Wavelet transform is a time-frequency localized analysis method, whose window area is fixed but time and frequency windows are variable. Therefore, wavelet transform has the characteristics of multi-resolution analysis and can represent the local features of signals in both time and frequency domains. In simple terms, wavelet transform can decompose a signal into components of different frequencies, in order to better understand the temporal and spectral characteristics of the signal. In the low-frequency part, wavelet transform has higher spectral resolution and lower time resolution, while in the high-frequency part, it has higher time resolution and lower spectral resolution. These two characteristics are consistent with the characteristics of slow changes in low-frequency signals and rapid changes in high-frequency signals, making wavelet transform adaptive to different types and frequencies of signals. Through fast wavelet transform, we can decompose the influence of illumination changes on the RGB signal into different frequency components, and select appropriate components for filtering to extract the heart rate signal and remove noise. This process is similar to passing the signal through a band-pass filter, retaining only the signals within the target frequency range and filtering out other frequency components. Finally, the filtered components are combined into a clean heart rate signal, thereby removing the interference of illumination changes on heart rate detection. Therefore, the fast wavelet transform algorithm can be regarded as a filter. For a 1D input signal $f\left( t\right)$ , its decomposition formula is as follows:
78
+
79
+ $$
80
+ f\left( t\right) = {A}_{n} + {D}_{n} + {D}_{n - 1} + \ldots + {D}_{1} \tag{1}
81
+ $$
82
+
83
+ In formula (1), $n$ represents the number of decomposition levels of the signal. Through wavelet decomposition, we can divide the signal into low-frequency and high-frequency parts, represented by $A$ and $D$ , respectively [15]. In order to better process the signal, we use filters to separate the high-frequency and low-frequency components. Then, we recombine them and perform dimensionality reduction to integrate local information (low-frequency) and spatial information (high-frequency). The proposed method adopts 4-level wavelet decomposition and chooses the db3 wavelet, which has good time-domain and frequency-domain characteristics. The db3 wavelet consists of three scales of wavelet functions and three scales of wavelet packet functions, which can provide a higher signal compression ratio and better signal reconstruction quality. In addition, the db3 wavelet performs well in both low-frequency and high-frequency decomposition and can effectively extract heart rate signals while removing noise and interference. Therefore, choosing the db3 wavelet for one-dimensional discrete wavelet transformation can improve the measurement accuracy. Considering that the frequency range of heart rate is completely covered by the frequency range of ${D}_{4}$ , we set the decomposition coefficients of ${D}_{1},{D}_{2},{D}_{3}$ , and ${A}_{4}$ to zero and retain the decomposition coefficient of ${D}_{4}$ . Then, we reconstruct from the coefficients of ${D}_{1},{D}_{2},{D}_{3},{D}_{4}$ , and ${A}_{4}$ to obtain the preprocessed signal. This preprocessing better highlights the characteristics of the heart rate signal and improves signal quality. By further analyzing the preprocessed signal, we can more accurately measure the heart rate.
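+ As a concrete illustration, below is a minimal sketch of this prefilter using the PyWavelets library; the function name is ours, and the frame rate matches our recording setup. At a 30 fps sampling rate, the level-4 detail band ${D}_{4}$ spans roughly fs/32 to fs/16, i.e., about 0.94-1.88 Hz (56-113 bpm), which is why it covers the heart-rate band.
+
+ ```python
+ import numpy as np
+ import pywt
+
+ def fwt_prefilter(channel: np.ndarray) -> np.ndarray:
+     """4-level db3 decomposition, keeping only the D4 detail band.
+
+     pywt.wavedec returns [A4, D4, D3, D2, D1]; zeroing everything except
+     D4 and reconstructing acts like a band-pass filter around the
+     heart-rate band (about 0.94-1.88 Hz at 30 fps).
+     """
+     coeffs = pywt.wavedec(channel, "db3", level=4)
+     coeffs = [c if i == 1 else np.zeros_like(c) for i, c in enumerate(coeffs)]
+     return pywt.waverec(coeffs, "db3")[: len(channel)]
+
+ # Applied independently to each spatially averaged color channel:
+ # r_f, g_f, b_f = (fwt_prefilter(c) for c in (r_raw, g_raw, b_raw))
+ ```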
84
+
85
+ ![01963dff-6687-730a-ac1c-e8dcde8f06c2_2_946_379_676_639_0.jpg](images/01963dff-6687-730a-ac1c-e8dcde8f06c2_2_946_379_676_639_0.jpg)
86
+
87
+ Figure 3: Raw RGB signals and SOBI processed signal. (a) Raw signal of red channel, (b) Raw signal of green channel, (c) Raw signal of blue channel, (d) SOBI processed signal.
88
+
89
+ The blind source separation algorithm is used to further denoise the preprocessed signal, which can remove most of the lighting and motion noise. We use SOBI for denoising, a type of ICA (Independent Component Analysis) algorithm that differs from other ICA algorithms in that it uses second-order statistical information to reconstruct the source signals. The SOBI method can deal with non-Gaussian noise better, so it is more suitable for BVP signal processing; therefore, under illumination and motion interference, the SOBI method performs better than other ICA methods. The observed signal $\mathbf{X}$ is obtained by linearly mixing the source signals $\mathbf{S}$ through the mixing matrix $\mathbf{A}$ . The source signals can be represented as $\mathbf{S} = {\left\lbrack {s}_{1},{s}_{2},{s}_{3}\right\rbrack }^{\mathrm{T}}$ , and the mathematical expression is $\mathbf{X} = \mathbf{{AS}}$ , written component-wise in formula (2). Both the source signals $\mathbf{S}$ and the mixing matrix $\mathbf{A}$ are unknown, and formula (3) can be used to separate the observed signal $\mathbf{X}$ , where $\mathbf{W}$ is the separation matrix, represented as $\mathbf{W} = {\mathbf{A}}^{-1}$ , and $\mathbf{Y} = {\left\lbrack {y}_{1},{y}_{2},{y}_{3}\right\rbrack }^{\mathrm{T}}$ is the estimate of the source signals $\mathbf{S}$ . $\mathbf{W}$ is randomly initialized and continuously optimized until $\mathbf{Y}$ is close to $\mathbf{S}$ , recovering the desired source signals.
90
+
91
+ $$
92
+ {x}_{i} = \mathop{\sum }\limits_{{j = 1}}^{3}{a}_{ij}{s}_{j}\left( {1 \leq i \leq 3}\right) \tag{2}
93
+ $$
94
+
95
+ $$
96
+ {y}_{i} = \mathop{\sum }\limits_{{j = 1}}^{3}{w}_{ij}{x}_{j}\left( {1 \leq i \leq 3}\right) \tag{3}
97
+ $$
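+ For illustration, the sketch below implements a simplified single-lag variant of second-order separation (the AMUSE algorithm); full SOBI jointly diagonalizes whitened covariance matrices at several time lags, but the principle is the same: whitening followed by an orthogonal rotation derived from lagged second-order statistics. Function and parameter names are ours.
+
+ ```python
+ import numpy as np
+
+ def second_order_separation(X: np.ndarray, lag: int = 1) -> np.ndarray:
+     """Separate X (3 channels x n samples) using second-order statistics.
+
+     A single-lag simplification of SOBI (the AMUSE method).
+     """
+     X = X - X.mean(axis=1, keepdims=True)
+     # Whitening: Z has (approximately) identity covariance
+     d, E = np.linalg.eigh(np.cov(X))
+     W_white = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
+     Z = W_white @ X
+     # Symmetrized covariance of the whitened data at the chosen lag
+     C = Z[:, lag:] @ Z[:, :-lag].T / (Z.shape[1] - lag)
+     C = 0.5 * (C + C.T)
+     # Its eigenvectors give the orthogonal unmixing rotation V, so the
+     # source estimate is Y = V^T Z (the W of formula (3) is V^T @ W_white)
+     _, V = np.linalg.eigh(C)
+     return V.T @ Z
+
+ # Y = second_order_separation(np.vstack([r_f, g_f, b_f]))
+ # One component is then taken as the BVP estimate, e.g. the one with the
+ # strongest spectral peak in the heart-rate band (a heuristic we assume here).
+ ```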
98
+
99
+ ### 3.3 Motion Elimination
100
+
101
+ We use SSA to remove residual motion noise from the BVP signal. SSA is an effective method for processing nonlinear time series data, which can decompose a time series into meaningful components without prior knowledge. It can directly extract the artifact spectrum from the BVP signal of the facial ROI, assuming that the facial ROI contains all the interference information and that the noise artifact is unrelated to the pulse signal. By applying SSA, dominant noise artifacts can be extracted, and we find that the effect of noise artifacts on all RGB channels is almost the same. To eliminate residual noise artifacts in the extracted BVP signal, SSA can be applied to estimate the spectrum and obtain a BVP signal without noise artifacts. The modal decomposition computed by SSA is as follows. Let $x\left( t\right)$ be the normalized BVP signal at time $t$ , and define its trajectory matrix as follows:
102
+
103
+ $$
104
+ X = \left\lbrack \begin{matrix} x\left( 1\right) & x\left( 2\right) & \ldots & x\left( m\right) \\ x\left( 2\right) & x\left( 3\right) & \ldots & x\left( {m + 1}\right) \\ \vdots & \vdots & \ddots & \vdots \\ x\left( {n - m + 1}\right) & x\left( {n - m + 2}\right) & \ldots & x\left( n\right) \end{matrix}\right\rbrack \tag{4}
105
+ $$
106
+
107
+ In this method, the trajectory matrix is defined by the window length $m$ and the total number of data points $n$ . Typically, the window length is chosen as one quarter of the data length; if the data is periodic, it can also be chosen as one quarter of the longest period in the data. Singular value decomposition of the trajectory matrix $\mathbf{X}$ yields three matrices $\mathbf{U},\mathbf{W}$ , and $\mathbf{V}$ , expressed as:
108
+
109
+ $$
110
+ X = {UW}{V}^{T} \tag{5}
111
+ $$
112
+
113
+ where $\mathbf{U}$ and $\mathbf{V}$ are orthogonal matrices, and $\mathbf{W}$ is a diagonal matrix whose diagonal entries are the singular values ${\lambda }_{i}$ . With $\mathbf{U} = \left( {{u}_{1},{u}_{2},\ldots ,{u}_{r}}\right)$ , $\mathbf{V} = \left( {{v}_{1},{v}_{2},\ldots ,{v}_{r}}\right)$ , and $r \leq \min \left( {m, n}\right)$ , $\mathbf{X}$ can be decomposed into the sum of several matrices as follows:
114
+
115
+ $$
116
+ X = {X}_{1} + {X}_{2} + \ldots + {X}_{r} \tag{6}
117
+ $$
118
+
119
+ $$
120
+ {X}_{i} = \sqrt{{\lambda }_{i}}{U}_{i}{V}_{i}^{T}\left( {i = 1,2,\ldots , r}\right) \tag{7}
121
+ $$
122
+
123
+ The singular values of these matrices decrease with increasing subscript, indicating that modes with smaller mode numbers have larger variances; therefore, modes with smaller subscripts contribute more to the original signal. By eliminating components with relatively small partial correlation, the remaining components can be reconstructed to obtain the signal without the irrelevant parts. Using the BVP signal reconstructed from $\mathbf{X}$ , a robust heart rate estimate can be obtained. We use a weight-correlated method to set the threshold, ensuring that noise can be effectively eliminated without losing the original valid signal. We then perform a fast Fourier transform (FFT) on the BVP signal to convert it to the frequency domain and analyze its power spectral density (PSD). Since the heart rate signal appears as a distinct peak in the frequency spectrum, we can approximate the heart rate value ${f}_{HR}$ by taking the frequency with the maximum spectral density:
124
+
125
+ $$
126
+ {f}_{HR} = \operatorname{argmax}\left| {W\left( f\right) }\right| \tag{8}
127
+ $$
128
+
129
+ where $W\left( f\right)$ is the power spectral density of the BVP signal. Finally, the estimated heart rate is ${HR} = {f}_{HR} \times {60}$ .
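+ A compact sketch of this final stage follows, assuming only NumPy. The window length of one quarter of the data follows the text above; the fixed number of retained components and the 0.7-3 Hz search band stand in for the weight-correlated threshold described above and are illustrative assumptions.
+
+ ```python
+ import numpy as np
+
+ def ssa_denoise(x: np.ndarray, keep: int = 2) -> np.ndarray:
+     """SSA: embed into the trajectory matrix of formula (4), apply the SVD
+     of (5)-(7), keep the dominant components, and diagonal-average back."""
+     n = len(x)
+     m = n // 4                                          # window length ~ n/4
+     T = np.lib.stride_tricks.sliding_window_view(x, m)  # (n-m+1, m) Hankel
+     U, s, Vt = np.linalg.svd(T, full_matrices=False)
+     T_hat = (U[:, :keep] * s[:keep]) @ Vt[:keep]        # low-rank reconstruction
+     out, cnt = np.zeros(n), np.zeros(n)
+     for i, row in enumerate(T_hat):                     # diagonal averaging
+         out[i:i + m] += row
+         cnt[i:i + m] += 1
+     return out / cnt
+
+ def estimate_hr(bvp: np.ndarray, fs: float = 30.0) -> float:
+     """Formula (8): HR = 60 * argmax_f |W(f)|, searched in a plausible band."""
+     f = np.fft.rfftfreq(len(bvp), d=1.0 / fs)
+     psd = np.abs(np.fft.rfft(bvp - bvp.mean())) ** 2
+     band = (f >= 0.7) & (f <= 3.0)                      # 42-180 bpm (assumed)
+     return 60.0 * f[band][np.argmax(psd[band])]
+
+ # hr_bpm = estimate_hr(ssa_denoise(bvp_signal), fs=30.0)
+ ```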
130
+
131
+ ## 4 EXPERIMENTAL SETUP AND RESULTS
132
+
133
+ Because the experiment was conducted in a far-field environment, there are currently no publicly available datasets to use. Therefore, we collected our own data from 13 subjects (10 males and 3 females), with one 70-second video per subject. In this experiment, we used a network camera (Guke, G06-18X) to record the video. The camera was placed 5 meters away from the subjects, with a frame rate of ${30}\mathrm{{fps}}$ and a resolution of ${640} \times {480}\mathrm{{px}}$ . To produce illumination changes, we placed an 18 W white LED light panel (FengChuan Ltd., Shenzhen, China) at a distance of 1.5 meters from the subjects. To obtain the ground truth of each subject's heart rate, we used a finger clip oximeter (ContecMedice, CMS50E) to measure the photoplethysmogram (PPG) on the subjects' fingers. The scene of the experiment is shown in Fig. 4.
134
+
135
+ ![01963dff-6687-730a-ac1c-e8dcde8f06c2_3_923_151_720_340_0.jpg](images/01963dff-6687-730a-ac1c-e8dcde8f06c2_3_923_151_720_340_0.jpg)
136
+
137
+ Figure 4: The scene diagram of the experiment.
138
+
139
+ ![01963dff-6687-730a-ac1c-e8dcde8f06c2_3_924_574_718_384_0.jpg](images/01963dff-6687-730a-ac1c-e8dcde8f06c2_3_924_574_718_384_0.jpg)
140
+
141
+ Figure 5: Changes of heart rate at different times between the device, SOBI algorithm and the proposed algorithm.
142
+
143
+ ## 5 RESULT ANALYSIS
144
+
145
+ In this section, we used different denoising methods to process the raw signal. To evaluate the performance of the different denoising methods, we compared the heart rate obtained after denoising with the actual heart rate measured by the finger clip oximeter. To assess the accuracy of heart rate measurement, we used two metrics: mean absolute error (MAE) and root mean square error (RMSE), calculated as follows:
146
+
147
+ $$
148
+ {RMSE} = \sqrt{\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{\left( {h}_{i}\left( x\right) - {y}_{i}\right) }^{2}} \tag{9}
149
+ $$
150
+
151
+ $$
152
+ {MAE} = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\left| {{h}_{i}\left( x\right) - {y}_{i}}\right| \tag{10}
153
+ $$
154
+
155
+ where $n$ is the number of measurements, ${y}_{i}$ represents the true (ground truth) heart rate value, and ${h}_{i}\left( x\right)$ represents the HR estimated by the method. The smaller the values of RMSE and MAE, the better the denoising performance. By computing these metrics, we can compare the strengths and weaknesses of different denoising methods and identify the one that provides the most accurate heart rate measurements.
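+ For completeness, a direct NumPy rendering of these two metrics (the helper name is ours):
+
+ ```python
+ import numpy as np
+
+ def rmse_mae(h, y):
+     """RMSE (formula 9) and MAE (formula 10) between estimates h and ground truth y."""
+     h, y = np.asarray(h, float), np.asarray(y, float)
+     return np.sqrt(np.mean((h - y) ** 2)), np.mean(np.abs(h - y))
+ ```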
156
+
157
+ We took an example video from our own collected database and analyzed, over time, the heart rate obtained from the standard PPG signal collected by the CMS50E device, the heart rate obtained using the SOBI denoising algorithm, and the heart rate obtained by our proposed method, as shown in Fig. 5. In Fig. 5, $H{R}_{ppg}$ represents the ground truth HR collected by the instrument, $H{R}_{ours}$ is the heart rate obtained using our proposed method, and $H{R}_{sobi}$ is the heart rate obtained using the SOBI algorithm. It can be seen that in the scenario with changing illumination, the heart rate obtained directly with the SOBI algorithm is not particularly accurate; in contrast, our proposed method fits the instrument-measured heart rate more closely.
158
+
159
+ Table 1: The RMSE and MAE values of generally selected ROI under different methods.
160
+
161
+ <table><tr><td>method</td><td>G-BVP</td><td>SOBI</td><td>OURS</td></tr><tr><td>RMSE(bpm)</td><td>18.23</td><td>13.71</td><td>9.52</td></tr><tr><td>MAE(bpm)</td><td>16.17</td><td>11.74</td><td>7.76</td></tr></table>
162
+
163
+ Table 2: The RMSE and MAE values of feature point tracked ROI under different methods.
164
+
165
+ <table><tr><td>method</td><td>G-BVP</td><td>SOBI</td><td>OURS</td></tr><tr><td>RMSE(bpm)</td><td>18.05</td><td>12.96</td><td>8.9</td></tr><tr><td>MAE(bpm)</td><td>15.95</td><td>10.62</td><td>7.13</td></tr></table>
166
+
167
+ We compared generally selected ROIs with ROIs defined by feature point tracking, using several metrics. To present the results more intuitively, Table 1 and Table 2 list their RMSE and MAE values. From Table 1 and Table 2, the difference between the generally selected ROI and the feature-point-tracked ROI is not very large, but the feature point tracking method is still slightly better than the general selection method. In the experiment, the RMSE of the G-BVP method with general selection and feature point tracking were ${18.23}\mathrm{{bpm}}$ and ${18.05}\mathrm{{bpm}}$ , respectively, and their MAE were 16.17 bpm and 15.95 bpm, respectively. The RMSE of the SOBI algorithm were 13.71 bpm and 12.96 bpm, respectively, and their MAE were 11.74 bpm and ${10.62}\mathrm{{bpm}}$ , respectively. The proposed method had RMSE of ${9.52}\mathrm{{bpm}}$ and ${8.9}\mathrm{{bpm}}$ , respectively, and MAE of 7.76 bpm and 7.13 bpm, respectively. Feature point tracking thus yields smaller errors: it automatically identifies the position of the ROI and adaptively tracks it, coping better with interference from facial expressions and other factors and producing more accurate heart rate measurements.
168
+
169
+ To verify the performance of the method, we estimated heart rate on the videos in our self-collected database using several typical methods: G-BVP, FastICA [6], POS, CHROM, and SOBI, and compared them with the proposed method. Fig. 6 shows the heart rate range plot drawn from the results of the different methods. The red line represents the average value, the blue boxes represent the distribution range of most values, and the red dots represent individual outliers. Referring to the PPG values, it can be seen that our proposed method is basically consistent with the range of PPG, while other methods show large offsets in their value ranges or more outlier points, and their results are not accurate enough. At the same time, the average values of the POS algorithm and our proposed method are basically consistent with the values measured by the instrument.
170
+
171
+ ![01963dff-6687-730a-ac1c-e8dcde8f06c2_4_153_1636_713_407_0.jpg](images/01963dff-6687-730a-ac1c-e8dcde8f06c2_4_153_1636_713_407_0.jpg)
172
+
173
+ Figure 6: Heart rate ranges and outlier distributions of different methods.
174
+
175
+ ![01963dff-6687-730a-ac1c-e8dcde8f06c2_4_941_179_659_334_0.jpg](images/01963dff-6687-730a-ac1c-e8dcde8f06c2_4_941_179_659_334_0.jpg)
176
+
177
+ Figure 7: RMSE and MAE of various classical methods and proposed methods.
178
+
179
+ It can be seen from Fig. 7 that the proposed method has the smallest RMSE and MAE, while the RMSE and MAE obtained by the other methods are comparatively large. The POS algorithm performs best among the other methods, followed by SOBI, CHROM, FastICA, and G-BVP. The RMSE of our proposed method is ${8.9}\mathrm{{bpm}}$ and the MAE is ${7.13}\mathrm{{bpm}}$ , which also indicates that the proposed method has strong resistance to changes in illumination.
180
+
181
+ For remote measurement in the far field with changing illumination, the statistics of all remote measurement methods under the Bland-Altman analysis are shown in Fig. 8. For G-BVP, as shown in Fig. 8(a), the mean bias is -10 beats/min with limits of agreement from -40.32 to +20.31 beats/min. For FastICA, as shown in Fig. 8(b), the mean bias is -12.16 beats/min with limits of agreement from -30.9 to +14.1 beats/min. For POS, as shown in Fig. 8(c), the mean bias is -1.67 beats/min with limits of agreement from -26.9 to +23.57 beats/min. For CHROM, as shown in Fig. 8(d), the mean bias is -3.53 beats/min with limits of agreement from -34.77 to +27.72 beats/min. For SOBI, as shown in Fig. 8(e), the mean bias is -8.14 beats/min with limits of agreement from -30.9 to +14.1 beats/min. Using our proposed framework, as shown in Fig. 8(f), the mean bias is -3.17 beats/min with limits of agreement from -18.82 to +12.49 beats/min. Based on these statistics, POS, CHROM, and our proposed method have the smallest mean biases, and among them our proposed method has the smallest standard deviation and almost no outliers beyond the limits of agreement. This indicates that our proposed method performs well in remote heart rate measurement.
182
+
183
+ ## 6 CONCLUSION
184
+
185
+ Our research not only extends the distance over which heart rate can be measured, but also proposes a new framework for remote heart rate measurement. In response to the performance degradation of previous face-video remote heart rate measurement methods under environmental illumination changes and subject movement, we propose a framework consisting of three main stages. First, we use the VJ algorithm to accurately locate the facial ROI and address interference caused by rigid head movement, using KLT tracking in every frame to further reduce motion interference. Second, we use FWT to remove initial illumination interference and then the SOBI algorithm to remove most of the remaining noise. Finally, we use the SSA method to remove residual motion artifact noise. We conducted extensive experimental evaluations of this method and compared it with reference measurements. The results show that in a far-field environment, this framework can accurately measure heart rate under illumination interference. We evaluated the system using multiple video data sources and found that it exhibits strong consistency, high correlation, and low noise levels. Moreover, in complex and changing situations, the system's results are better than those of traditional methods such as the G-BVP, FastICA, POS, CHROM, and SOBI algorithms. Our results have important implications for the development of remote heart rate measurement technology: our method not only extends the measurement distance, but also reduces the interference of environmental illumination changes and subject movement on the measurement results, thereby improving the practicality and reliability of this technology and expanding the application scenarios of rPPG. We believe that in the future, this technology will be widely used to provide more convenient and accurate monitoring for people's health.
186
+
187
+ ![01963dff-6687-730a-ac1c-e8dcde8f06c2_5_169_172_670_1107_0.jpg](images/01963dff-6687-730a-ac1c-e8dcde8f06c2_5_169_172_670_1107_0.jpg)
188
+
189
+ Figure 8: Bland-Altman plots. (a) G-BVP method, (b) FastICA method, (c) POS method, (d) CHROM method, (e) SOBI method, (f) Ours method.
190
+
191
+ ## ACKNOWLEDGMENTS
192
+
193
+ I would like to express my sincere gratitude to my supervisor, Chao Zhang, for his invaluable guidance, support and encouragement throughout my research work. His extensive knowledge, critical insights and constructive feedback have been instrumental in shaping my research and helping me achieve my academic goals. I am also deeply grateful to him for his patience, kindness and understanding, which have made my research journey a truly enriching and rewarding experience. I will always cherish the lessons I have learned from him and the memories we have shared together. Thank you, Chao Zhang, for everything you have done for me.
194
+
195
+ ## REFERENCES
196
+
197
+ [1] A. Al-Naji and J. Chahl. Remote optical cardiopulmonary signal extraction with noise artifact removal, multiple subject detection & long-distance. IEEE Access, 6:11573-11595, 2018.
198
+
199
+ [2] A. Belouchrani, K. Abed-Meraim, J.-F. Cardoso, and E. Moulines. A blind source separation technique using second-order statistics. IEEE Transactions on signal processing, 45(2):434-444, 1997.
200
+
201
+ [3] J. Cheng, X. Chen, L. Xu, and Z. J. Wang. Illumination variation-resistant video-based heart rate measurement using joint blind source separation and ensemble empirical mode decomposition. IEEE journal of biomedical and health informatics, 21(5):1422-1433, 2016.
202
+
203
+ [4] G. De Haan and V. Jeanne. Robust pulse rate from chrominance-based rppg. IEEE Transactions on Biomedical Engineering, 60(10):2878- 2886, 2013.
204
+
205
+ [5] J. B. Elsner and A. A. Tsonis. Singular spectrum analysis: a new tool in time series analysis. Springer Science & Business Media, 1996.
206
+
207
+ [6] A. Hyvärinen and E. Oja. Independent component analysis: algorithms and applications. Neural networks, 13(4-5):411-430, 2000.
208
+
209
+ [7] M. Lewandowska, J. Rumiński, T. Kocejko, and J. Nowak. Measuring pulse rate with a webcam: a non-contact method for evaluating cardiac activity. In 2011 Federated Conference on Computer Science and Information Systems (FedCSIS), pp. 405-410. IEEE, 2011.
210
+
211
+ [8] X. Li, J. Chen, G. Zhao, and M. Pietikainen. Remote heart rate measurement from face videos under realistic situations. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4264-4271, 2014.
212
+
213
+ [9] M.-Z. Poh, D. J. McDuff, and R. W. Picard. Non-contact, automated cardiac pulse measurements using video imaging and blind source separation. Optics Express, 18(10):10762-10774, 2010.
214
+
215
+ [10] J. Shi et al. Good features to track. In 1994 Proceedings of IEEE conference on computer vision and pattern recognition, pp. 593-600. IEEE, 1994.
216
+
217
+ [11] C. Tomasi and T. Kanade. Detection and tracking of point features. Int J Comput Vis, 9:137-154, 1991.
218
+
219
+ [12] W. Verkruysse, L. O. Svaasand, and J. S. Nelson. Remote plethysmographic imaging using ambient light. Optics express, 16(26):21434, 2008.
220
+
221
+ [13] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. 1, pp. I-I. IEEE, 2001.
222
+
223
+ [14] W. Wang, A. C. Den Brinker, S. Stuijk, and G. De Haan. Algorithmic principles of remote ppg. IEEE Transactions on Biomedical Engineering, 64(7):1479-1491, 2016.
224
+
225
+ [15] H. Xiao, J. Xu, D. Hu, and J. Wang. Combination of denoising algorithms for video-based non-contact heart rate measurement. In 2022 3rd Information Communication Technologies Conference (ICTC), pp. 141-145. IEEE, 2022.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/UlIJS3dcMi/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,209 @@
1
+ § A NON-CONTACT HEART RATE MEASUREMENT METHOD RESISTANT TO ILLUMINATION CHANGES BASED ON FAST WAVELET TRANSFORM AND SECOND-ORDER BLIND IDENTIFICATION IN FAR-FIELD ENVIRONMENTS
2
+
3
+ Rui Yuan*
4
+
5
+ Anhui Province Key Laboratory of
6
+
7
+ Multimodal Cognitive Computation,
8
+
9
+ School of Computer Science and
10
+
11
+ Technology
12
+
13
+ Anhui University
14
+
15
+ Hefei, China
16
+
17
+ Chao Zhang ${}^{ \dagger }$
18
+
19
+ Anhui Province Key Laboratory of
20
+
21
+ Multimodal Cognitive Computation,
22
+
23
+ School of Computer Science and
24
+
25
+ Technology
26
+
27
+ Anhui University
28
+
29
+ Hefei, China
30
+
31
+ § ABSTRACT
32
+
33
+ Heart rate (HR) is a key parameter for evaluating a person's physiological condition. In recent years, there has been much research on remote heart rate measurement. However, these methods are mostly designed for close-range scenarios, making them inapplicable in many settings. Remote photoplethysmography (rPPG) provides more possibilities for heart rate measurement in far-field environments. Moreover, the performance of heart rate measurement degrades significantly when the subject moves or the illumination changes. We propose an rPPG framework for heart rate detection that selects a larger region of interest (ROI) using feature point tracking in far-field environments. The combination of fast wavelet transform (FWT) and second-order blind identification (SOBI) is used to resist illumination interference and most of the motion interference, and singular spectrum analysis (SSA) is then used to resist residual motion interference. In addition, we collected a database of illumination changes in far-field environments and tested our framework with it. The results show that our method outperforms the previous methods compared.
34
+
35
+ Index Terms: Heart rate (HR), Far-field environments, Fast wavelet transform (FWT), Second-order blind identification (SOBI).
36
+
37
+ § 1 INTRODUCTION
38
+
39
+ Heart rate is a key parameter for evaluating a person's physiological condition. Early symptoms of cardiovascular disease are not easy to detect and require specific heart rate monitoring equipment, such as the electrocardiogram (ECG), which must be in contact with the surface of the skin. If continuous long-term monitoring is required, this can cause inconvenience to the user or patient. In addition, ECG devices are often large and expensive, resulting in high measurement costs, which is not conducive to real-time monitoring of the user's physiological and psychological health. People urgently hope to be able to learn their cardiovascular physiological condition through safe and convenient means. Compared with contact-based devices, video-based non-contact heart rate monitoring has obvious advantages: it uses a consumer-level camera to sense the heart rate by capturing body surface videos, so it is very cheap and user friendly. The non-contact method overcomes the disadvantages of contact-based heart rate measurement and is therefore widely used in human-computer interaction, health monitoring, and other fields.
40
+
41
+ rPPG is a non-contact method for measuring heart rate. rPPG technology has broad application prospects, for example for neonates, burn patients, or long-term monitoring, where measurement does not require a far distance, and near-distance rPPG technology has already achieved good results in these scenes. However, in some environments where heart rate measurement needs to be performed at a considerable distance, most current rPPG applications are unable to meet the requirements. For example, in scenarios such as court hearings, live sports events, and online interviews, it is necessary to obtain the heart rate of individuals over a certain distance. Previous research has shown the feasibility of video-based heart rate measurement, but in real-world environments, changes of illumination and human motion can significantly affect measurement results. It is difficult to avoid illumination interference during long-term heart rate monitoring, as changes in illumination include various forms of noise caused by environmental changes, such as flickering indoor lighting or changes in natural outdoor light. Additionally, it is difficult to avoid interference from human motion, which includes both rigid movements such as head tilting and non-rigid movements such as blinking and smiling. In this paper, we propose a framework that can effectively resist these interferences in a far-field environment. Furthermore, we collected a database of illumination changes in far-field environments and used this database to test our algorithm.
42
+
43
+ § 2 RELATED WORKS
44
+
45
+ rPPG is a non-contact method for measuring heart rate from a distance using a camera. The basic rPPG process involves the following steps: first, capturing a video of the subject's face using a camera with sufficient resolution and frame rate; then, selecting an ROI on the subject's face, typically around the forehead or cheek; after that, extracting the blood volume pulse (BVP) signal from the selected ROI using various signal processing techniques and processing the BVP signal to remove any noise or artifacts; finally, estimating the heart rate from the processed BVP signal.
46
+
47
+ Verkruysse et al. [12] first proposed the use of a regular high-definition camera with rPPG technology to measure heart rate. Under ideal conditions, they used the G-BVP method to estimate heart rate and achieved relatively accurate results. However, in practical scenarios, rPPG technology struggles to extract accurate BVP waveforms due to changes of illumination and significant motion. To address these issues, Poh et al. [9] first proposed a method based on independent component analysis (ICA). They believed that the R, G, and B channel signals from the imaging device were mixed with BVP signals and noise signals and that ICA separation methods could isolate the BVP signals from the three channels. The results showed that the ICA separation method provided more accurate results than using the green channel alone. Lewandowska et al. [7] used principal component analysis (PCA) to select the strongest periodic signals as BVP signals, resulting in accurate heart rate measurements. Li et al. [8] proposed a method based on face tracking and normalized least mean square adaptive filtering to combat the effects of illumination and motion. Cheng et al. [3] used joint blind source separation and ensemble empirical mode decomposition to analyze color signals from multiple facial subregions to resist the effects of illumination changes. In addition, there are also some model-based methods [4] [14], which assume that motion artifacts can be eliminated by linear combinations of the R, G, and B channels. When detecting heart rate based on rPPG, reliable region of interest (ROI) detection and tracking are key steps. By removing areas on the face that are more susceptible to motion or change and using local motion compensation methods [9], the accuracy of heart rate measurement can be ensured.
48
+
49
+ *e-mail: E21301281@stu.ahu.edu.cn
50
+
51
+ ${}^{ \dagger }$ e-mail: iiphci_ahu@163.com
52
+
53
+ < g r a p h i c s >
54
+
55
+ Figure 1: Proposed framework for heart rate measurement.
56
+
57
+ The above methods aim to mitigate the effects of illumination changes and human motion on heart rate measurement as much as possible. However, their experiments were carried out with the camera very close to the subjects, which imposes many limitations on practical application scenarios. Al-Naji et al. [1] used a framework combining video magnification and blind source separation to reduce the impact of illumination changes on heart rate measurement. They increased the distance of heart rate measurement and provided more space for the application of rPPG. Since the facial image is smaller in a far-field environment, they selected the entire face as the ROI. To avoid interference from non-rigid motion on the measurement results, they removed the eye region as accurately as possible. However, their experiments only involved six individuals, which may be insufficient data to demonstrate the effectiveness of the measurement. In addition, the ROI they selected includes some background outside the face, which may introduce non-physiological signals during video signal processing, resulting in inaccurate heart rate measurement. Moreover, the region they removed around the eyes is too large, which further amplifies the disadvantage of the small facial image in the far-field environment, leaving an ROI that is too small for heart rate measurement and producing inaccurate results.
58
+
59
+ § 3 A FRAMEWORK FOR RESISTING ILLUMINATION AND MOTION INTERFERENCE IN FAR-FIELD ENVIRONMENTS
60
+
61
+ As there is no publicly available database for illumination changes in far-field environments, we collected a database specifically for this scenario. Additionally, we propose a framework that can resist illumination and slight motion interference in far-field environments. Our framework consists of three steps, as shown in Fig. 1. In the first step, we obtain the ROI that contains the raw physiological signal of the person. We use the Viola-Jones (VJ) algorithm [13] to detect the face in the first frame and then use the Kanade-Lucas-Tomasi (KLT) algorithm [11] to track the position of the ROI. In each frame, we convert the ROI image into an RGB three-channel signal by spatial averaging. The purpose of the second step is to reduce the interference caused by changes in illumination. We perform fast wavelet transform (FWT) [15] on the three-channel signal, preprocessing it to remove some of the illumination interference, and then use the SOBI algorithm [2] to process the signal to remove both illumination and motion interference. The purpose of the third step is to filter out residual motion interference. We perform singular spectrum analysis (SSA) [5] on the processed signal, which can resist motion interference to some extent. Then, we estimate the heart rate using the Fourier transform. The details of each step are explained in the following sections.
62
+
63
+ § 3.1 ROI DETECTION AND TRACKING
64
+
65
+ < g r a p h i c s >
66
+
67
+ Figure 2: Video frame. (a) A frame from the video, (b) Facial region image, (c) Image with generally selected ROI, (d) Image with feature point tracked ROI.
68
+
69
+ The selection of the face region of interest (ROI) is a crucial step in rPPG heart rate measurement. Figure 2(a) shows a frame from one of the videos in our self-collected dataset, captured under far-field conditions with illumination changes. Fig. 2(b) shows the facial region image without a selected ROI. Typically, as shown in Fig. 2(c), the region below the eyes and above the mouth is selected as the ROI because this area is not susceptible to motion interference and contains dense capillaries that provide the required signal, making it a good choice. However, if a fixed box in the video frame is used to represent the ROI, the box may drift away from the original ROI, losing the region from which physiological signals are obtained. In a far-field environment, since the facial area is relatively small, our goal is to include as much of the facial area as possible in the ROI while excluding the eye region, which produces non-rigid motion and interferes with heart rate measurement. As shown in Fig. 2(d), we used the Viola-Jones face detector [13] to detect the face in the first frame, which provides a rectangular box containing the approximate position of the face. We used a rough facial template to locate the skin areas above and below the eyes and remove the eye region. To remove the background, we detected feature points [10] using the minimum eigenvalue algorithm and selected suitable facial landmarks within the rectangular box to include as much of the facial area as possible in the ROI. Then, we used the Kanade-Lucas-Tomasi (KLT) technique [11] to track the face in each frame of the video. By tracking the feature points across consecutive frames, we adjusted the spatial position and size of the ROI in 2D and obtained the raw RGB signal by spatially averaging the pixel intensity values in each frame's ROI.
70
+
71
+ § 3.2 ILLUMINATION RECTIFICATION
72
+
73
+ In this section, we aim to remove illumination changes as much as possible. To achieve this, we use wavelet transform to process the RGB signal. Wavelet transform is a time-frequency localized analysis method, whose window area is fixed but time and frequency windows are variable. Therefore, wavelet transform has the characteristics of multi-resolution analysis and can represent the local features of signals in both time and frequency domains. In simple terms, wavelet transform can decompose a signal into components of different frequencies, in order to better understand the temporal and spectral characteristics of the signal. In the low-frequency part, wavelet transform has higher spectral resolution and lower time resolution, while in the high-frequency part, it has higher time resolution and lower spectral resolution. These two characteristics are consistent with the characteristics of slow changes in low-frequency signals and rapid changes in high-frequency signals, making wavelet transform adaptive to different types and frequencies of signals. Through fast wavelet transform, we can decompose the influence of illumination changes on the RGB signal into different frequency components, and select appropriate components for filtering to extract the heart rate signal and remove noise. This process is similar to passing the signal through a band-pass filter, retaining only the signals within the target frequency range and filtering out other frequency components. Finally, the filtered components are combined into a clean heart rate signal, thereby removing the interference of illumination changes on heart rate detection. Therefore, the fast wavelet transform algorithm can be regarded as a filter. For a 1D input signal $f\left( t\right)$ , its decomposition formula is as follows:
74
+
75
+ $$
76
+ f\left( t\right) = {A}_{n} + {D}_{n} + {D}_{n - 1} + \ldots + {D}_{1} \tag{1}
77
+ $$
78
+
79
+ In formula (1), $n$ represents the number of decomposition levels of the signal. Through wavelet decomposition, we can divide the signal into low-frequency and high-frequency parts, represented by $A$ and $D$ , respectively [15]. In order to better process the signal, we use filters to separate the high-frequency and low-frequency components. Then, we recombine them and perform dimensionality reduction to integrate local information (low-frequency) and spatial information (high-frequency). The proposed method adopts 4-level wavelet decomposition and chooses the db3 wavelet, which has good time-domain and frequency-domain characteristics. The db3 wavelet consists of three scales of wavelet functions and three scales of wavelet packet functions, which can provide a higher signal compression ratio and better signal reconstruction quality. In addition, the db3 wavelet performs well in both low-frequency and high-frequency decomposition and can effectively extract heart rate signals while removing noise and interference. Therefore, choosing the db3 wavelet for one-dimensional discrete wavelet transformation can improve the measurement accuracy. Considering that the frequency range of heart rate is completely covered by the frequency range of ${D}_{4}$ , we set the decomposition coefficients of ${D}_{1},{D}_{2},{D}_{3}$ , and ${A}_{4}$ to zero and retain the decomposition coefficient of ${D}_{4}$ . Then, we reconstruct from the coefficients of ${D}_{1},{D}_{2},{D}_{3},{D}_{4}$ , and ${A}_{4}$ to obtain the preprocessed signal. This preprocessing better highlights the characteristics of the heart rate signal and improves signal quality. By further analyzing the preprocessed signal, we can more accurately measure the heart rate.
80
+
81
+ < g r a p h i c s >
82
+
83
+ Figure 3: Raw RGB signals and SOBI processed signal. (a) Raw signal of red channel, (b) Raw signal of green channel, (c) Raw signal of blue channel, (d) SOBI processed signal.
84
+
85
+ The blind source separation algorithm is used to further denoise the preprocessed signal, which can remove most of the lighting and motion noise. We use SOBI for denoising, a type of ICA (Independent Component Analysis) algorithm that differs from other ICA algorithms in that it uses second-order statistical information to reconstruct the source signals. The SOBI method can deal with non-Gaussian noise better, so it is more suitable for BVP signal processing; therefore, under illumination and motion interference, the SOBI method performs better than other ICA methods. The observed signal $\mathbf{X}$ is obtained by linearly mixing the source signals $\mathbf{S}$ through the mixing matrix $\mathbf{A}$ . The source signals can be represented as $\mathbf{S} = {\left\lbrack {s}_{1},{s}_{2},{s}_{3}\right\rbrack }^{\mathrm{T}}$ , and the mathematical expression is $\mathbf{X} = \mathbf{{AS}}$ , written component-wise in formula (2). Both the source signals $\mathbf{S}$ and the mixing matrix $\mathbf{A}$ are unknown, and formula (3) can be used to separate the observed signal $\mathbf{X}$ , where $\mathbf{W}$ is the separation matrix, represented as $\mathbf{W} = {\mathbf{A}}^{-1}$ , and $\mathbf{Y} = {\left\lbrack {y}_{1},{y}_{2},{y}_{3}\right\rbrack }^{\mathrm{T}}$ is the estimate of the source signals $\mathbf{S}$ . $\mathbf{W}$ is randomly initialized and continuously optimized until $\mathbf{Y}$ is close to $\mathbf{S}$ , recovering the desired source signals.
86
+
87
+ $$
88
+ {x}_{i} = \mathop{\sum }\limits_{{j = 1}}^{3}{a}_{ij}{s}_{j}\left( {1 \leq i \leq 3}\right) \tag{2}
89
+ $$
90
+
91
+ $$
92
+ {y}_{i} = \mathop{\sum }\limits_{{j = 1}}^{3}{w}_{ij}{x}_{j}\left( {1 \leq i \leq 3}\right) \tag{3}
93
+ $$
94
+
95
+ § 3.3 MOTION ELIMINATION
96
+
97
+ We use SSA to remove residual motion noise from the BVP signal. SSA is an effective method for processing nonlinear time series data, which can decompose a time series into meaningful components without prior knowledge. It can directly extract the artifact spectrum from the BVP signal of the facial ROI, assuming that the facial ROI contains all the interference information and that the noise artifact is unrelated to the pulse signal. By applying SSA, dominant noise artifacts can be extracted, and we find that the effect of noise artifacts on all RGB channels is almost the same. To eliminate residual noise artifacts in the extracted BVP signal, SSA can be applied to estimate the spectrum and obtain a BVP signal without noise artifacts. The modal decomposition computed by SSA is as follows. Let $x\left( t\right)$ be the normalized BVP signal at time $t$ , and define its trajectory matrix as follows:
98
+
99
+ $$
100
+ X = \left\lbrack \begin{matrix} x\left( 1\right) & x\left( 2\right) & \ldots & x\left( m\right) \\ x\left( 2\right) & x\left( 3\right) & \ldots & x\left( {m + 1}\right) \\ \vdots & \vdots & \ddots & \vdots \\ x\left( {n - m + 1}\right) & x\left( {n - m + 2}\right) & \ldots & x\left( n\right) \end{matrix}\right\rbrack \tag{4}
101
+ $$
102
+
103
+ In this method, the trajectory matrix is defined by the window length $m$ and the total number of data points $n$ . Typically, the window length is chosen as one quarter of the data; if the data is periodic, it can also be chosen as one quarter of the longest period in the data. Singular value decomposition of the trajectory matrix $\mathbf{X}$ yields three matrices $\mathbf{U},\mathbf{W}$ , and $\mathbf{V}$ , expressed as:
104
+
105
+ $$
106
+ X = {UW}{V}^{T} \tag{5}
107
+ $$
108
+
109
+ where $\mathbf{U}$ and $\mathbf{V}$ are orthogonal matrices, and $\mathbf{W}$ is a diagonal matrix whose diagonal entries are the singular values $\sqrt{{\lambda }_{i}}$ . With $\mathbf{U} = \left( {{u}_{1},{u}_{2},\ldots ,{u}_{r}}\right)$ , $\mathbf{V} = \left( {{v}_{1},{v}_{2},\ldots ,{v}_{r}}\right)$ , and $r \leq \min \left( {m,n}\right)$ , $\mathbf{X}$ can be decomposed into the sum of several matrices as follows:
110
+
111
+ $$
112
+ X = {X}_{1} + {X}_{2} + \ldots + {X}_{r} \tag{6}
113
+ $$
114
+
115
+ $$
116
+ {X}_{i} = \sqrt{{\lambda }_{i}}{U}_{i}{V}_{i}^{T}\left( {i = 1,2,\ldots ,r}\right) \tag{7}
117
+ $$
118
+
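+ As an illustration of Eqs. (4)-(7), the following minimal sketch embeds the signal, applies SVD, keeps the leading modes, and maps the result back to a 1-D series with the standard SSA diagonal-averaging step; the fixed `keep=2` mode count is our simplification standing in for the paper's weight-correlated thresholding.
+
+ ```python
+ import numpy as np
+
+ def ssa_reconstruct(x, m=None, keep=2):
+     """Basic SSA per Eqs. (4)-(7): embed, SVD, keep leading modes, average back."""
+     x = np.asarray(x, float)
+     n = len(x)
+     if m is None:
+         m = n // 4                           # window length: one quarter of the data
+     K = n - m + 1
+     X = np.array([x[i:i + m] for i in range(K)])       # trajectory matrix, Eq. (4)
+     U, s, Vt = np.linalg.svd(X, full_matrices=False)   # X = U W V^T, Eq. (5)
+     Xr = (U[:, :keep] * s[:keep]) @ Vt[:keep]          # X_1 + ... + X_keep, Eqs. (6)-(7)
+     # standard SSA diagonal averaging to turn Xr back into a 1-D series
+     rec, cnt = np.zeros(n), np.zeros(n)
+     for i in range(K):
+         rec[i:i + m] += Xr[i]
+         cnt[i:i + m] += 1
+     return rec / cnt
+ ```
+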
119
+ The singular values of these matrices decrease with increasing subscript, meaning that modes with smaller subscripts have larger variances and therefore contribute more to the original signal. By eliminating components with relatively small correlation to the signal, the remaining components can be reconstructed to obtain a signal free of the irrelevant parts. Using the BVP signal reconstructed from $\mathbf{X}$ , a robust heart rate estimate can be obtained. We use a weight-correlated method to set the threshold, ensuring that noise is effectively eliminated without losing the valid original signal. We then apply a fast Fourier transform (FFT) to the BVP signal to convert it to the frequency domain and analyze its power spectral density (PSD). Since the heart rate appears as a distinct peak in the spectrum, we can estimate the heart rate frequency ${f}_{HR}$ as the frequency with the maximum spectral density:
120
+
121
+ $$
122
+ {f}_{HR} = \operatorname{argmax}\left| {W\left( f\right) }\right| \tag{8}
123
+ $$
124
+
125
+ where $W\left( f\right)$ is the power spectral density of the BVP signal. Finally, the estimated heart rate is ${HR} = {f}_{HR} \times {60}$ .
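+
+ A minimal sketch of this spectral estimate follows; the 30 fps sampling rate matches the capture setup described below, while the 0.7-4 Hz search band (42-240 bpm) is a common plausibility window that we assume, as the paper does not state one.
+
+ ```python
+ import numpy as np
+
+ def estimate_hr(bvp, fs=30.0, fmin=0.7, fmax=4.0):
+     """Estimate HR (bpm) from a BVP trace via the peak of its FFT power spectrum."""
+     bvp = bvp - np.mean(bvp)
+     freqs = np.fft.rfftfreq(len(bvp), d=1.0 / fs)
+     psd = np.abs(np.fft.rfft(bvp)) ** 2           # spectral power, cf. Eq. (8)
+     band = (freqs >= fmin) & (freqs <= fmax)      # assumed plausible HR band
+     f_hr = freqs[band][np.argmax(psd[band])]
+     return f_hr * 60.0                            # HR = f_HR * 60
+ ```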
126
+
127
+ § 4 EXPERIMENTAL SETUP AND RESULTS
128
+
129
+ Because the experiment was conducted in a far-field environment, no publicly available dataset was suitable. We therefore collected our own data from 13 subjects (10 males and 3 females), with each video lasting 70 seconds. We recorded the videos with a network camera (Guke, G06-18X) placed 5 meters away from the subjects, at a frame rate of 30 fps and a resolution of $640 \times 480$ px. To produce illumination changes, we placed an 18 W white LED light panel (FengChuan Ltd., Shenzhen, China) 1.5 meters from the subjects. To obtain the ground truth of the subjects' heart rate, we used a finger-clip oximeter (Contec Medical, CMS50E) to measure the photoplethysmogram (PPG) on the subjects' fingers. The experimental scene is shown in Fig. 4.
130
+
132
+
133
+ Figure 4: The scene diagram of the experiment.
134
+
136
+
137
+ Figure 5: Heart rate over time as measured by the reference device, the SOBI algorithm, and the proposed algorithm.
138
+
139
+ § 5 RESULT ANALYSIS
140
+
141
+ In this section, we processed the raw signal with different denoising methods and evaluated their performance by comparing the resulting heart rates against the reference heart rate measured by the finger-clip oximeter. To assess the accuracy of heart rate measurement, we used two metrics: mean absolute error (MAE) and root mean square error (RMSE), computed as follows:
142
+
143
+ $$
144
+ {RMSE} = \sqrt{\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{\left( {h}_{i}\left( x\right) - {y}_{i}\right) }^{2}} \tag{9}
145
+ $$
146
+
147
+ $$
148
+ {MAE} = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\left| {{h}_{i}\left( x\right) - {y}_{i}}\right| \tag{10}
149
+ $$
150
+
151
+ where $n$ is the number of measurements, ${y}_{i}$ is the ground-truth heart rate, and ${h}_{i}\left( x\right)$ is the heart rate estimated by the method under evaluation. Smaller RMSE and MAE values indicate better denoising performance. These metrics let us compare the strengths and weaknesses of the different denoising methods and identify the one that provides the most accurate heart rate measurements.
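+
+ For completeness, Eqs. (9)-(10) translate directly into code:
+
+ ```python
+ import numpy as np
+
+ def rmse(h, y):
+     """Root mean square error between estimates h and ground truth y, Eq. (9)."""
+     h, y = np.asarray(h, float), np.asarray(y, float)
+     return float(np.sqrt(np.mean((h - y) ** 2)))
+
+ def mae(h, y):
+     """Mean absolute error, Eq. (10)."""
+     h, y = np.asarray(h, float), np.asarray(y, float)
+     return float(np.mean(np.abs(h - y)))
+ ```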
152
+
153
+ Taking an example video from our collected database, we analyzed how the heart rate evolves over time according to the reference PPG signal from the CMS50E device, the SOBI denoising algorithm, and our proposed method, as shown in Fig. 5. In Fig. 5, $H{R}_{ppg}$ is the ground-truth heart rate collected by the instrument, $H{R}_{ours}$ is the heart rate obtained with our proposed method, and $H{R}_{sobi}$ is the heart rate obtained with the SOBI algorithm. Under changing illumination, the heart rate obtained directly from the SOBI algorithm is not particularly accurate, whereas our proposed method fits the instrument-measured heart rate more closely.
154
+
155
+ Table 1: The RMSE and MAE values of generally selected ROI under different methods.
156
+
+ <table><tr><td>Method</td><td>G-BVP</td><td>SOBI</td><td>OURS</td></tr><tr><td>RMSE (bpm)</td><td>18.23</td><td>13.71</td><td>9.52</td></tr><tr><td>MAE (bpm)</td><td>16.17</td><td>11.74</td><td>7.76</td></tr></table>
168
+
169
+ Table 2: The RMSE and MAE values of feature point tracked ROI under different methods.
170
+
+ <table><tr><td>Method</td><td>G-BVP</td><td>SOBI</td><td>OURS</td></tr><tr><td>RMSE (bpm)</td><td>18.05</td><td>12.96</td><td>8.9</td></tr><tr><td>MAE (bpm)</td><td>15.95</td><td>10.62</td><td>7.13</td></tr></table>
182
+
183
+ We compared generally selected regions of interest with regions of interest defined by feature point tracking, using several metrics to evaluate the experiment. To present the results intuitively, Table 1 and Table 2 list the corresponding RMSE and MAE values. The difference between the two ROI strategies is not large, but feature point tracking is consistently slightly better than general selection. In the experiment, the RMSE of the G-BVP method with general selection and feature point tracking was 18.23 bpm and 18.05 bpm, respectively, with MAE of 16.17 bpm and 15.95 bpm. The RMSE of the SOBI algorithm was 13.71 bpm and 12.96 bpm, with MAE of 11.74 bpm and 10.62 bpm. The proposed method achieved RMSE of 9.52 bpm and 8.9 bpm, with MAE of 7.76 bpm and 7.13 bpm. Feature point tracking yields smaller errors: it automatically locates the ROI and tracks it adaptively, which better copes with interference from facial expressions and other factors and produces more accurate heart rate measurements.
184
+
185
+ To verify the performance of the method, we estimated heart rate on videos from our self-collected database using several typical methods: G-BVP, FastICA [6], POS, CHROM, and SOBI. We compared them with the proposed method to validate the experimental results. Fig. 6 shows the heart rate range plot for the different methods: the red line is the mean, the blue boxes show the range covering most values, and the red dots are individual outliers. Taking the PPG values as reference, our proposed method is largely consistent with the PPG range, while the other methods show large range offsets or more outliers, so their results are less accurate. The mean values of the POS algorithm and our proposed method are both close to the instrument measurements.
186
+
188
+
189
+ Figure 6: Heart rate ranges and outlier distributions of different methods.
190
+
192
+
193
+ Figure 7: RMSE and MAE of various classical methods and proposed methods.
194
+
195
+ Fig. 7 shows that the proposed method has the smallest RMSE and MAE, while the other methods yield comparatively large errors. Among them, the POS algorithm performs best, followed by SOBI, CHROM, FastICA, and G-BVP. The RMSE of our proposed method is 8.9 bpm and the MAE is 7.13 bpm, which indicates strong robustness to changes in illumination.
196
+
197
+ For remote far-field measurement under changing illumination, the Bland-Altman statistics of all remote measurement methods are shown in Fig. 8. For G-BVP, as shown in Fig. 8(a), the mean bias is -10 beats/min and the limits of agreement are -40.32 to +20.31 beats/min. For FastICA, as shown in Fig. 8(b), the mean bias is -12.16 beats/min and the limits of agreement are -30.9 to +14.1 beats/min. For POS, as shown in Fig. 8(c), the mean bias is -1.67 beats/min and the limits of agreement are -26.9 to +23.57 beats/min. For CHROM, as shown in Fig. 8(d), the mean bias is -3.53 beats/min and the limits of agreement are -34.77 to +27.72 beats/min. For SOBI, as shown in Fig. 8(e), the mean bias is -8.14 beats/min and the limits of agreement are -30.9 to +14.1 beats/min. Using our proposed framework, as shown in Fig. 8(f), the mean bias is -3.17 beats/min and the limits of agreement are -18.82 to +12.49 beats/min. From these statistics, POS, CHROM, and our proposed method have the smallest mean biases; among them, our proposed method has the smallest standard deviation and almost no outliers beyond the agreement range, indicating good performance in remote heart rate measurement.
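+
+ The bias and limits of agreement above follow the conventional Bland-Altman definition (mean difference plus or minus 1.96 standard deviations); a minimal sketch of that computation, with names of our own choosing:
+
+ ```python
+ import numpy as np
+
+ def bland_altman_stats(hr_est, hr_ref):
+     """Mean bias and 95% limits of agreement between estimated and reference HR."""
+     d = np.asarray(hr_est, float) - np.asarray(hr_ref, float)
+     bias = d.mean()
+     half_width = 1.96 * d.std(ddof=1)
+     return bias, bias - half_width, bias + half_width
+ ```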
198
+
199
+ § 6 CONCLUSION
200
+
201
+ Our research not only extends the distance over which heart rate can be measured, but also proposes a new framework for remote heart rate measurement. In response to the performance degradation of previous face-video remote heart rate measurement methods under environmental illumination changes and subject movement, our framework consists of three main processes. First, we use the VJ algorithm to accurately identify the facial ROI, addressing the problem caused by rigid head movement, and use KLT tracking on every frame to further reduce motion interference. Second, we use FWT to remove initial illumination interference and then apply the SOBI algorithm to remove most of the remaining noise. Finally, we use the SSA method to remove residual motion artifact noise. We conducted extensive experimental evaluations of this method and compared it with reference measurements. The results show that, in a far-field environment, the framework can accurately measure heart rate under illumination interference. Evaluating the system on multiple video data sources, we found that it exhibits strong consistency, high correlation, and low noise levels, and that in complex and changing situations its results surpass traditional methods such as G-BVP, FastICA, POS, CHROM, and SOBI. These results matter for the development of remote heart rate measurement technology: our method extends the measurement distance and reduces the influence of environmental illumination changes and subject movement on measurement results, improving the practicality and reliability of the technology and expanding the application scenarios of rPPG. We believe that this technology will be widely used in the future to provide more convenient and accurate means of health monitoring.
202
+
204
+
205
+ Figure 8: Bland-Altman plots. (a) G-BVP method, (b) FastICA method, (c) POS method, (d) CHROM method, (e) SOBI method, (f) Our method.
206
+
207
+ § ACKNOWLEDGMENTS
208
+
209
+ I would like to express my sincere gratitude to my supervisor, Chao Zhang, for his invaluable guidance, support and encouragement throughout my research work. His extensive knowledge, critical insights and constructive feedback have been instrumental in shaping my research and helping me achieve my academic goals. I am also deeply grateful to him for his patience, kindness and understanding, which have made my research journey a truly enriching and rewarding experience. I will always cherish the lessons I have learned from him and the memories we have shared together. Thank you, Chao Zhang, for everything you have done for me.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/akc8f5ampp/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,445 @@
1
+ # Challenges and Opportunities for Software Testing in Virtual Reality Application Development
2
+
3
+ Category: Submitted to GI'23
4
+
5
+ ## Abstract
6
+
7
+ Testing is a core process in the development of Virtual Reality (VR) software, ensuring the delivery of high-quality VR products and experiences. As VR applications have become more popular across different fields, more challenges and difficulties have arisen during the testing phase. However, few studies have explored the challenges of software testing in VR development in detail. This paper aims to fill that gap through a qualitative interview study with 14 professional VR developers and a survey study with 33 additional participants. As a result, we derived 10 key challenges that VR developers often confront during software testing. Our study also sheds light on potential design directions for VR development tools, based on the identified challenges and needs of VR developers, to alleviate existing issues in testing.
8
+
9
+ ## 1 INTRODUCTION
10
+
11
+ Debugging or testing is one of the critical steps in software development [32]. The creation of Virtual Reality (VR) applications shares a similar process to traditional software development and heavily relies on testing to ensure the quality of the final deliverables. However, VR application testing is more challenging and complex $\left\lbrack {2,{30}}\right\rbrack$ due to its inherent nature of relying on multiple devices and platforms including headsets and desktops. For instance, developers need to put on and take off their VR head-mounted display (HMD) quite frequently during the testing stage, which not only is time-consuming but also causes motion sickness [22]. In addition, developers do not always have access to VR HMDs to realistically evaluate the quality of their creations.
12
+
13
+ While the human-computer interaction (HCI) community is increasingly focused on researching immersive technologies such as VR and augmented reality (AR), there is still a lack of thorough studies exploring the challenges as well as opportunities for software testing in VR development. Various studies have explored AR/VR applications (e.g., [23, 29,40]), interaction techniques (e.g., $\left\lbrack {{20},{28},{35},{36}}\right\rbrack$ ), and authoring tools (e.g., $\left\lbrack {5,9,{21}}\right\rbrack$ ). Some have provided insights into the challenges and opportunities of AR/VR from the development perspective to better understand the needs of professional AR/VR developers [2, 13, 16, 24, 30]. However, little attention has been paid to the challenges of the testing phase in VR development, despite developers having confronted numerous difficulties as introduced above.
14
+
15
+ In this research, we aimed to fill in the gap by exploring the challenges and needs of VR developers during their testing phase and identifying promising directions to overcome or resolve these challenges. We first conducted a comprehensive interview study with 14 professional VR developers (11 from industry and 3 from academia) who have diverse backgrounds and different levels of experience and skill sets in VR development. We then performed a thematic analysis of the interview data and identified 10 key challenges for software testing in VR development. Our results confirmed that VR developers face significant challenges during the testing phase of VR development. Despite employing workarounds, our participants found them to be ad-hoc, requiring manual intervention, and prone to errors. We organized the key challenges into three distinct categories (see Table 2): hardware-related challenges (C1-4), software-related challenges (C5-8), and comprehensive challenges (C9-10).
16
+
17
+ To verify the identified challenges with a broader audience, we further conducted a confirmation survey with 33 VR developers by distributing the survey to various related Slack channels. In the survey, we asked participants to rate the importance of the 10 challenges on a 7-point Likert scale and to select the most and least important ones. In the survey results, all the identified challenges received reasonable ratings without outliers, substantiating the validity of our findings.
18
+
19
+ Additionally, we discuss the future opportunities for testing VR applications based on the identified challenges, which can provide guidance to VR developer tool makers and researchers for enhancing the current functionalities of VR development tools and introducing new features. Our study extends the findings of Ashtari et al. [2], Nebeling and Speicher [30], and Krauß et al. [24], which benefits specifically to the testing phase of VR development. In summary, our contributions in this paper include:
20
+
21
+ - Empirical interview and survey studies that examined and validated key challenges in the testing phase of VR development;
22
+
23
+ - The results of 10 identified key challenges in VR testing, along with corresponding future design directions.
24
+
25
+ ## 2 RELATED WORK
26
+
27
+ Our research is related to the existing techniques of VR applications and authoring tools, practices in VR development, as well as studies on VR testing and general software testing.
28
+
29
+ ### 2.1 VR Systems and Applications
30
+
31
+ Virtual Reality (VR) has been widely researched and applied in different fields, such as education, healthcare, science, and entertainment, to empower people's needs in real life. Education and learning are among the most popular fields for these studies. XRStudio [29] creates a VR lecturing system that enables instructors to live stream their lectures in VR to students, who can join the lectures in VR or watch them on 2D displays and in AR. Loki [40], a mixed-reality system, enables learners to view live-streamed tutorials generated by remote instructors in VR or AR. TutoriVR [41] integrates streaming video tutorials with 3D and contextual aids in VR to facilitate the learning and creation process for VR users. VR also plays an increasingly important role in healthcare. Virtual experiences have been studied as a way to create comfortable, enriching pain management experiences [26]. iVR [3] has been proposed to improve users' self-compassion and, in the long term, their positive mental health. Cai et al. [7] have also made efforts to address autism spectrum disorder (ASD) among children in VR learning environments.
32
+
33
+ With the recent explosion in VR systems and applications in different fields, it is important to understand the needs of VR developers during the development process, which has motivated our study. This could provide them with a better VR development experience and attract more developers into VR application development.
34
+
35
+ ### 2.2 Development and Authoring Tools for VR
36
+
37
+ Development tools for VR assist creators with varying expertise levels in producing VR software. VR development tools encompass 3D game engines (e.g., Unity, Unreal, Godot) and development toolkits/frameworks (e.g., MRTK, A-Frame). These tools empower VR developers and researchers to create versatile VR applications.
38
+
39
+ Based on these tools, several research studies $\left\lbrack {5,8,{12},{15},{21},{43}}\right\rbrack$ have proposed VR authoring tools to satisfy the specialized needs of end users. Some systems (e.g., VREX [5], Xr360 [21], Genesys [12]) are designed to lower the threshold of VR development and speed up the process. Others help users meet specialized needs such as creating interactive scenes [43], making experiential learning courses [8], and authoring VR games for physical spaces [15].
40
+
41
+ An existing study [11] shows that VR authoring tools could help facilitate the creation of different VR features. Despite the availability of the existing authoring tools, there has been a limited number of tools created and researched to ease the VR testing practices for developers. Our study provides empirical insights into the testing of VR development, highlighting the challenges faced by VR developers with some future directions for the creation of new tools that could facilitate VR testing.
42
+
43
+ ### 2.3 VR Development Practices
44
+
45
+ To better understand the challenges and needs in the testing phase of VR development, our study investigates the current development practices of VR developers. Unity and Unreal are the game engines most commonly used by VR developers. An exploratory study by Ghrairi et al. [17] discovered that the majority of VR projects on GitHub are currently small to medium-sized, with JavaScript (used for the web) and C# (used in Unity) being the most popular programming languages. Unity has emerged as the preferred game engine for VR development and is the most frequently discussed topic on Stack Overflow. In addition to Unity, Unreal Engine is also utilized by VR developers and researchers for creating VR content $\left\lbrack {6,{10}}\right\rbrack$ . Unity is preferred for its ease of use, asset store, and support for various platforms, while Unreal Engine is favoured for its advanced graphics capabilities and visual scripting [37]. Numerous customized tools, such as the Unity XR Interaction Toolkit and MRTK, have been developed to streamline and support the VR development process in Unity or Unreal Engine.
46
+
47
+ On the other hand, the field of web-based VR development, particularly through the implementation of WebXR, has experienced significant growth in recent years. WebXR, an API that enables the creation and integration of immersive experiences directly within web browsers, has emerged as a popular alternative for the industry. By allowing developers to create platform-agnostic VR experiences [25], WebXR fosters accessibility and reduces the need for specialized hardware or software. As a result, researchers are increasingly exploring the potential of web-based VR development for a wide range of applications, such as education $\left\lbrack {{18},{29},{31}}\right\rbrack$ , healthcare [1], and entertainment [19]. The growing interest in WebXR also highlights the importance of developing new tools, frameworks, and best practices to support the unique challenges and opportunities associated with web-based VR development [44].
48
+
49
+ Cross-platform VR development has also been promoted in recent years by organizations like the Khronos Group and its open standard OpenXR. However, significant challenges remain for developers, who face a multitude of issues when working with different VR hardware, software, and application programming interfaces (APIs). Heterogeneous specifications, input mechanisms, and performance capabilities can lead to compatibility and optimization difficulties, requiring developers to adapt their applications to each unique platform. Additionally, the varying degrees of support for industry standards and the rapid evolution of VR technology further complicate cross-platform development. VR development practices have made considerable advancements, thanks to the widespread adoption of game engines like Unity and Unreal, the emergence of web-based VR development through WebXR, and the push for cross-platform development by organizations like the Khronos Group. These improvements have resulted in more efficient development processes and in customized tools and frameworks that assist developers. However, challenges persist in VR development, including compatibility and optimization issues across diverse hardware, software, and APIs, as well as the rapid evolution of VR technology and varying support for industry standards $\left\lbrack {2,{13},{24}}\right\rbrack$ . To ensure the continued growth and success of the VR industry, it is essential to address these challenges, foster collaboration among developers and researchers, and continue exploring new methods and best practices to improve the VR development process and the user experience. Our study contributes by examining the specific challenges in the testing phase of the VR development process.
50
+
51
+ ### 2.4 Testing Practices in Software Development
52
+
53
+ Software testing, which is an essential step in the whole software development workflow [14], has been researched to ensure the quality of the software delivery $\left\lbrack {4,{38},{42}}\right\rbrack$ . While testing in VR software development has not been studied comprehensively, general software testing could still enlighten the directions of VR software testing in different ways.
54
+
55
+ Automation testing, as one of the most important parts of software testing, has been implemented and applied to the software industry broadly. Automation testing has significantly impacted the testing process, with many software tests now being performed using automation tools [38]. These tools reduce the number of people involved and the likelihood of human errors. Automation testing involves test cases that simplify the process of capturing different scenarios and storing them. For example, automation tests have been explored to reduce the errors in software GUI [27].
56
+
57
+ Manual testing is also an important part of software testing and has been researched in comparison with automation testing $\left\lbrack {{33},{34},{39}}\right\rbrack$ . Manual testing always involves human efforts from testing teams such as Quality Assurance testers and Software Developers who are responsible for creating and running tests. Manual testing is a time-consuming process that demands specific qualities in a tester, such as patience, observance, creativity, open-mindedness, and skill [34]. When applied to large software applications or those with extensive datasets, repetitive manual testing can become challenging to execute effectively. This limitation underscores the need for alternative methods, such as automation testing, to improve efficiency and accuracy in software testing processes. However, due to the nature of VR applications, manual testing is still inevitable as VR software relies on human work to ensure the quality of the products such as the visual presentation of the contents and graphics performance in the VR headsets.
58
+
59
+ While the above studies have explored the needs in general software testing and have proposed tools to address the issues, VR testing can be particularly challenging because of its unique development environment with the HMDs. Different software testing techniques might be customized and applied to VR testing to ensure the delivery of VR software; however, no studies have adequately investigated scenarios of VR testing. Thus, our research specifically aims to get insights into the challenges and opportunities in the testing phase of VR development.
60
+
61
+ ## 3 INTERVIEW STUDY
62
+
63
+ To investigate the current practices and challenges in VR application testing faced by developers in-depth, we employed a qualitative approach by conducting semi-structured interviews with VR developers with diverse backgrounds. In this section, we describe the setup of the interview study and report the results in the next section.
64
+
65
+ ### 3.1 Participants
66
+
67
+ In order to gain a comprehensive understanding of VR development testing practices, we sought out participants with experience in the field, from both academia and industry. We reached out to local HCI research groups as well as VR-related software companies. Our goal was to create a diverse cohort of participants, with varying backgrounds and project experience. We ultimately recruited 14 participants (11 males, 2 females, and 1 non-binary/third gender; aged 19 - 54), including user experience designers, gaming enthusiasts, and academic researchers, as detailed in Table 1. Their experience ranged from 0-2 years to 10+ years; and the cohort covered a variety of popular VR hardware (HMD) on the market, including Oculus Quest 1/2, Oculus Rift, HTC Vive, Meta Quest Pro, etc. In addition, our participants use various VR development software (e.g., Unity, Unreal, and Godot) for their work. Based on their experiences and roles in VR development, we grouped them as junior developers (JD), experienced developers (ED), and VR development tools developers (VDTD). The diversity in these aspects could provide valuable insights into the testing phase of VR development on different perspectives.
68
+
69
+ Table 1: Participants recruited in our interview study.
70
+
71
+ <table><tr><td>ID</td><td>Role</td><td>Experience</td><td>Software Used</td><td>Hardware (HMD) Used</td></tr><tr><td colspan="5">Junior Developers (JD)</td></tr><tr><td>P1</td><td>Software Developer</td><td>0 - 2 years</td><td>Godot</td><td>Oculus Quest 1/2</td></tr><tr><td>P2</td><td>Student Researcher</td><td>0 - 2 years</td><td>Unity</td><td>Oculus Quest 1/2</td></tr><tr><td>P7</td><td>Software Developer</td><td>0 - 2 years</td><td>Unity</td><td>Oculus Quest 1/2, Oculus Rift, HTC Vive, Google Cardboard</td></tr><tr><td>P10</td><td>Product Designer</td><td>0 - 2 years</td><td>Unity</td><td>Oculus Quest 1/2</td></tr><tr><td colspan="5">Experienced Developers (ED)</td></tr><tr><td>P3</td><td>Student Researcher</td><td>6 - 10 years</td><td>Unity</td><td>Oculus Rift, HTC Vive</td></tr><tr><td>P4</td><td>Architectural Designer</td><td>3 - 5 years</td><td>Unity, Unreal</td><td>Oculus Quest 1/2, Oculus Rift</td></tr><tr><td>P5</td><td>Software Developer</td><td>3 - 5 years</td><td>Unity</td><td>Oculus Quest 1/2, Oculus Rift, HP Reverb, Meta Quest Pro</td></tr><tr><td>P6</td><td>Software Developer</td><td>3 - 5 years</td><td>Unity, Unreal</td><td>Oculus Quest 1/2, Oculus Rift, HTC Vive, Varjo VR1/2/3</td></tr><tr><td>P8</td><td>Software Developer</td><td>10+ years</td><td>Unity</td><td>Oculus Quest 1/2, Oculus Rift, HTC Vive, Google Cardboard, Valve Index, HP Reverb, Pico, Focus 3, and other Windows enterprise headsets</td></tr><tr><td>P9</td><td>Software Development Manager</td><td>3 - 5 years</td><td>Unity</td><td>Oculus Rift, HTC Vive</td></tr><tr><td colspan="5">VR Development Tools Developers (VDTD)</td></tr><tr><td>P11</td><td>Software Development Manager, XR Foundation</td><td>3 - 5 years</td><td>Unity, Unreal, Self-built engine</td><td>Oculus Quest 1/2, Google Cardboard, HP Reverb, Meta Quest Pro</td></tr><tr><td>P12</td><td>VR Development Tools Designer</td><td>10+ years</td><td>Unity</td><td>Oculus Quest 1/2, Oculus Rift, HTC Vive, Google Cardboard</td></tr><tr><td>P13</td><td>VR Development Tools Developer</td><td>6 - 10 years</td><td>Unity, Unreal</td><td>Oculus Quest 1/2, Oculus Rift, HTC Vive</td></tr><tr><td>P14</td><td>VR Development Tools Developer</td><td>3 - 5 years</td><td>Unity</td><td>Oculus Quest 1/2, Oculus Rift, HTC Vive, Valve Index</td></tr></table>
72
+
73
+ ### 3.2 Interview Procedure
74
+
75
+ Prior to the interview, participants were asked to sign the consent form and fill in a pre-study questionnaire about their demographic information. During the interview, we began by asking participants to describe their current or recent VR projects and walk us through the VR development workflow of the projects they discussed. We then inquired about the testing techniques they used and the main challenges or frustrations of their current testing and debugging process. Our interview questions centred on the following themes:
76
+
77
+ 1. Would you briefly introduce one of the interesting VR experiences you had?
78
+
79
+ 2. Could you walk through your VR development workflow on the project you've talked about or another specific example with us?
80
+
81
+ 3. In the walk-through you just shared with us, what were the testing techniques you used?
82
+
83
+ 4. What are the main challenges or frustrations about your current testing workflow?
84
+
85
+ 5. What are your current solutions for the challenges you just mentioned?
86
+
87
+ 6. What could be the ideal VR testing workflow in your mind? It could be a whole workflow, a new tool or some features.
88
+
89
+ In the end, participants were asked to brainstorm the future directions of VR development tools that could better serve the testing purpose of VR development. The whole interview session was audio recorded and lasted around 60 minutes for each participant.
90
+
91
+ ### 3.3 Data Analysis
92
+
93
+ We transcribed the audio recordings of the interview sessions using Otter.ai and manually checked the places that might not be precise due to the limitation of the transcription software. We employed an inductive approach and generated affinity diagrams in Figma to explore the themes related to the main challenges that our participants faced. Initially, one member of our research team conducted an open-coding pass to generate a list of potential codes. We then refined and consolidated these codes through discussions and the use of affinity diagrams, resulting in a final coding scheme. Throughout the coding process, we focused on understanding the challenges and needs of VR developers in their testing practices of VR development.
94
+
95
+ ## 4 CHALLENGES IN VR TESTING
96
+
97
+ Based on our analysis of the interview data, we consolidated the following 10 key challenges in testing VR applications which are grouped into three categories (Table 2).
98
+
99
+ ### 4.1 Hardware-related Challenges
100
+
101
+ Hardware-related challenges often arise during the testing phase, posing significant obstacles for developers. These challenges include cumbersome VR equipment (C1), motion sickness (C2), difficult equipment setup (C3), and performance issues (C4). Addressing these hardware-related challenges is essential for streamlining the testing process and ensuring the successful development of VR applications.
102
+
103
+ C1: Cumbersome VR Equipment. Cumbersome VR headsets are a burden for the developers in the testing phase. First, they may suffer from frequently putting on and taking off the headsets, which is not only time-wasting but also triggers feelings of unease or sickness:
104
+
105
+ Table 2: Summary of challenges in the testing phase of VR development.
106
+
107
+ <table><tr><td colspan="2">Hardware-related Challenges</td></tr><tr><td>C1: Cumbersome VR Equipment</td><td>Developers may suffer from the inconvenience of the VR headsets. For example, developers may put on and take off the VR headsets at a high frequency, experience sickness caused by the heavy weight of the headsets, or be burdened by eyeglasses and long hair.</td></tr><tr><td>C2: Motion Sickness</td><td>Developers may suffer from motion sickness caused by the VR environment and equipment. The motion sickness might be caused by, for instance, the long time spent in VR environments or the low quality (low frame rate, low picture quality) of the VR application during the prototyping/testing phase.</td></tr><tr><td>C3: Difficult Equipment Set Up</td><td>Developers may suffer from difficult and time-wasting equipment set-ups during the testing phase. For example, developers need to recalibrate the VR equipment every time they use it. There is also a strict demand for an open, decent-size, and obstacle-free physical space when developers want to do trials in VR environments.</td></tr><tr><td>C4: Performance Issues</td><td>Developers may suffer from performance issues during the testing phase, such as long build/loading/rendering times, the discrepancy between hardware performance (in most cases, simulator environments like PCs and laptops have better hardware performance than the VR equipment), and low frame rates.</td></tr><tr><td colspan="2">Software-related Challenges</td></tr><tr><td>C5: Missing Testing Information</td><td>Developers may suffer from the lack of testing information. Many developers reported that they cannot monitor program changes (variables, hardware usage) in VR environments and that it is hard to integrate debug information (e.g., logs) for VR applications.</td></tr><tr><td>C6: Difficulty in Finding/Reproducing bugs</td><td>Developers may find it hard to find or reproduce bugs. For example, the large 3D immersive environment of VR makes it hard to find details/small glitches. It is also difficult to reproduce bugs since it is hard to track and reproduce the same actions in VR environments.</td></tr><tr><td>C7: Lack of Automated Testing</td><td>Developers may suffer from the inconvenience of immature automated testing support. The lack of automated/unit tests in VR makes it hard to reduce manual/repetitive testing work.</td></tr><tr><td>C8: Inconvenient Collaboration of VR Testing</td><td>Developers may find it hard to do collaborative debugging/testing with other developers. For example, developers may find it hard to achieve remote testing/debugging and headset sharing with other developers.</td></tr><tr><td colspan="2">Comprehensive Challenges</td></tr><tr><td>C9: Lack of Standards</td><td>Developers may suffer from compatibility issues. Many developers find there are no common standards (e.g., different APIs) between different VR development tools and software (e.g., Unity, Unreal) and hardware (e.g., VR HMDs like Oculus Quest and HTC Vive).</td></tr><tr><td>C10: Few VR-specific Testing Support</td><td>Developers may suffer from the little VR-specific testing support from the community and industry. There are issues such as the low number of existing toolkits, tutorials, and documentation, and no collaboration/integration between different tools/solutions.</td></tr></table>
108
+
109
+ When I'm using the headset, I have to like, put it on, and then, do stuff, and then put it off, put it away and look at my console, so on and so forth. If there is a perfect simulator I can use to test most of the features, it will definitely make debugging a lot easier for me. (P2-JD)
110
+
111
+ From time to time, I got a headache after putting on and off the VR headset to figure out some tricky bugs. (P5-ED)
112
+
113
+ Additionally, with the cumbersome HMDs, wearing eyeglasses or having long hair can add to the difficulty experienced during use:
114
+
115
+ Having long hair just makes it harder to put on and off the VR headset. (P1-JD)
116
+
117
+ So I don't buy glasses that are wider than that. So that limits my frame choices. (P8-ED)
118
+
119
+ One of the VR tool developers expressed concern that this issue might not be resolved in the near future:
120
+
121
+ The equipment being cumbersome is like, these are sort of issues for which the industry does not have a solution yet, and it might take some time until we find one. (P13-VDTD)
122
+
123
+ C2: Motion Sickness. Motion sickness caused by VR environments and equipment was mentioned frequently in the interviews (10/14). This discomfort can result from spending extended periods of time in VR environments and from encountering low-quality VR applications during the prototyping/testing phase, which may exhibit poor picture quality or low frame rates:
124
+
125
+ I used to have a really bad sketchy of my project and the whole horizon in VR was shaking with a very low frame rate, and it caused huge dizziness. (P2-JD)
126
+
127
+ The issue of cumbersome VR Equipment also further exacerbates the situation:
128
+
129
+ If you have to wear glasses, put them on. And you know, it's already like a burden for you. And if you like, do the very frequently, your life, you will have a lot of like headaches and motion sickness. (P7-JD)
130
+
131
+ Some VR applications with specific features such as frequent locomotion also contribute to motion sickness:
132
+
133
+ One thing to notice is about the locomotion in VR: a lot of them give you motion sickness. (P3-ED)
134
+
135
+ Motion sickness can heavily delay testing progress in VR development and may cause production delays:
136
+
137
+ It was difficult for first-time users, I need to adjust things slower for them. (P5-ED)
138
+
139
+ But if your participants or even the developer, start experiencing physical discomfort due to motion sickness, you won't probably be able to have a sustained session on the headset, and therefore, it really impacts what you can get out of the testing. (P12-VDTD)
140
+
141
+ C3: Difficult Equipment Set Up. Developers may struggle with time-consuming and challenging equipment setups during the testing phase. For instance, they are required to recalibrate or even reboot the VR equipment because the existing calibration was easy to break.
142
+
143
+ I need to recalibrate my headsets a lot of time during the testing, sometimes even rebooting the machine, because sometimes the responding calibration was okay, and then the next time the calibration was not good anymore. We have to recalibrate and install things. (P6-ED)
144
+
145
+ Additionally, there is a stringent need for a spacious, open, and obstruction-free physical area when developers wish to conduct trials within VR environments:
146
+
147
+ I need to clean up the physical space around me every time before some intensive VR testing, being able to fake a physical system in which you don't need to really move to test would save my time a lot. (P3-ED)
148
+
149
+ C4: Performance Issues. During the testing phase, developers may encounter performance-related challenges. Such challenges can involve prolonged build, loading, and rendering times, leading many participants to invest additional time in testing their VR projects.
150
+
151
+ Long build, loading or rendering time extended the testing phase of VR development:
152
+
153
+ When I checked in many changes to a big VR project, more than half of the hour was waiting for the build. (P5-ED)
154
+
155
+ Rebooting the machine took a lot of time since it's not only rebooting the machine itself but sometimes you have to rebuild and reload the project. (P6-ED)
156
+
157
+ The disparities in hardware performance also pose difficulties for testing. Typically, simulator environments such as PCs and laptops have superior performance compared to VR equipment. As a result, simulators cannot substitute for VR equipment when testing the performance of VR projects.
158
+
159
+ When testing performance, the simulator gives you nothing, you have to build on the VR headset to know if the app runs well (P5-ED)
160
+
161
+ We had a project on Oculus Quest...because it is an Android app and has a big resolution, we needed to cut many features to accommodate the performance limitation. (P5-ED)
162
+
163
+ It is hard to check the performance issue without running it on headsets. (P6-ED)
164
+
165
+ A low frame rate, raised by six participants, can also result from low performance; it can cause motion sickness (C2), as discussed before, and introduces uncertainty into projects:
166
+
167
+ If the frame rate sucks, motion sickness would probably come. (P7-JD)
168
+
169
+ The different frame rates give different glitches all the time. (P6-ED)
170
+
171
+ ### 4.2 Software-related Challenges
172
+
173
+ Other than hardware-related challenges, developers often face a variety of software-related challenges that impact the testing phase of VR development. These challenges include a lack of testing information (C5), difficulty in finding and reproducing bugs (C6), lack of automated testing (C7), and inconvenient collaboration of VR testing (C8). From software developers' perspective, software-related challenges are relatively easier to mitigate. Addressing these software-related challenges is crucial for optimizing the VR development process and ensuring the creation of high-quality applications.
174
+
175
+ C5: Missing Testing Information. Developers might face difficulties due to a lack of adequate testing information. Many developers, especially junior developers, have indicated that tracking program changes, including variables and hardware usage, within VR environments poses a challenge, and incorporating debug information, such as logs, into VR applications also proves to be problematic.
176
+
177
+ Some of this can be really frustrating, because, it's sometimes very hard to see, like the States, like the internal state of the system, which you kind of need for debugging. (P2-JD)
178
+
179
+ Ideally, if there is some log, or debugging options to track those variables, that would be ideal. (P1-JD)
180
+
181
+ Furthermore, some more experienced participants suggested that VR development tools could even do more than just show the basic testing information. For example, for Unity developers, an integrated Unity inspector in the VR environment has been raised as a missing component for VR testing:
182
+
183
+ I would love to see in headset authoring. So be able to like, you know, put on the headset, and basically be able to see the scene hierarchy and have control over your inspector at least for some parts, you know, basically like an engine within the headset. (P12-VDTD)
184
+
185
+ C6: Difficulty in Finding/Reproducing bugs. Developers can face challenges when trying to identify or reproduce issues in VR applications. The vast 3D immersive environment can make pinpointing minute details or minor inconsistencies difficult:
186
+
187
+ Sometimes, I needed to go back frame by frame to check the bug I saw. (P1-JD)
188
+
189
+ It's tedious to reproduce bugs. It may not be actually challenging It's just um Yeah, you just need to take time to reproduce it. (P3-ED)
190
+
191
+ Additionally, recreating bugs can prove to be troublesome, as retracing and replicating the precise actions within VR settings can be a complex task compared to reproducing them in the simulator:
192
+
193
+ You might be testing your experience in the editor, even with a simulator, but you might not encounter the same issue as you were wearing the headset. (P12-VDTD)
194
+
195
+ C7: Lack of Automated Testing. Developers may face challenges due to the underdeveloped nature of automated testing support in VR. The scarcity of automated or unit tests in VR makes it difficult to minimize manual or repetitive testing tasks. In addition, automated tests are not easy to implement for VR applications by nature, as there is no existing tool on the market to map the inputs in VR (e.g., user log-in, button presses on the controller, and head movement) to the tests. This issue was reported by all three groups of participants:
196
+
197
+ It's not really easy to automate like testing with scripts. (P2- JD)
198
+
199
+ Even basic things like a 2d traditional UI, testing every button and every combination is a labor-intensive process there. I haven't seen a good way around it. (P8-ED)
200
+
201
+ I think there's certainly more that we could do as engineers in the industry to set up examples of how to apply the tools that exist today to do some automated tests. (P12-VDTD)
202
+
203
+ Manual tests are unavoidable in VR testing. However, some manual tests can be replaced with automated tests to reduce the manual workload and accelerate the development process:
204
+
205
+ An ideal version of testing includes, you know, as much automated testing as possible...And when you find something that is actually broken, if you can automate it, you automate it, and write the automation tests for it. And if you can't automate it, you have to actually work with QA to say, Okay, now how do we actually build a proper smoke test to actually go through and have a manual test for this? (P11-VDTD)
206
+
207
+ C8: Inconvenient Collaboration of VR Testing. Developers might confront obstacles when engaging in collaborative debugging or testing with their colleagues. The issue of collaboration has been explored by Krauß et al. [24], who identified three main challenges in collaborative development: (1) misconceptions about the medium, (2) lack of tool support, and (3) missing a common language and shared concepts. Our interviews confirmed and complemented their findings in two aspects of collaborative VR testing: (1) difficult remote debugging and (2) difficult headset sharing.
208
+
209
+ Remote testing and debugging within a development team remain a critical issue, especially with the adoption of remote work mode in recent years:
210
+
211
+ That was a pain by calling and telling the person to change this and that. In order to debug something, I need to the person: You the specific things to do, and you need to tell me the result either through screenshot or recording. (P1-JD)
212
+
213
+ When I was helping people debugging, I always cannot see what they saw. It could be ideal to have a mapping between what they do (e.g. click, move) to our aspect. (P4-ED)
214
+
215
+ Headset sharing could be an issue since not everyone in the development team has access to the limited number of VR headsets:
216
+
217
+ Some people working at home, and do not have VR headsets there. Then they don't have ways to test some VR-specific problems like the performance issues. (P6-ED)
218
+
219
+ Even with developers located in the same physical space, collaborative debugging between different developers can be challenging. One prominent issue is that sharing headsets between developers requires more labour, such as recalibration and communication, than developers assume:
220
+
221
+ A teammate head off the headset, then handed me a used VR headset and I put it on, I lost the calibration and was trapped in the box. (P1-JD)
222
+
223
+ Even though I told the other developers what they should do, they started doing other things than you thought. There is a high demand for communication here when you debug with someone else. (P4-ED)
224
+
225
+ ### 4.3 Comprehensive Challenges
226
+
227
+ Comprehensive challenges are those that go beyond hardware and software limitations. In our study, a lack of standards (C9) and few VR-specific testing support (C10) were raised. Overcoming these obstacles requires the collective efforts of the entire VR community, as they go beyond the capabilities of individual developers or organizations.
228
+
229
+ C9: Lack of Standards. Developers might grapple with difficulties stemming from an absence of standardized practices. Numerous developers have pointed out the lack of shared standards, such as varying APIs, among diverse VR development tools and software (e.g., Unity and Unreal) as well as hardware (like VR HMDs such as Oculus Quest and HTC Vive).
230
+
231
+ I need to use two totally different SDKs for Quest 2 and Vive development, which means I have to double my development work by learning and coding two things. (P6-ED)
232
+
233
+ I think a good driver for this, specifically around standards. I mean, I think of openXR, you know, that's a good industry-inclusive initiative that is trying to get behind alignment for standards, so that everyone follows similar patterns, etc. And they can deploy to as many devices as it is supported within. (P12-VDTD)
234
+
235
+ However, participant feedback from our study suggests that significant progress is still required before developers can fully benefit from the convenience, efficiency, and adaptability that standardization brings to VR development. Standards initiatives like OpenXR strive to address standardization challenges in VR development, which could ultimately benefit many developers:
236
+
237
+ I enjoy being able to work on OpenXR and the really unsexy open standards that are not going to sell front page, you know, news, but ultimately, is really going to benefit developers and the community at large by having open interoperable standards that we as an industry can use. (P12-VDTD)
238
+
239
+ C10: Few VR-specific Testing Support. Developers might face challenges due to the limited VR-specific testing support provided by the community and industry. This insufficient support takes multiple forms, such as a limited range of toolkits, tutorials, and documentation, or insufficient cooperation and integration among different tools and solutions:
240
+
241
+ I hope to see more samples from the community, and proper documentation, currently they are not straightforward. (P7-JD)
242
+
243
+ Looking for documentation sometimes is still very challenging. (P2-JD)
244
+
245
+ Some of the VR development frameworks I use do not have enough information about their technical details. Their internal logic is unknown and unchangeable and I feel like it's a black box. (P6-ED)
246
+
247
+ I'm thinking about the nature of like, VR departments still have a small population compared to trending fields like AI, it doesn't have large community support. (P8-ED)
248
+
249
+ However, one of the tool providers found that community support is getting better for VR development:
250
+
251
+ There's also the Unity learn portal where there are tutorials and, you know, for all levels, beginning, advanced and professional. So, I believe that there's enough documentation from unity and the tools and packages that we provide to the community that is pretty comprehensive. (P12-VDTD)
252
+
253
+ ## 5 SURVEY STUDY
254
+
255
+ To validate and enhance the reliability of our qualitative interview results, we carried out a survey targeting a wider group of VR developers. This approach aimed to triangulate and substantiate our identified challenges.
256
+
257
+ ### 5.1 Study Design
258
+
259
+ Our survey consisted of three parts. Part 1 contained demographic questions about respondents' years of experience in VR development, the VR development tools they utilized, and the VR headsets they employed. Part 2 asked respondents to assess each challenge identified in the interview study on a 7-point Likert scale: "Not at all important", "Low importance", "Slightly important", "Neutral", "Moderately important", "Very important", and "Extremely important". To ascertain the validity of the challenges discovered through the interviews, Part 3 asked respondents to indicate, out of the 10 challenges, the top three they considered most important and relevant to VR testing, as well as the top three they considered least important and irrelevant to VR testing.
260
+
261
+ ![01963e03-50ab-7075-86ee-e774ff2afc0d_6_153_150_713_356_0.jpg](images/01963e03-50ab-7075-86ee-e774ff2afc0d_6_153_150_713_356_0.jpg)
262
+
263
+ Figure 1: Respondents' ratings on the importance of each challenge on a 7-point Likert scale (1="not at all important"; 7="extremely important").
264
+
265
+ We recruited participants for our survey through Slack channels of several HCI research communities, VR-related industry communities, as well as a large IT company making VR development software. Additionally, we encouraged respondents to share the survey with other VR developers where feasible. No compensation was provided for completing the survey.
266
+
267
+ ### 5.2 Results
268
+
269
+ A total of 33 VR developers (23 males, 9 females, and 1 non-binary/third gender; aged 23 - 52) from various organizations participated in our survey, after excluding 4 invalid responses from respondents without enough VR experience. Among the valid respondents, 16 had 0-2 years of VR development experience, 10 had 3-5 years, and 7 had 6-10 years. All of the participants (33/33) utilized Unity as their VR development tool, while 6 had experience with Unreal, and 1 had used a custom engine for VR development. The most popular headset among respondents was Oculus Quest 1/2, which 26 out of 33 developers had used. Additionally, 19 respondents had experience with Oculus Rift(s) and 18 with HTC Vive. Furthermore, 13 developers had worked with Google Cardboard, and a few others (5 or fewer each) had used headsets such as Meta Quest Pro, Varjo XR3, and Valve Index.
270
+
271
+ Figure 1 summarizes the survey responses for the ratings of all identified challenges, including mean values and standard errors. It is evident that all identified challenges have a mean value greater than 4 (neutral), confirming the validity of the challenges. In particular, certain challenges (C1: Cumbersome VR Equipment, C5: Missing Testing Information, and C9: Lack of Standards) exhibit higher mean values, indicating that respondents are more concerned about these issues. C10: Few VR-specific Testing Support has the lowest mean value (4.15). A potential explanation could be the recent growth of the VR community driven by the popularity of the Metaverse, which has drawn more developers to the VR community and encouraged organizations and individuals to offer support to developers. C7: Lack of Automated Testing receives the second lowest average rating (4.21). This may be because manual testing cannot be avoided in VR development given the nature of the software, which requires human labour to check features such as graphics quality and running performance. However, it is also noticeable that 8 of the respondents rated this challenge as "very important" and 1 rated it as "extremely important", which means this challenge remains prominent among some VR developers.
272
+
273
+ Furthermore, we computed the proportion of each challenge selected as most important and relevant (Figure 2a) as well as least important and irrelevant (Figure 2b). From Figure 2a, we can see that every challenge received votes, with C1: Cumbersome VR Equipment (15.15%), C3: Difficult Equipment Set-up (13.13%), and C4: Performance Issues (13.13%) drawing the most concern from respondents. Notably, C1, C3, and C4 are all hardware-related challenges, which indicates that the VR development community still has substantial concerns about VR hardware and that there is considerable room for VR hardware to improve. From Figure 2b, we can see that C2: Motion Sickness (13.13%), C4: Performance Issues (13.13%), and C7: Lack of Automated Testing (13.13%) are the challenges that respondents worried about the least. Motion sickness, as mentioned by one of the experienced developers in the interviews, could be overcome over time as people develop VR applications and gradually get used to it: "I am generally immune to motion sickness after spending a lot of time in the VR industry." (P5-ED)
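+ For readers who wish to reproduce the descriptive statistics above, the computation is standard. The following Python sketch is illustrative only; the file names and column layout ("survey_ratings.csv", "survey_top3.csv", pick1-pick3) are hypothetical stand-ins for our survey export, not released artifacts:
+
+ ```python
+ import pandas as pd
+
+ challenges = [f"C{i}" for i in range(1, 11)]
+
+ # Hypothetical export of Part 2: one row per respondent, one column per
+ # challenge, holding the 7-point Likert rating (1-7).
+ ratings = pd.read_csv("survey_ratings.csv")
+ summary = pd.DataFrame({
+     "mean": ratings[challenges].mean(),
+     "sem": ratings[challenges].sem(),  # error bars as in Figure 1
+ })
+ print(summary.round(2))
+
+ # Hypothetical export of Part 3: three "most important" picks per respondent.
+ top3 = pd.read_csv("survey_top3.csv")  # columns: pick1, pick2, pick3
+ votes = top3[["pick1", "pick2", "pick3"]].stack()
+ share = votes.value_counts(normalize=True) * 100  # e.g., 15/99 = 15.15%
+ print(share.reindex(challenges, fill_value=0.0).round(2))
+ ```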
274
+
275
+ In summary, all the challenges exhibit reasonable ratings without any outliers, further substantiating the validity of our findings from the interview study.
276
+
277
+ ## 6 DISCUSSION
278
+
279
+ In this section, we discuss some future opportunities derived from our interviews with professional VR developers as well as several limitations of our study.
280
+
281
+ ### 6.1 Future Opportunities
282
+
283
+ During the open discussion stage of the interviews, we asked our participants about their ideal VR testing tools or VR testing features. From the discussion, we have identified several promising avenues that may help with the design of future VR testing tools for both academic and industrial settings.
284
+
285
+ Improving hardware design for convenient VR testing. Hardware-related issues, as pointed out in both the interviews and the survey, remain prominent in VR development communities. Developers suggested several features that could potentially mitigate these issues in the future. For example, a quick flip-on-and-off feature would help people switch faster between the VR environment and the computer monitor during testing:
286
+
287
+ I wish there will be a VR headset that I could just wear, flip it off when I need to take a look at my monitor; and flip it back when I need to go back to VR. (P14-VDTD)
288
+
289
+ What I also really like, is the HoloLens 2 has the visor that can, you know, flip down and flip up... My dream headset, combines this feature in it. (P13-VDTD)
290
+
291
+ Enabling headset-based authoring and testing in VR. Headset-based authoring and testing in VR environments came up frequently during the discussions. First, participants wanted a dedicated debugging mode in VR with the flexibility to make some code changes directly in VR, without going back to the keyboard (a minimal sketch of this idea follows the quotes below):
292
+
293
+ Some functionalities like pre-setting some variables that you will be able to tweak in VR, for example, the width or the height of an object in VR, could save a lot of time of going back and forth. (P3-ED)
294
+
295
+ I would love to see in headset authoring. So be able to like, put on the headset, and basically have control over your scene hierarchy and inspector. Basically like an engine within the headset, where you're able to edit parameters, move things around (P12-VDTD)
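+ As a thought experiment on the "pre-set tweakable variables" wish above, a small parameter registry could be exposed to an in-headset debug panel that enumerates the registered parameters and renders sliders for them. The Python sketch below is our own hypothetical construction (class and method names included), not an existing engine API; a real implementation would live in the engine's scripting layer:
+
+ ```python
+ class TweakableParams:
+     """Registry of live-tweakable values for an in-VR debug panel."""
+
+     def __init__(self):
+         self._params = {}     # name -> (value, lo, hi)
+         self._listeners = []  # callbacks fired on every change
+
+     def register(self, name, value, lo, hi):
+         self._params[name] = (value, lo, hi)
+
+     def set(self, name, value):
+         _, lo, hi = self._params[name]
+         value = max(lo, min(hi, value))  # clamp to the allowed range
+         self._params[name] = (value, lo, hi)
+         for callback in self._listeners:
+             callback(name, value)        # e.g., resize the 3D object live
+
+     def get(self, name):
+         return self._params[name][0]
+
+     def on_change(self, callback):
+         self._listeners.append(callback)
+
+ # Usage: register once in code, then tweak from inside the headset.
+ params = TweakableParams()
+ params.register("door.width", 1.0, 0.5, 3.0)
+ params.on_change(lambda name, value: print(f"{name} -> {value}"))
+ params.set("door.width", 1.4)  # no trip back to the keyboard needed
+ ```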
296
+
297
+ In addition, participants hoped to see the debugging mode in VR with better testing information visualization:
298
+
299
+ ![01963e03-50ab-7075-86ee-e774ff2afc0d_7_158_151_1477_389_0.jpg](images/01963e03-50ab-7075-86ee-e774ff2afc0d_7_158_151_1477_389_0.jpg)
300
+
301
+ Figure 2: Distribution of the three MOST (a) and LEAST (b) important challenges chosen by respondents.
302
+
303
+ There will be some windows inside the VR to help you see the performance and variables. (P6-ED)
304
+
305
+ Being able to see different view points, for example, switching between different cameras will help me debug some complicated scenes easily. (P3-ED)
306
+
307
+ Meta has even some really cool features where you can have pass-through only in some areas. Like, you know, the keyboard tracking features are pretty cool, right? I can see my real-world keyboard so I know where to put my fingers when I am testing in VR. (P12-VDTD)
308
+
309
+ Generally, participants wanted to have a smooth combination between editing and testing in VR development. One of the tool developers commented:
310
+
311
+ Finally, there should be some unification of editing and testing so that people don't feel a separation in the whole VR development process. (P14-VDTD)
312
+
313
+ Designing collaborative tools for VR testing. More convenient collaboration in VR development could improve the productivity of the whole VR development team. Communication and collaboration in VR testing can be enhanced with more dedicated VR collaborative tools:
314
+
315
+ I do agree that the sort of real-time over a Zoom meeting, and maybe the answer is that Zoom is ultimately not going to be the best platform for real-time VR-related calls. But I think there is sort of a gap there in terms of like, Could we have a zoom alternative in VR where we can load the experience together between developers? (P13-VDTD)
316
+
317
+ Creating automated testing frameworks for VR. The challenge of automated testing could be mitigated with specialized software frameworks or libraries offering functionalities such as button mapping, test video generation, and breaking, playing back, and playing forward testing timelines; a minimal sketch of such a record-and-replay harness follows the quotes below.
318
+
319
+ It is possible to develop some kind of testing framework, in which you can map some actions from the VR controller to the code to help developers automate some tests. (P9-ED)
320
+
321
+ It is ideal to be able to see the video of the tests and be able to highlight something or catch the model change. (P13-VDTD)
322
+
323
+ It would be helpful to have some systems in place to automate tests and developers could playback or play forward some of the tests in the simulator instead of putting on the VR headset. (P14-VDTD)
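+ To make these suggestions concrete, the sketch below shows one possible shape for such a record-and-replay harness. It is purely illustrative and rests on our own assumptions: the event schema and the app.handle_input/app.snapshot hooks are hypothetical, not part of any existing VR engine or testing API:
+
+ ```python
+ import json
+ from dataclasses import dataclass, asdict
+
+ @dataclass
+ class InputEvent:
+     t: float      # seconds since recording started
+     device: str   # e.g., "left_controller", "hmd"
+     action: str   # e.g., "trigger_press", "head_pose"
+     value: list   # button state or pose data
+
+ class InputTimeline:
+     """Record controller/HMD events once, then replay them headlessly."""
+
+     def __init__(self, events=None):
+         self.events = list(events or [])
+
+     def record(self, event):
+         self.events.append(event)
+
+     def save(self, path):
+         with open(path, "w") as f:
+             json.dump([asdict(e) for e in self.events], f)
+
+     @classmethod
+     def load(cls, path):
+         with open(path) as f:
+             return cls(InputEvent(**e) for e in json.load(f))
+
+     def replay(self, app, start=0.0, stop=None):
+         """Play the slice [start, stop) forward against the app under
+         test, without anyone wearing a headset."""
+         for e in sorted(self.events, key=lambda e: e.t):
+             if e.t >= start and (stop is None or e.t < stop):
+                 app.handle_input(e)
+         return app.snapshot()  # scene state for assertions
+
+ # Usage in CI (the app object is hypothetical):
+ #   state = InputTimeline.load("grab_cube.json").replay(app, stop=2.5)
+ #   assert state["cube.held"]
+ ```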
324
+
325
+ From the hardware perspective, one tool developer also pointed out that other tools like robotic arms could help with automated tests:
326
+
327
+ You can have some additional robotic testing, where you can actually say, Okay, what happens when I turn my head this way, what happens when I turn my head this way inside of the headset? (P11-VDTD)
328
+
329
+ Lowering the barrier for junior developers and non-tech users. On the social level, the testing difficulties confronted by junior developers could be eased by making VR development more accessible, smoothing some steps in the process, and integrating the learning experience into VR development tools such as the Unity engine:
330
+
331
+ Most of these headsets are making the VR development not very accessible for a lot of people, whether they have vision disabilities or mobility disabilities, you know, they're very able-bodied devices so far. So I hope that as we move forward, with new devices, and new technologies, all of those things begin to bubble up towards the front and become banners that we carry, and we really, really push forward. (P12-VDTD)
332
+
333
+ Setting up the initial building blocks for a starting project could allow someone without coding knowledge to quickly set up a scene. (P12-VDTD)
334
+
335
+ But also try to make all of the learning happen within the engine so that we can hopefully cut the dependency between a browser-based experience like the learning resource and have that integrated into one of the sample builds. (P12-VDTD)
336
+
337
+ Reducing the barriers to entry for VR development could not only draw more developers into the VR community but also significantly contribute to the overall growth and flourishing of the industry.
338
+
339
+ ### 6.2 Limitations
340
+
341
+ There still exist limitations in our study. First, as shown in Table 1, a majority of our participants in the interview study use Unity as their primary VR development tool. This might have biased the challenges we identified, as different development tools may have very different features. Thus, a more diverse sample of interviewees could be recruited to further enhance and validate our results. Second, while our survey study results have confirmed and substantiated the challenges derived from our interview study, more participants could be recruited to better support our insights. However, we do recognize the difficulty of recruiting VR developers, rather than ordinary VR users, as respondents. Therefore, future studies can be carried out to re-confirm or extend the set of challenges. Third, we primarily employed a qualitative approach to identify the key challenges for VR testing; while we used a quantitative method in the survey study to verify the interview results, an even more quantitative approach might be valuable. For example, logging mechanisms could be implemented in VR development tools to examine the behaviors of VR developers during testing, thus complementing our qualitative results. In deriving the future directions for VR testing tools, prototypes could be built so that VR developers can actually try different features and then provide feedback. In summary, several future efforts should be made to continue this line of research, and our study has opened doors to a wide range of development and investigation opportunities for VR testing.
342
+
343
+ ## 7 CONCLUSION
344
+
345
+ In this paper, we aimed to fill the gap in the literature by exploring the challenges, needs, and future opportunities of software testing in VR development. During the interviews, participants expressed various difficulties encountered during the VR development testing phase, which we consolidated into a list of challenges. Moreover, we explored future directions for VR testing and presented outcomes that may shed light on future research and technology development. We confirmed the challenges through a survey, analyzing the ratings given for each challenge. Our findings highlight multiple design opportunities for both academic and industrial stakeholders to alleviate these VR testing challenges. By addressing these issues and building on our results, we believe that future development can enhance the utility, productivity, and overall development experience for VR developers during the testing phase.
346
+
347
+ ## REFERENCES
348
+
349
+ [1] I. A. Al Hafidz, S. Sukaridhoto, M. U. H. Al Rasyid, R. P. N. Budiarti, R. R. Mardhotillah, R. Amalia, E. D. Fajrianti, and N. A. Satrio. Design of collaborative webxr for medical learning platform. In 2021 International Electronics Symposium (IES), pp. 499-504. IEEE, 2021.
352
+
353
+ [2] N. Ashtari, A. Bunt, J. McGrenere, M. Nebeling, and P. K. Chilana. Creating Augmented and Virtual Reality Applications: Current Practices, Challenges, and Opportunities. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-13. ACM, Honolulu HI USA, Apr. 2020. doi: 10.1145/3313831.3376722
354
+
355
+ [3] N. Baghaei, L. Stemmet, A. Hlasnik, K. Emanov, S. Hach, J. A. Naslund, M. Billinghurst, I. Khaliq, and H.-N. Liang. Time to get personal: Individualised virtual reality for mental health. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI EA '20, p. 1-9. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3334480.3382932
356
+
357
+ [4] A. Bertolino. Software Testing Research: Achievements, Challenges, Dreams. In Future of Software Engineering (FOSE '07), pp. 85-103, May 2007. doi: 10.1109/FOSE.2007.25
358
+
359
+ [5] R. Blonna, M. S. Tan, V. Tan, A. P. Mora, and R. Atienza. Vrex: A framework for immersive virtual reality experiences. In 2018 IEEE Region Ten Symposium (Tensymp), pp. 118-123, 2018. doi: 10.1109/TENCONSpring.2018.8692018
360
+
361
+ [6] M. Bock and A. Schreiber. Visualization of neural networks in virtual reality using unreal engine. In Proceedings of the 24th ACM symposium on virtual reality software and technology, pp. 1-2, 2018.
362
+
363
+ [7] Y. Cai, R. Chiew, Z. T. Nay, C. Indhumathi, and L. Huang. Design and development of VR learning environments for children with ASD. Interactive Learning Environments, 25(8):1098-1109, Nov. 2017. doi: 10.1080/10494820.2017.1282877
364
+
365
+ [8] F. Cassola, M. Pinto, D. Mendes, L. Morgado, A. Coelho, and H. Paredes. A novel tool for immersive authoring of experiential learning in virtual reality. In 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 44-49, 2021. doi: 10.1109/VRW52623.2021.00014
366
+
367
+ [9] D. L. Chen, R. Balakrishnan, and T. Grossman. Disambiguation techniques for freehand object manipulations in virtual reality. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 285-292, 2020. doi: 10.1109/VR46266.2020.00048
368
+
369
+ [10] X. Chen, M. Wang, and Q. Wu. Research and development of virtual reality game based on unreal engine 4. In 2017 4th International Conference on Systems and Informatics (ICSAI), pp. 1388-1393. IEEE, 2017.
370
+
371
+ [11] H. Coelho, P. Monteiro, G. Gonçalves, M. Melo, and M. Bessa. Authoring tools for virtual reality experiences: a systematic review. Multimedia Tools and Applications, 81(19):28037-28060, Aug. 2022. doi: 10.1007/s11042-022-12829-9
372
+
373
+ [12] J. D. O. De Leon, R. P. Tavas, R. A. Aranzanso, and R. O. Atienza. Genesys: A virtual reality scene builder. In 2016 IEEE Region 10 Conference (TENCON), pp. 3708-3711, 2016. doi: 10.1109/TENCON.2016.7848751
376
+
377
+ [13] B. Ens, B. Bach, M. Cordeil, U. Engelke, M. Serrano, W. Willett, A. Prouzeau, C. Anthes, W. Büschel, C. Dunne, T. Dwyer, J. Grubert, J. H. Haga, N. Kirshenbaum, D. Kobayashi, T. Lin, M. Olaosebikan, F. Pointecker, D. Saffo, N. Saquib, D. Schmalstieg, D. A. Szafir, M. Whitlock, and Y. Yang. Grand Challenges in Immersive Analytics. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1-17. ACM, Yokohama, Japan, May 2021. doi: 10.1145/3411764.3446866
378
+
379
+ [14] G. D. Everett and R. McLeod Jr. Software Testing: Testing Across the Entire Software Development Life Cycle. Wiley, 2007.
380
+
381
+ [15] W. Gai, C. Yang, Y. Bian, C. Shen, X. Meng, L. Wang, J. Liu, M. Dong, C. Niu, and C. Lin. Supporting easy physical-to-virtual creation of mobile vr maze games: A new genre. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, p. 5016-5028. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3025453.3025494
382
+
383
+ [16] M. Gandy and B. MacIntyre. Designer's augmented reality toolkit, ten years later: Implications for new media authoring tools. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, UIST '14, p. 627-636. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2642918.2647369
384
+
385
+ [17] N. Ghrairi, S. Kpodjedo, A. Barrak, F. Petrillo, and F. Khomh. The state of practice on virtual reality (vr) applications: An exploratory study on github and stack overflow. In 2018 IEEE International Conference on Software Quality, Reliability and Security (QRS), pp. 356-366, 2018. doi: 10.1109/QRS.2018.00048
386
+
387
+ [18] X. Guo and I. Mogra. Using web 3d and webxr game to enhance engagement in primary school learning. In 2022 IEEE International Symposium on Multimedia (ISM), pp. 181-184. IEEE, 2022.
388
+
389
+ [19] H. Hadjar, P. McKevitt, and M. Hemmje. Home-based immersive web rehabilitation gaming with audiovisual sensors. In Proceedings of the 33rd European Conference on Cognitive Ergonomics, pp. 1-7, 2022.
390
+
391
+ [20] R. Henrikson, T. Grossman, S. Trowbridge, D. Wigdor, and H. Benko. Head-coupled kinematic template matching: A prediction model for ray pointing in vr. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, p. 1-14. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3313831.3376489
392
+
393
+ [21] A. Karakottas, N. Zioulis, A. Doumanoglou, V. Sterzentsenko, V. Gkitsas, D. Zarpalas, and P. Daras. Xr360: A toolkit for mixed 360 and 3d productions. In 2020 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), pp. 1-6, 2020. doi: 10.1109/ICMEW46912.2020.9105984
394
+
395
+ [22] H. K. Kim, J. Park, Y. Choi, and M. Choe. Virtual reality sickness questionnaire (vrsq): Motion sickness measurement index in a virtual reality environment. Applied ergonomics, 69:66-73, 2018.
396
+
397
+ [23] R. Konrad, D. G. Dansereau, A. Masood, and G. Wetzstein. SpinVR: towards live-streaming 3D virtual reality video. ACM Transactions on Graphics, 36(6):209:1-209:12, Nov. 2017. doi: 10.1145/3130800.3130836
398
+
399
+ [24] V. Krauß, A. Boden, L. Oppermann, and R. Reiners. Current practices, challenges, and design implications for collaborative ar/vr application development. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI '21. Association for Computing Machinery, New York, NY, USA, 2021. doi: 10.1145/3411764.3445335
400
+
401
+ [25] B. MacIntyre and T. F. Smith. Thoughts on the future of webxr and the immersive web. In 2018 IEEE international symposium on mixed and augmented reality adjunct (ISMAR-Adjunct), pp. 338-342. IEEE, 2018.
402
+
403
+ [26] M. Matsangidou, A. P. Kassianos, D. Papaioannou, T. Solomou, M. Krini, M. Karekla, and C. S. Pattichis. Virtual painkillers: Designing accessible virtual reality experiences for helping cancer patients manage pain at home. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, CHI EA '22. Association for Computing Machinery, New York, NY, USA, 2022. doi: 10.1145/3491101.3503562
406
+
407
+ [27] A. M. Memon and M. B. Cohen. Automated testing of gui applications: models, tools, and controlling flakiness. In 2013 35th International Conference on Software Engineering (ICSE), pp. 1479-1480. IEEE, 2013.
410
+
411
+ [28] D. Navarre, P. Palanque, R. Bastide, A. Schyn, M. Winckler, L. P. Nedel, and C. M. Freitas. A formal description of multimodal interaction techniques for immersive virtual reality applications. In Human-Computer Interaction-INTERACT 2005: IFIP TC13 International Conference, Rome, Italy, September 12-16, 2005. Proceedings 10, pp. 170-183. Springer, 2005.
412
+
413
+ [29] M. Nebeling, S. Rajaram, L. Wu, Y. Cheng, and J. Herskovitz. XRStudio: A Virtual Production and Live Streaming System for Immersive Instructional Experiences. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, number 107, pp. 1-12. Association for Computing Machinery, New York, NY, USA, May 2021.
414
+
415
+ [30] M. Nebeling and M. Speicher. The trouble with augmented reality/virtual reality authoring tools. In 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), pp. 333-337, 2018. doi: 10.1109/ISMAR-Adjunct.2018.00098
416
+
417
+ [31] F. C. Rodríguez, M. Dal Peraro, and L. A. Abriata. Democratizing interactive, immersive experiences for science education with webxr. Nature Computational Science, 1(10):631-632, 2021.
418
+
419
+ [32] S. S. A Study of Software Development Life Cycle Process Models, June 2017. doi: 10.2139/ssrn.2988291
420
+
421
+ [33] M. Sánchez-Gordón, L. Rijal, and R. Colomo-Palacios. Beyond technical skills in software testing: Automated versus manual testing. In Proceedings of the IEEE/ACM 42nd International Conference on Software Engineering Workshops, pp. 161-164, 2020.
422
+
423
+ [34] R. Sharma. Quantitative analysis of automation and manual testing. International journal of engineering and innovative technology, 4(1), 2014.
424
+
425
+ [35] L. Sidenmark, C. Clarke, X. Zhang, J. Phu, and H. Gellersen. Outline pursuits: Gaze-assisted selection of occluded objects in virtual reality. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20, p. 1-13. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3313831.3376438
426
+
427
+ [36] L. Sidenmark, D. Potts, B. Bapisch, and H. Gellersen. Radi-eye: Hands-free radial interfaces for 3D interaction using gaze-activated head-crossing. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI '21. Association for Computing Machinery, New York, NY, USA, 2021. doi: 10.1145/3411764.3445697
428
+
429
+ [37] A. Šmíd. Comparison of unity and unreal engine. Czech Technical University in Prague, pp. 41-61, 2017.
430
+
431
+ [38] K. Sneha and G. M. Malle. Research on software testing techniques and software automation testing tools. In 2017 International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS), pp. 77-81, 2017. doi: 10.1109/ICECDS.2017.8389562
432
+
433
+ [39] O. Taipale, J. Kasurinen, K. Karhu, and K. Smolander. Trade-off between automated and manual software testing. International Journal of System Assurance Engineering and Management, 2:114-125, 2011.
434
+
435
+ [40] B. Thoravi Kumaravel, F. Anderson, G. Fitzmaurice, B. Hartmann, and T. Grossman. Loki: Facilitating Remote Instruction of Physical Tasks Using Bi-Directional Mixed-Reality Telepresence. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, UIST '19, pp. 161-174. Association for Computing Machinery, New York, NY, USA, Oct. 2019. doi: 10.1145/3332165.3347872
436
+
437
+ [41] B. Thoravi Kumaravel, C. Nguyen, S. DiVerdi, and B. Hartmann. TutoriVR: A Video-Based Tutorial System for Design Applications in Virtual Reality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-12. Association for Computing Machinery, New York, NY, USA, May 2019.
438
+
439
+ [42] M. A. Umar and C. Zhanfang. A study of automated software testing: Automation tools and frameworks. International Journal of Computer Science Engineering (IJCSE), 6:217-225, 2019.
440
+
441
+ [43] L. Zhang and S. Oney. Flowmatic: An immersive authoring tool for creating interactive scenes in virtual reality. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, UIST '20, p. 342-353. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3379337.3415824
444
+
445
+ [44] M. Zubair and N. Anyameluhor. How long do you want to maintain this thing? understanding the challenges faced by webxr creators. In The 26th International Conference on 3D Web Technology, Web3D '21. Association for Computing Machinery, New York, NY, USA, 2021. doi: 10.1145/3485444.3495181
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/akc8f5ampp/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,440 @@
1
+ § CHALLENGES AND OPPORTUNITIES FOR SOFTWARE TESTING IN VIRTUAL REALITY APPLICATION DEVELOPMENT
2
+
3
+ Category: Submitted to GI'23
4
+
5
+ § ABSTRACT
6
+
7
+ Testing is a core process for the development of Virtual Reality (VR) software, which could ensure the delivery of high-quality VR products and experiences. As VR applications have become more popular in different fields, more challenges and difficulties have been raised during the testing phase. However, few studies have explored the challenges of software testing in VR development in detail. This paper aims to fill in the gap through a qualitative interview study with 14 professional VR developers and a survey study with 33 additional participants. As a result, we derived 10 key challenges that are often confronted by VR developers during software testing. Our study also sheds light on potential design directions for VR development tools based on the identified challenges and needs of the VR developers to alleviate existing issues in testing.
8
+
9
+ § 1 INTRODUCTION
10
+
11
+ Debugging or testing is one of the critical steps in software development [32]. The creation of Virtual Reality (VR) applications shares a similar process to traditional software development and heavily relies on testing to ensure the quality of the final deliverables. However, VR application testing is more challenging and complex [2, 30] due to its inherent nature of relying on multiple devices and platforms including headsets and desktops. For instance, developers need to put on and take off their VR head-mounted display (HMD) quite frequently during the testing stage, which not only is time-consuming but also causes motion sickness [22]. In addition, developers do not always have access to VR HMDs to realistically evaluate the quality of their creations.
12
+
13
+ While the human-computer interaction (HCI) community is increasingly focused on researching immersive technologies such as VR and augmented reality (AR), there is still a lack of thorough studies exploring the challenges as well as opportunities for software testing in VR development. Various studies have explored AR/VR applications (e.g., [23, 29, 40]), interaction techniques (e.g., [20, 28, 35, 36]), and authoring tools (e.g., [5, 9, 21]). Some have provided insights into the challenges and opportunities of AR/VR from the development perspective to better understand the needs of professional AR/VR developers [2, 13, 16, 24, 30]. However, little attention has been paid to the challenges of the testing phase in VR development, despite developers having confronted numerous difficulties as introduced above.
14
+
15
+ In this research, we aimed to fill in the gap by exploring the challenges and needs of VR developers during their testing phase and identifying promising directions to overcome or resolve these challenges. We first conducted a comprehensive interview study with 14 professional VR developers (11 from industry and 3 from academia) who have diverse backgrounds and different levels of experience and skill sets in VR development. We then performed a thematic analysis of the interview data and identified 10 key challenges for software testing in VR development. Our results confirmed that VR developers face significant challenges during the testing phase of VR development. Despite employing workarounds, our participants found them to be ad-hoc, requiring manual intervention, and prone to errors. We organized the key challenges into three distinct categories (see Table 2): hardware-related challenges (C1-4), software-related challenges (C5-8), and comprehensive challenges (C9-10).
16
+
17
+ To verify the identified challenges with a broader audience, we further conducted a confirmation survey with 33 VR developers by distributing the survey to various related Slack channels. In the survey, we asked participants to rank the importance of the 10 challenges on a 7-point Likert scale as well as select the most and least important ones. From the survey results, all the identified challenges exhibit reasonable ratings without any outliers, substantiating the validity of our findings.
18
+
19
+ Additionally, we discuss the future opportunities for testing VR applications based on the identified challenges, which can provide guidance to VR developer tool makers and researchers for enhancing the current functionalities of VR development tools and introducing new features. Our study extends the findings of Ashtari et al. [2], Nebeling and Speicher [30], and Krauß et al. [24], focusing specifically on the testing phase of VR development. In summary, our contributions in this paper include:
20
+
21
+ * Empirical interview and survey studies that examined and validated key challenges in the testing phase of VR development;
22
+
23
+ * A set of 10 identified key challenges in VR testing, together with corresponding future design directions.
24
+
25
+ § 2 RELATED WORK
26
+
27
+ Our research is related to the existing techniques of VR applications and authoring tools, practices in VR development, as well as studies on VR testing and general software testing.
28
+
29
+ § 2.1 VR SYSTEMS AND APPLICATIONS
30
+
31
+ Virtual Reality (VR) has been widely researched and applied to different fields, such as education, healthcare, science, and entertainment, to serve people's needs in real life. Education and learning are among the most popular fields for these studies. XRStudio [29] creates a VR lecturing system that enables instructors to live stream their lectures in VR to students, who can join the lectures in VR or watch them on 2D displays and in AR. Loki [40], a mixed-reality system, enables learners to view, in VR or AR, live-streamed tutorials generated by remote instructors. TutoriVR [41] integrates streaming video tutorials with 3D and contextual aids in VR to facilitate the learning and creation process for VR users. Additionally, VR gradually plays a more important role in healthcare. Virtual experiences have been studied as a way to create comfortable, enriching pain management experiences [26]. iVR [3] has been proposed to improve users' self-compassion and, in the long term, their positive mental health. Cai et al. [7] have also made efforts to address ASD among children in VR learning environments.
32
+
33
+ With the recent explosion in VR systems and applications in different fields, it is important to understand the needs of VR developers during the development process, which has motivated our study. This could provide them with a better VR development experience and attract more developers into VR application development.
34
+
35
+ § 2.2 DEVELOPMENT AND AUTHORING TOOLS FOR VR
36
+
37
+ Development tools for VR assist creators with varying expertise levels in producing VR software. VR development tools encompass 3D game engines (e.g., Unity, Unreal, Godot) and development toolkits/frameworks (e.g., MRTK, A-Frame). These tools empower VR developers and researchers to create versatile VR applications.
38
+
39
+ Based on these tools, several research studies [5, 8, 12, 15, 21, 43] have proposed VR authoring tools to satisfy the customized needs of end-users. Some systems (e.g., VREX [5], Xr360 [21], Genesys [12]) are designed to lower the threshold of VR development and speed up the process. Other studies help users meet specialized needs such as creating interactive scenes [43], making experiential learning courses [8], and authoring VR games for physical spaces [15].
40
+
41
+ An existing study [11] shows that VR authoring tools could help facilitate the creation of different VR features. Despite the availability of the existing authoring tools, there has been a limited number of tools created and researched to ease the VR testing practices for developers. Our study provides empirical insights into the testing of VR development, highlighting the challenges faced by VR developers with some future directions for the creation of new tools that could facilitate VR testing.
42
+
43
+ § 2.3 VR DEVELOPMENT PRACTICES
44
+
45
+ To better understand the challenges and needs in the testing phase of VR development, our study aims to investigate the current development practices of VR developers. Unity and Unreal engines are the common development tools used by VR developers. An exploratory study by Ghrairi et al. [17] discovered that the majority of VR projects on GitHub are currently small to medium-sized, with JavaScript (used for web) and C# (used in Unity) being the most popular programming languages. Unity has emerged as the preferred game engine for VR development and is the most frequently discussed topic on Stack Overflow. In addition to Unity, Unreal Engine is also utilized by VR developers and researchers for creating VR content [6, 10]. Unity is preferred for its ease of use, asset store, and support for various platforms, while Unreal Engine is favoured for its advanced graphics capabilities and visual scripting [37]. Numerous customized tools, such as Unity XR Interaction Toolkit and MRTK, have been developed to streamline and support the VR development process in Unity or Unreal Engine.
46
+
47
+ On the other hand, the field of web-based VR development, particularly through the implementation of WebXR, has experienced significant growth in recent years. WebXR, an API that enables the creation and integration of immersive experiences directly within web browsers, has emerged as a popular alternative for the industry. By allowing developers to create platform-agnostic VR experiences [25], WebXR fosters accessibility and reduces the need for specialized hardware or software. As a result, researchers are increasingly exploring the potential of web-based VR development for a wide range of applications, such as education [18, 29, 31], healthcare [1], and entertainment [19]. The growing interest in WebXR also highlights the importance of developing new tools, frameworks, and best practices to support the unique challenges and opportunities associated with web-based VR development [44].
48
+
49
+ Cross-platform VR development has also been promoted in recent years by organizations like the Khronos Group and its open standard OpenXR. However, significant challenges still remain for developers. They face a multitude of issues when working with different VR hardware, software, and application programming interfaces (APIs). Heterogeneous specifications, input mechanisms, and performance capabilities can lead to compatibility and optimization difficulties, requiring developers to adapt their applications to each unique platform. Additionally, the varying degrees of support for industry standards and the rapid evolution of VR technology further complicate cross-platform development. VR development practices have made considerable advancements, thanks to the widespread adoption of game engines like Unity and Unreal, the emergence of web-based VR development through WebXR, and the push for cross-platform development by organizations like the Khronos Group. These improvements have resulted in more efficient development processes and the creation of customized tools and frameworks to assist developers. However, challenges persist in the realm of VR development, including compatibility and optimization issues across diverse hardware, software, and APIs, as well as the rapid evolution of VR technology and varying support for industry standards [2, 13, 24]. To ensure the continued growth and success of the VR industry, it is essential to address these challenges, foster collaboration among developers and researchers, and continue exploring new methods and best practices to improve the VR development process and enhance the user experience; our study contributes by deepening the understanding of the specific challenges in the testing phase of the VR development process.
50
+
51
+ § 2.4 TESTING PRACTICES IN SOFTWARE DEVELOPMENT
52
+
53
+ Software testing, which is an essential step in the whole software development workflow [14], has been researched to ensure the quality of software delivery [4, 38, 42]. While testing in VR software development has not been studied comprehensively, general software testing can still inform the directions of VR software testing in different ways.
54
+
55
+ Automation testing, as one of the most important parts of software testing, has been implemented and applied to the software industry broadly. Automation testing has significantly impacted the testing process, with many software tests now being performed using automation tools [38]. These tools reduce the number of people involved and the likelihood of human errors. Automation testing involves test cases that simplify the process of capturing different scenarios and storing them. For example, automation tests have been explored to reduce the errors in software GUI [27].
56
+
57
+ Manual testing is also an important part of software testing and has been researched in comparison with automation testing [33, 34, 39]. Manual testing always involves human efforts from testing teams such as Quality Assurance testers and Software Developers who are responsible for creating and running tests. Manual testing is a time-consuming process that demands specific qualities in a tester, such as patience, observance, creativity, open-mindedness, and skill [34]. When applied to large software applications or those with extensive datasets, repetitive manual testing can become challenging to execute effectively. This limitation underscores the need for alternative methods, such as automation testing, to improve efficiency and accuracy in software testing processes. However, due to the nature of VR applications, manual testing is still inevitable as VR software relies on human work to ensure the quality of the products such as the visual presentation of the contents and graphics performance in the VR headsets.
58
+
59
+ While the above studies have explored the needs in general software testing and have proposed tools to address the issues, VR testing can be particularly challenging because of its unique development environment with the HMDs. Different software testing techniques might be customized and applied to VR testing to ensure the delivery of VR software; however, no studies have adequately investigated scenarios of VR testing. Thus, our research specifically aims to get insights into the challenges and opportunities in the testing phase of VR development.
60
+
61
+ § 3 INTERVIEW STUDY
62
+
63
+ To investigate the current practices and challenges in VR application testing faced by developers in-depth, we employed a qualitative approach by conducting semi-structured interviews with VR developers with diverse backgrounds. In this section, we describe the setup of the interview study and report the results in the next section.
64
+
65
+ § 3.1 PARTICIPANTS
66
+
67
+ In order to gain a comprehensive understanding of VR development testing practices, we sought out participants with experience in the field, from both academia and industry. We reached out to local HCI research groups as well as VR-related software companies. Our goal was to create a diverse cohort of participants, with varying backgrounds and project experience. We ultimately recruited 14 participants (11 males, 2 females, and 1 non-binary/third gender; aged 19 - 54), including user experience designers, gaming enthusiasts, and academic researchers, as detailed in Table 1. Their experience ranged from 0-2 years to 10+ years, and the cohort covered a variety of popular VR hardware (HMD) on the market, including Oculus Quest 1/2, Oculus Rift, HTC Vive, Meta Quest Pro, etc. In addition, our participants use various VR development software (e.g., Unity, Unreal, and Godot) for their work. Based on their experiences and roles in VR development, we grouped them as junior developers (JD), experienced developers (ED), and VR development tools developers (VDTD). The diversity in these aspects could provide valuable insights into the testing phase of VR development from different perspectives.
68
+
69
+ Table 1: Participants recruited in our interview study.
70
+
71
+ ID | Role | Experience | Software Used | Hardware (HMD) Used
+
+ Junior Developers (JD)
+ P1 | Software Developer | 0-2 years | Godot | Oculus Quest 1/2
+ P2 | Student Researcher | 0-2 years | Unity | Oculus Quest 1/2
+ P7 | Software Developer | 0-2 years | Unity | Oculus Quest 1/2, Oculus Rift, HTC Vive, Google Cardboard
+ P10 | Product Designer | 0-2 years | Unity | Oculus Quest 1/2
+
+ Experienced Developers (ED)
+ P3 | Student Researcher | 6-10 years | Unity | Oculus Rift, HTC Vive
+ P4 | Architectural Designer | 3-5 years | Unity, Unreal | Oculus Quest 1/2, Oculus Rift
+ P5 | Software Developer | 3-5 years | Unity | Oculus Quest 1/2, Oculus Rift, HP Reverb, Meta Quest Pro
+ P6 | Software Developer | 3-5 years | Unity, Unreal | Oculus Quest 1/2, Oculus Rift, HTC Vive, Varjo VR1/2/3
+ P8 | Software Developer | 10+ years | Unity | Oculus Quest 1/2, Oculus Rift, HTC Vive, Google Cardboard, Valve Index, HP Reverb, Pico, Focus 3, and other Windows enterprise headsets
+ P9 | Software Development Manager | 3-5 years | Unity | Oculus Rift, HTC Vive
+
+ VR Development Tools Developers (VDTD)
+ P11 | Software Development Manager, XR Foundation | 3-5 years | Unity, Unreal, self-built engine | Oculus Quest 1/2, Google Cardboard, HP Reverb, Meta Quest Pro
+ P12 | VR Development Tools Designer | 10+ years | Unity | Oculus Quest 1/2, Oculus Rift, HTC Vive, Google Cardboard
+ P13 | VR Development Tools Developer | 6-10 years | Unity, Unreal | Oculus Quest 1/2, Oculus Rift, HTC Vive
+ P14 | VR Development Tools Developer | 3-5 years | Unity | Oculus Quest 1/2, Oculus Rift, HTC Vive, Valve Index
127
+
128
+ § 3.2 INTERVIEW PROCEDURE
129
+
130
+ Prior to the interview, participants were asked to sign a consent form and fill in a pre-study questionnaire regarding their demographic information. During the interview, we began by asking participants to describe their current or recent VR projects and let them walk through their VR development workflow on the projects they discussed. We then inquired about the testing techniques they used and the main challenges or frustrations of their current testing and debugging process. Our questions revolved around the following themes:
131
+
132
+ 1. Would you briefly introduce one of the interesting VR experiences you had?
133
+
134
+ 2. Could you walk through your VR development workflow on the project you've talked about or another specific example with us?
135
+
136
+ 3. In the walk-through you just shared with us, what were the testing techniques you used?
137
+
138
+ 4. What are the main challenges or frustrations about your current testing workflow?
139
+
140
+ 5. What are your current solutions for the challenges you just mentioned?
141
+
142
+ 6. What could be the ideal VR testing workflow in your mind? It could be a whole workflow, a new tool or some features.
143
+
144
+ In the end, participants were asked to brainstorm the future directions of VR development tools that could better serve the testing purpose of VR development. The whole interview session was audio recorded and lasted around 60 minutes for each participant.
145
+
146
+ § 3.3 DATA ANALYSIS
147
+
148
+ We transcribed the audio recordings of the interview sessions using Otter.ai and manually corrected passages that were imprecise due to the limitations of the transcription software. We employed an inductive approach and generated affinity diagrams in Figma to explore the themes related to the main challenges that our participants faced. Initially, one member of our research team conducted an open-coding pass to generate a list of potential codes. We then refined and consolidated these codes through discussions and the use of affinity diagrams, resulting in a final coding scheme. Throughout the coding process, we focused on understanding the challenges and needs of VR developers in their VR testing practices.
149
+
150
+ § 4 CHALLENGES IN VR TESTING
151
+
152
+ Based on our analysis of the interview data, we consolidated the following 10 key challenges in testing VR applications which are grouped into three categories (Table 2).
153
+
154
+ § 4.1 HARDWARE-RELATED CHALLENGES
155
+
156
+ Hardware-related challenges often arise during the testing phase, posing significant obstacles for developers. These challenges include cumbersome VR equipment (C1), motion sickness (C2), difficult equipment setup (C3), and performance issues (C4). Addressing these hardware-related challenges is essential for streamlining the testing process and ensuring the successful development of VR applications.
157
+
158
+ C1: Cumbersome VR Equipment. Cumbersome VR headsets are a burden for the developers in the testing phase. First, they may suffer from frequently putting on and taking off the headsets, which is not only time-wasting but also triggers feelings of unease or sickness:
159
+
160
+ Table 2: Summary of challenges in the testing phase of VR development.
161
+
162
+ Challenge | Description
+
+ Hardware-related Challenges
+ C1 Cumbersome VR Equipment | Developers may suffer from the inconvenience of VR headsets, for example, putting on and taking off the headsets at a high frequency, sickness caused by the heavy weight of the headsets, or the burden of eyeglasses and long hair.
+ C2 Motion Sickness | Developers may suffer from motion sickness caused by the VR environment and equipment, for instance, due to long periods spent in VR environments or the low quality (low frame rate, low picture quality) of VR applications during the prototyping/testing phase.
+ C3 Difficult Equipment Set Up | Developers may suffer from difficult and time-wasting equipment set-ups during the testing phase. For example, developers need to recalibrate the VR equipment every time they use it. Testing also places a strict demand on an open, decent-sized, and obstacle-free physical space when developers want to run trials in VR environments.
+ C4 Performance Issues | Developers may suffer from performance issues during the testing phase, such as long build/loading/rendering times, discrepancies between hardware performance (in most cases, simulator environments like PCs and laptops have better hardware performance than the VR equipment), and low frame rates.
+
+ Software-related Challenges
+ C5 Missing Testing Information | Developers may suffer from the lack of testing information. Many developers reported that they cannot monitor program changes (variables, hardware usage) in VR environments, and it is hard to integrate debug information (e.g., logs) for VR applications.
+ C6 Difficulty in Finding/Reproducing Bugs | Developers may find it hard to find or reproduce bugs. For example, the large 3D immersive environment of VR makes it hard to find details/small glitches. It is also difficult to reproduce bugs, since it is hard to track and reproduce the same actions in VR environments.
+ C7 Lack of Automated Testing | Developers may suffer from the inconvenience of immature automated testing support. The lack of automated/unit tests in VR makes it hard to reduce manual/repetitive testing work.
+ C8 Inconvenient Collaboration of VR Testing | Developers may find it hard to do collaborative debugging/testing with other developers. For example, developers may find it hard to achieve remote testing/debugging and headset sharing with other developers.
+
+ Comprehensive Challenges
+ C9 Lack of Standards | Developers may suffer from compatibility issues. Many developers find there are no common standards (e.g., different APIs) across VR development tools and software (e.g., Unity, Unreal) and hardware (e.g., VR HMDs like Oculus Quest and HTC Vive).
+ C10 Few VR-specific Testing Support | Developers may suffer from the little VR-specific testing support from the community and industry, with issues such as a low number of existing toolkits, tutorials, and documentation, or no collaboration/integration between different tools/solutions.
203
+
204
+ When I'm using the headset, I have to like, put it on, and then, do stuff, and then put it off, put it away and look at my console, so on and so forth. If there is a perfect simulator I can use to test most of the features, it will definitely make debugging a lot easier for me. (P2-JD)
205
+
206
+ From time to time, I got a headache after putting on and off the VR headset to figure out some tricky bugs. (P5-ED)
207
+
208
+ Additionally, with the cumbersome HMDs, wearing eyeglasses or having long hair can add to the difficulty experienced during use:
209
+
210
+ Having long hair just makes it harder to put on and off the VR headset. (P1-JD)
211
+
212
+ So I don't buy glasses that are wider than that. So that limits my frame choices. (P8-ED)
213
+
214
+ One of the VR tool developers expressed concern that this issue might not be resolved in the near future:
215
+
216
+ The equipment being cumbersome is like, these are sort of issues for which the industry does not have a solution yet, and it might take some time until we find one. (P13-VDTD)
217
+
218
+ C2: Motion Sickness. Motion sickness caused by VR environments and equipment was mentioned frequently in the interviews (10/14). This discomfort could be a result of spending extended periods of time in VR environments and encountering low-quality VR applications during the prototyping/testing phase, which may exhibit poor picture quality or low frame rates:
219
+
220
+ I used to have a really bad sketch of my project and the whole horizon in VR was shaking with a very low frame rate, and it caused huge dizziness. (P2-JD)
221
+
222
+ The issue of cumbersome VR Equipment also further exacerbates the situation:
223
+
224
+ If you have to wear glasses, put them on. And you know, it's already like a burden for you. And if you do that very frequently, you will have a lot of like headaches and motion sickness. (P7-JD)
225
+
226
+ Some VR applications with specific features such as frequent locomotion also contribute to motion sickness:
227
+
228
+ One thing to notice is about the locomotion in VR: a lot of them give you motion sickness. (P3-ED)
229
+
230
+ Motion sickness could heavily postpone the testing progress of VR development and may cause production delays:
231
+
232
+ It was difficult for first-time users, I need to adjust things slower for them. (P5-ED)
233
+
234
+ But if your participants or even the developer, start experiencing physical discomfort due to motion sickness, you won't probably be able to have a sustained session on the headset, and therefore, it really impacts what you can get out of the testing. (P12-VDTD)
235
+
236
+ C3: Difficult Equipment Set Up. Developers may struggle with time-consuming and challenging equipment setups during the testing phase. For instance, they are required to recalibrate or even reboot the VR equipment because the existing calibration breaks easily:
237
+
238
+ I need to recalibrate my headsets a lot of time during the testing, sometimes even rebooting the machine, because sometimes the responding calibration was okay, and then the next time the calibration was not good anymore. We have to recalibrate and install things. (P6-ED)
239
+
240
+ Additionally, there is a stringent need for a spacious, open, and obstruction-free physical area when developers wish to conduct trials within VR environments:
241
+
242
+ I need to clean up the physical space around me every time before some intensive VR testing, being able to fake a physical system in which you don't need to really move to test would save my time a lot. (P3-ED)
243
+
244
+ C4: Performance Issues. During the testing phase, developers may encounter performance-related challenges. Such challenges can involve prolonged build, loading, and rendering times, leading many participants to invest additional time in testing their VR projects.
245
+
246
+ Long build, loading or rendering time extended the testing phase of VR development:
247
+
248
+ When I checked in many changes to a big VR project, more than half of the hour was waiting for the build. (P5-ED)
249
+
250
+ Rebooting the machine took a lot of time since it's not only rebooting the machine itself but sometimes you have to rebuild and reload the project. (P6-ED)
251
+
252
+ The disparities in the performance of hardware components also pose difficulty to the testing. Typically, simulator environments such as PCs and laptops have superior performance compared to VR equipment. As a result, simulators cannot substitute VR equipment when it comes to testing the performance of VR projects.
253
+
254
+ When testing performance, the simulator gives you nothing, you have to build on the VR headset to know if the app runs well (P5-ED)
255
+
256
+ We had a project on Oculus Quest...because it is an Android app and has a big resolution, we needed to cut many features to accommodate the performance limitation. (P5-ED)
257
+
258
+ It is hard to check the performance issue without running it on headsets. (P6-ED)
259
+
260
+ Low frame rates, raised by six participants, can also be caused by poor performance; as discussed above, this can lead to motion sickness (C2) and introduce uncertainty into projects:
261
+
262
+ If the frame rate sucks, motion sickness would probably come. (P7-JD)
263
+
264
+ The different frame rates give different glitches all the time. (P6-ED)
265
+
266
+ § 4.2 SOFTWARE-RELATED CHALLENGES
267
+
268
+ Other than hardware-related challenges, developers often face a variety of software-related challenges that impact the testing phase of VR development. These challenges include a lack of testing information (C5), difficulty in finding and reproducing bugs (C6), lack of automated testing (C7), and inconvenient collaboration of VR testing (C8). From software developers' perspective, software-related challenges are relatively easier to mitigate. Addressing these software-related challenges is crucial for optimizing the VR development process and ensuring the creation of high-quality applications.
269
+
270
+ C5: Missing Testing Information. Developers might face difficulties due to a lack of adequate testing information. Many developers, especially junior developers, have indicated that tracking program changes, including variables and hardware usage, within VR environments poses a challenge, and incorporating debug information, such as logs, into VR applications also proves to be problematic.
271
+
272
+ Some of this can be really frustrating, because, it's sometimes very hard to see, like the States, like the internal state of the system, which you kind of need for debugging. (P2-JD)
273
+
274
+ Ideally, if there is some log, or debugging options to track those variables, that would be ideal. (P1-JD)
275
+
276
+ Furthermore, some more experienced participants suggested that VR development tools could even do more than just show the basic testing information. For example, for Unity developers, an integrated Unity inspector in the VR environment has been raised as a missing component for VR testing:
277
+
278
+ I would love to see in headset authoring. So be able to like, you know, put on the headset, and basically be able to see the scene hierarchy and have control over your inspector at least for some parts, you know, basically like an engine within the headset. (P12-VDTD)
279
+
280
+ C6: Difficulty in Finding/Reproducing Bugs. Developers can face challenges when trying to identify or reproduce issues in VR applications. The vast 3D immersive environment can make pinpointing minute details or minor inconsistencies difficult:
281
+
282
+ Sometimes, I needed to go back frame by frame to check the bug I saw. (P1-JD)
283
+
284
+ It's tedious to reproduce bugs. It may not be actually challenging It's just um Yeah, you just need to take time to reproduce it. (P3-ED)
285
+
286
+ Additionally, recreating bugs can prove to be troublesome, as retracing and replicating the precise actions within VR settings can be a complex task compared to reproducing them in the simulator:
287
+
288
+ You might be testing your experience in the editor, even with a simulator, but you might not encounter the same issue as you were wearing the headset. (P12-VDTD)
289
+
290
+ C7: Lack of Automated Testing. Developers may face challenges due to the underdeveloped nature of automated testing support in VR. The scarcity of automated or unit tests in VR makes it difficult to minimize manual or repetitive testing tasks. In addition, automated tests are inherently difficult to implement for VR applications, as there is no existing tool on the market to map the inputs in VR (e.g., user log-in, button presses on the controller, and head movement) to tests. This phenomenon was reported by all three groups of participants:
291
+
292
+ It's not really easy to automate like testing with scripts. (P2-JD)
293
+
294
+ Even basic things like a 2d traditional UI, testing every button and every combination is a labor-intensive process there. I haven't seen a good way around it. (P8-ED)
295
+
296
+ I think there's certainly more that we could do as engineers in the industry to set up examples of how to apply the tools that exist today to do some automated tests. (P12-VDTD)
297
+
298
+ Manual tests are unavoidable in VR testing. However, some manual tests can be replaced with automated tests to reduce manual labour and accelerate the development process (a minimal sketch of such an input-to-test mapping follows the quote below):
299
+
300
+ An ideal version of testing includes, you know, as much automated testing as possible...And when you find something that is actually broken, if you can automate it, you automate it, and write the automation tests for it. And if you can't automate it, you have to actually work with QA to say, Okay, now how do we actually build a proper smoke test to actually go through and have a manual test for this? (P11-VDTD)
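+ To make this concrete, the following is a minimal sketch (ours, not a participant's or an existing tool's) of what mapping VR inputs to scripted tests could look like: simulated controller and head-movement events are fed to a toy application, and an assertion replaces the human tester. All class and method names here are hypothetical stand-ins, not a real VR SDK.
+
+ ```python
+ # Hypothetical sketch of mapping VR inputs (button presses, head movement)
+ # to an automated test. SimulatedHeadset and MenuApp are illustrative
+ # stand-ins, not part of any real VR SDK.
+ from dataclasses import dataclass, field
+
+ @dataclass
+ class SimulatedHeadset:
+     """Fakes the device inputs a manual tester would normally provide."""
+     yaw_degrees: float = 0.0
+     pressed: list = field(default_factory=list)
+
+     def press_button(self, name: str):
+         self.pressed.append(name)
+
+     def turn_head(self, yaw: float):
+         self.yaw_degrees = yaw
+
+ class MenuApp:
+     """Toy application under test."""
+     def __init__(self, headset: SimulatedHeadset):
+         self.headset = headset
+         self.menu_open = False
+
+     def update(self):
+         if "menu" in self.headset.pressed:
+             self.menu_open = True
+
+ def test_menu_opens_on_button_press():
+     headset = SimulatedHeadset()
+     app = MenuApp(headset)
+     headset.turn_head(90.0)       # scripted head movement
+     headset.press_button("menu")  # scripted input instead of a human tester
+     app.update()
+     assert app.menu_open, "menu should open after the menu button is pressed"
+
+ if __name__ == "__main__":
+     test_menu_opens_on_button_press()
+     print("ok")
+ ```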
301
+
302
+ C8: Inconvenient Collaboration of VR Testing. Developers might confront obstacles when engaging in collaborative debugging or testing with their colleagues. The issue of collaboration has been explored by Krauß et al. [24]. In their study, the three main challenges faced by collaborative development are: (1) misconceptions about the medium, (2) lack of tool support, and (3) missing a common language and shared concepts. Our interviews confirmed their findings and complemented them with two aspects of collaborative VR testing: (1) difficult remote debugging and (2) difficult headset sharing.
303
+
304
+ Remote testing and debugging within a development team remain a critical issue, especially with the adoption of remote work in recent years:
305
+
306
+ That was a pain by calling and telling the person to change this and that. In order to debug something, I need to [tell] the person: You [do] the specific things to do, and you need to tell me the result either through screenshot or recording. (P1-JD)
307
+
308
+ When I was helping people debugging, I always cannot see what they saw. It could be ideal to have a mapping between what they do (e.g. click, move) to our aspect. (P4-ED)
309
+
310
+ Headset sharing could be an issue since not everyone in the development team has access to the limited number of VR headsets:
311
+
312
+ Some people working at home, and do not have VR headsets there. Then they don't have ways to test some VR-specific problems like the performance issues. (P6-ED)
313
+
314
+ Even with developers located in the same physical space, collaborative debugging could be challenging. One prominent issue is that headset sharing between developers requires more labour, such as recalibration and communication, than developers assume:
315
+
316
+ A teammate [took] off the headset, then handed me a used VR headset and I put it on, I lost the calibration and was trapped in the box. (P1-JD)
317
+
318
+ Even though I told the other developers what they should do, they started doing other things than you thought. There is a high demand for communication here when you debug with someone else. (P4-ED)
319
+
320
+ § 4.3 COMPREHENSIVE CHALLENGES
321
+
322
+ Comprehensive challenges are those challenges that go beyond hardware and software limitations. In our study, a lack of standards (C9) and limited VR-specific testing support (C10) were raised. Overcoming these obstacles requires the collective effort of the entire VR community, as they go beyond the capabilities of individual developers or organizations.
323
+
324
+ C9: Lack of Standards. Developers might grapple with difficulties stemming from an absence of standardized practices. Numerous developers have pointed out the lack of shared standards, such as varying APIs, among diverse VR development tools and software (e.g., Unity and Unreal) as well as hardware (like VR HMDs such as Oculus Quest and HTC Vive).
325
+
326
+ I need to use two totally different SDKs for Quest 2 and Vive development, which means I have to double my development work by learning and coding two things. (P6-ED)
327
+
328
+ I think a good driver for this, specifically around standards. I mean, I think of openXR, you know, that's a good industry-inclusive initiative that is trying to get behind alignment for standards, so that everyone follows similar patterns, etc. And they can deploy to as many devices as it is supported within. (P12-VDTD)
329
+
330
+ Participant feedback from our study suggests that significant progress is still required before developers can fully benefit from the convenience, efficiency, and adaptability that standardization brings to VR development. Nevertheless, initiatives like OpenXR strive to address standardization challenges in VR development, which could ultimately benefit many developers:
331
+
332
+ I enjoy being able to work on OpenXR and the really unsexy open standards that are not going to sell front page, you know, news, but ultimately, is really going to benefit developers and the community at large by having open interoperable standards that we as an industry can use. (P12-VDTD)
333
+
334
+ C10: Few VR-specific Testing Support. Developers might face challenges due to the limited VR-specific testing support provided by the community and industry. This insufficient support can be evident in multiple forms, such as a limited range of toolkits, tutorials, and documentation, or insufficient cooperation and integration among different tools and solutions:
335
+
336
+ I hope to see more samples from the community, and proper documentation, currently they are not straightforward. (P7-JD)
337
+
338
+ Looking for documentation sometimes is still very challenging. (P2-JD)
339
+
340
+ Some of the VR development frameworks I use do not have enough information about their technical details. Their internal logic is unknown and unchangeable and I feel like it's a black box. (P6-ED)
341
+
342
+ I'm thinking about the nature of like, VR departments still have a small population compared to trending fields like AI, it doesn't have large community support. (P8-ED)
343
+
344
+ However, one of the tool providers found that community support is getting better for VR development:
345
+
346
+ There's also the Unity learn portal where there are tutorials and, you know, for all levels, beginning, advanced and professional. So, I believe that there's enough documentation from unity and the tools and packages that we provide to the community that is pretty comprehensive. (P12-VDTD)
347
+
348
+ § 5 SURVEY STUDY
349
+
350
+ To validate and enhance the reliability of our qualitative interview results, we carried out a survey targeting a wider group of VR developers. This approach aimed to triangulate and substantiate our identified challenges.
351
+
352
+ § 5.1 STUDY DESIGN
353
+
354
+ Our survey consisted of three parts. Part 1 contained demographic questions regarding respondents' years of experience in VR development, the VR development tools they utilized, and the VR headsets they employed. Part 2 asked respondents to assess each challenge identified during the interview study using a 7-point Likert scale: "Not at all important", "Low importance", "Slightly important", "Neutral", "Moderately important", "Very important", and "Extremely important". To ascertain the validity of the challenges discovered through the interviews, Part 3 of the survey asked respondents to indicate, out of the 10 challenges, the top three they considered most important and relevant to VR testing, as well as the top three they considered least important and irrelevant to VR testing.
355
+
356
357
+
358
+ Figure 1: Respondents' ratings on the importance of each challenge on a 7-point Likert scale (1="not at all important"; 7="extremely important").
359
+
360
+ We recruited participants for our survey through Slack channels of several HCI research communities and VR-related industry communities, as well as a large IT company making VR development software. Additionally, we encouraged respondents to share the survey with other VR developers where feasible. No compensation was provided for completing the survey.
361
+
362
+ § 5.2 RESULTS
363
+
364
+ A total of 33 VR developers (23 males, 9 females, and 1 non-binary/third gender; aged 23-52) from various organizations participated in our survey, after excluding 4 invalid responses from respondents who did not have enough VR experience. Among the valid respondents, 16 had 0-2 years of VR development experience, 10 had 3-5 years, and 7 had 6-10 years. All participants (33/33) utilized Unity as their VR development tool, while 6 had experience with Unreal, and 1 had used a custom engine for VR development. The most popular headset among respondents was the Oculus Quest 1/2, which 26 out of 33 developers had used. Additionally, 19 respondents had experience with Oculus Rift(s) and 18 with the HTC Vive. Furthermore, 13 developers had worked with Google Cardboard, and a few others (≤5) had used headsets such as the Meta Quest Pro, Varjo XR3, and Valve Index.
365
+
366
+ Figure 1 summarizes the survey responses for the ratings of all identified challenges, including mean values and standard errors. All identified challenges have a mean value greater than 4 (neutral), confirming their validity. In particular, certain challenges (C1: Cumbersome VR Equipment, C5: Missing Testing Information, and C9: Lack of Standards) exhibit higher mean values, indicating that respondents are more concerned about these issues. C10: Few VR-specific Testing Support has the lowest mean value (4.15). A potential explanation is the recent growth of the VR community driven by the popularity of the Metaverse, which has drawn more developers to VR and encouraged organizations and individuals to offer support. C7: Lack of Automated Testing receives the second lowest average rating (4.21). This may be because manual testing cannot be avoided in VR development, given the nature of the software, which requires human labour to check features such as graphics quality and running performance. However, it is also noticeable that 8 respondents rated this challenge as "very important" and 1 rated it as "extremely important", meaning this challenge remains prominent among some VR developers.
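+ For reference, the mean and standard error behind each bar in Figure 1 can be computed as follows (a minimal sketch; the ratings below are made-up values, not the actual survey data):
+
+ ```python
+ import math
+
+ # Hypothetical 7-point Likert ratings for one challenge (NOT the survey data).
+ ratings = [5, 6, 4, 7, 5, 3, 6, 5]
+
+ mean = sum(ratings) / len(ratings)
+ # Sample standard deviation, then standard error of the mean.
+ sd = math.sqrt(sum((r - mean) ** 2 for r in ratings) / (len(ratings) - 1))
+ se = sd / math.sqrt(len(ratings))
+ print(f"mean={mean:.2f}, SE={se:.2f}")
+ ```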
367
+
368
+ Furthermore, we computed the proportion of each challenge selected as most important and relevant (Figure 2a) as well as the proportion considered least important and irrelevant (Figure 2b). From Figure 2a, we can see that every challenge received votes, with C1: Cumbersome VR Equipment (15.15%), C3: Difficult Equipment Set Up (13.13%), and C4: Performance Issues (13.13%) drawing the most concern from respondents. Notably, C1, C3, and C4 are all hardware-related challenges, which indicates that the VR development community still has substantial worries about VR hardware and that there is considerable room for it to improve. From Figure 2b, we can see that C2: Motion Sickness (13.13%), C4: Performance Issues (13.13%), and C7: Lack of Automated Testing (13.13%) are the challenges that respondents are least worried about. Motion sickness, as mentioned by one of the experienced developers in the interviews, could be overcome over time as people develop VR applications and gradually get used to it: "I am generally immune to motion sickness after spending a lot of time in the VR industry." (P5-ED)
369
+
370
+ In summary, all the challenges exhibit reasonable ratings without any outliers, further substantiating the validity of our findings from the interview study.
371
+
372
+ § 6 DISCUSSION
373
+
374
+ In this section, we discuss some future opportunities derived from our interviews with professional VR developers, as well as several limitations of our study.
375
+
376
+ § 6.1 FUTURE OPPORTUNITIES
377
+
378
+ During the open discussion stage of the interviews, we asked our participants about their ideal VR testing tools or VR testing features. From the discussion, we have identified several promising avenues that may help with the design of future VR testing tools for both academic and industrial settings.
379
+
380
+ Improving hardware design for convenient VR testing: Hardware-related issues, as pointed out in both the interviews and the survey, remain prominent in VR development communities. Developers suggested features that could potentially mitigate hardware-related issues in the future. For example, a quick flip-on-and-off feature would help people switch faster between the VR environment and the computer monitor during testing:
381
+
382
+ I wish there will be a VR headset that I could just wear, flip it off when I need to take a look at my monitor; and flip it back when I need to go back to VR. (P14-VDTD)
383
+
384
+ What I also really like, is the HoloLens 2 has the visor that can, you know, flip down and flip up... My dream headset, combines this feature in it. (P13-VDTD)
385
+
386
+ Enabling headset-based authoring and testing in VR. Headset authoring and testing in VR environments were raised frequently during the discussion. First, participants wanted a dedicated debugging mode in VR environments with more flexibility to make some code changes directly in VR, without going back to the keyboard:
387
+
388
+ Some functionalities like pre-setting some variables that you will be able to tweak in VR, for example, the width or the height of an object in VR, could save a lot of time of going back and forth. (P3-ED)
389
+
390
+ I would love to see in headset authoring. So be able to like, put on the headset, and basically have control over your scene hierarchy and inspector. Basically like an engine within the headset, where you're able to edit parameters, move things around (P12-VDTD)
391
+
392
+ In addition, participants hoped to see the debugging mode in VR with better testing information visualization:
393
+
394
395
+
396
+ Figure 2: Distribution of the three MOST (a) and LEAST (b) important challenges chosen by respondents.
397
+
398
+ There will be some windows inside the VR to help you see the performance and variables. (P6-ED)
399
+
400
+ Being able to see different view points, for example, switching between different cameras will help me debug some complicated scenes easily. (P3-ED)
401
+
402
+ Meta has even some really cool features where you can only have passthrough in some areas. Like you know, the keyboard tracking features are pretty cool, right? I can see my real world keyboard so I know where to put my fingers when I am testing in VR (P12-VDTD)
403
+
404
+ Generally, participants wanted to have a smooth combination between editing and testing in VR development. One of the tool developers commented:
405
+
406
+ Finally, there should be some unification of editing and testing so that people don't feel a separation in the whole VR development process. (P14-VDTD)
407
+
408
+ Designing collaborative tools for VR testing. More convenient collaboration in VR development could improve the productivity of the whole VR development team. Communication and collaboration in VR testing can be enhanced with more dedicated VR collaborative tools:
409
+
410
+ I do agree that the sort of real-time over a Zoom meeting, and maybe the answer is that Zoom is ultimately not going to be the best platform for real-time VR-related calls. But I think there is sort of a gap there in terms of like, Could we have a zoom alternative in VR where we can load the experience together between developers? (P13-VDTD)
411
+
412
+ Creating automated testing frameworks for VR. The challenge of automated testing could be mitigated with specialized software frameworks or libraries offering functionalities such as button mapping, test video generation, and breaking, playing back, and playing forward testing timelines (a sketch of such a timeline follows the quotes below).
413
+
414
+ It is possible to develop some kind of testing framework, in which you can map some actions from the VR controller to the code to help developers automate some tests. (P9-ED)
415
+
416
+ It is ideal to be able to see the video of the tests and be able to highlight something or catch the model change. (P13-VDTD)
417
+
418
+ It would be helpful to have some systems in place to automate tests and developers could playback or play forward some of the tests in the simulator instead of putting on the VR headset. (P14-VDTD)
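+ As a rough illustration of the record-and-replay timelines these participants describe (our sketch, not an existing framework; `InputEvent`, `Timeline`, and the simulator interface are hypothetical):
+
+ ```python
+ # Hypothetical sketch of a testing timeline that can be saved, played back,
+ # or "played forward" up to a point in a simulator instead of on a headset.
+ import json
+ from dataclasses import dataclass, asdict
+
+ @dataclass
+ class InputEvent:
+     time: float    # seconds since session start
+     kind: str      # e.g. "button", "head_pose"
+     payload: dict  # event-specific data
+
+ class Timeline:
+     def __init__(self):
+         self.events = []
+
+     def record(self, event: InputEvent):
+         self.events.append(event)
+
+     def save(self, path: str):
+         with open(path, "w") as f:
+             json.dump([asdict(e) for e in self.events], f)
+
+     def replay(self, simulator, until: float = float("inf")):
+         """Feed recorded events to a simulator, optionally stopping early so
+         a developer can step forward through the session."""
+         for event in sorted(self.events, key=lambda e: e.time):
+             if event.time > until:
+                 break
+             simulator.apply(event)  # simulator interface is assumed
+ ```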
419
+
420
+ From the hardware perspective, one tool developer also pointed out that other tools like robotic arms could help with automated tests:
421
+
422
+ You can have some additional robotic testing, where you can actually say, Okay, what happens when I turn my head this way, what happens when I turn my head this way inside of the headset? (P11-VDTD)
423
+
424
+ Lowering the barrier for junior developers and non-tech users. On the social level, we could help with the testing difficulties confronted by junior developers by making VR development more accessible, smoothing some steps in the process, and integrating the learning experience into VR development tools such as the Unity engine:
425
+
426
+ Most of these headsets are making the VR development not very accessible for a lot of people, whether they have vision disabilities or mobility disabilities, you know, they're very able-bodied devices so far. So I hope that as we move forward, with new devices, and new technologies, all of those things begin to bubble up towards the front and become banners that we carry, and we really, really push forward. (P12-VDTD)
427
+
428
+ Setting up the initial building blocks for a starting project could allow someone without coding knowledge to quickly set up a scene. (P12-VDTD)
429
+
430
+ But also try to make all of the learning happen within the engine so that we can hopefully cut the dependency between a browser-based experience like the learning resource and have that integrated into one of the sample builds. (P12-VDTD)
431
+
432
+ Reducing the barriers to entry for VR development could not only draw more developers into the VR community but also significantly contribute to the overall growth and flourishing of the industry.
433
+
434
+ § 6.2 LIMITATIONS
435
+
436
+ There are still limitations to our study. First, as shown in Table 1, a majority of our participants in the interview study use Unity as their primary VR development tool. This might have biased the challenges we identified, as different development tools may have very different features. Thus, a more diverse sample of interviewees could be recruited to further enhance and validate our results. Second, while our survey study results have confirmed and substantiated the challenges derived from our interview study, more participants could be recruited to better support our insights. However, we recognize the difficulty of recruiting VR developers, rather than ordinary VR users, as respondents. Therefore, future studies can be carried out to re-confirm or extend the set of challenges. Third, we primarily employed a qualitative approach to identify the key challenges for VR testing; while we attempted to use a quantitative method in the survey study to verify the interview results, an even more quantitative approach might be valuable. For example, logging mechanisms could be implemented in VR development tools to examine the behaviors of VR developers during testing, thus complementing our qualitative results. In deriving future directions for VR testing tools, prototypes could be built so that VR developers can actually try different features and then provide their feedback. In summary, several future efforts should be made to continue this line of research, and our study has opened doors to a wide range of development and investigation opportunities for VR testing.
437
+
438
+ § 7 CONCLUSION
439
+
440
+ In this paper, we aimed to fill the gap in the literature by exploring the challenges, needs, and future opportunities of software testing in VR development. During the interviews, participants expressed various difficulties encountered during the testing phase of VR development, which we then consolidated into a list of challenges. Moreover, we explored future directions for VR testing and presented outcomes that may shed light on research and technology development. We confirmed the challenges through a survey, analyzing the ratings given for each challenge. Our findings highlight multiple design opportunities for both academic and industrial stakeholders to alleviate these VR testing challenges. By addressing these issues and following our results, we believe that future development can enhance the utility, productivity, and overall development experience for VR developers during the testing phase.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/gItvr7Xl66/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,369 @@
1
+ # Vice VRsa: Balancing Bystander's and VR user's Privacy through Awareness Cues Inside and Outside VR
2
+
3
+ Category: Research
4
+
5
+ ![01963dfa-590f-7d81-a7bb-6f41d2e23974_0_218_383_1357_253_0.jpg](images/01963dfa-590f-7d81-a7bb-6f41d2e23974_0_218_383_1357_253_0.jpg)
6
+
7
+ Figure 1: Due to the immersive VR experience, a VR user may not notice a bystander's presence, which subjects the VR user to being monitored by bystanders without knowledge. A VR user can use a VR headset's camera (a) to monitor their surroundings. However, conversely, this camera recording raises bystanders' privacy concerns as they may be recorded without consent. We introduce Vice VRsa, which is designed to balance VR users' and bystanders' privacy by providing awareness cues to (b) the VR user about a bystander's presence and location (Radar, Halo, Live View) and (c) to the bystander about a VR user's privacy mode and what is being recorded about them through a color display (projection and LED vest) and public display (c+d).
8
+
9
+ ## Abstract
10
+
11
+ The immersive experience of Virtual Reality (VR) disconnects VR users from their physical surroundings, subjecting them to surveillance from bystanders who could record conversations without consent. While recent research has sought to mitigate this risk (e.g., VR users can stream a live view of their surrounding area into VR), it does not address that bystanders are conversely being recorded by the VR stream without their knowledge. This creates a causality dilemma where the VR user's privacy-enhancing activities raise the bystander's privacy concerns. We introduce Vice VRsa, a system that provides awareness of bystander presence to VR users as well as a VR user's monitoring status to bystanders. This work seeks to provide a framework and set of interactions for considering mutual awareness and privacy for both VR users and bystanders. Results from preliminary interviews with VR experts suggest factors for privacy implications in designing VR interactions in public physical spaces.
12
+
13
+ Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods
14
+
15
+ ## 1 INTRODUCTION
16
+
17
+ Virtual reality (VR) provides users with immersive experiences in entirely virtual spaces. A VR user's immersion in the virtual space disengages their sense of presence from the physical space surrounding them. Such disengagement subjects VR users to not just running into a physical obstacle but also being monitored by others physically co-located without their knowledge or consent [34], as VR users may be unaware of their surroundings. This can result in putting the users in vulnerable positions in their physical space (e.g., private conversations being overheard or even recorded by someone co-located, accidental collision with physical obstacles, or other risks to their physical safety).
18
+
19
+ To alleviate such risks, VR users could activate the VR headset's passthrough camera to see their real surroundings. Additionally, researchers have previously explored various ways to make VR users aware of bystanders in their surroundings. For example, prior work demonstrated representations of the real world in the virtual environment by blending a camera feed with the virtual world [26, 51] or by bringing avatar representations of bystanders into VR [51].
20
+
21
+ However, these monitoring setups could in turn raise bystanders' privacy concerns, as the passthrough camera is embedded in the headset to monitor a VR user's surroundings in a physical space. Using wearable cameras, such as those found in commercial VR headsets, remains a long-standing problem of unwanted surveillance [19, 46], as little awareness is provided to bystanders. To mitigate privacy concerns of bystanders in these VR hybrid settings (i.e., situations where a VR user is in a physical space where non-VR users might also be present), a camera activation indicator can be used. However, prior work suggests that an LED indicator may not be noticeable and understandable to end-users [6, 19, 48].
22
+
23
+ In this paper, we aim to level the playing field between VR users and bystanders, by providing awareness to a VR user about a bystander's presence (VR user's awareness) as well as providing bystanders with awareness about what a VR user might see about them (bystander's awareness). To that end, we present Vice VRsa, as an example of a broader concept of a system offering mutual awareness to VR users and bystanders about each other's monitoring status through inside- and outside-VR headset representations. Moreover, we designed Vice VRsa to allow both a VR user and bystander to negotiate their desired levels of privacy as the desired level of privacy is context-dependent [10]-a VR user may not care about being listened to during a casual chat in VR, but may be more mindful about who is around during a confidential meeting in VR. As a proof-of-concept, Vice VRsa provides a VR user with options to choose from four different modes to determine their desired level of privacy: none/green, low/yellow, medium/orange, high/red (See Figure 3). The VR user can change the mode to receive a different level of granularity of information about their surroundings, while the information they share about their activities inside VR decreases. Concurrently, the bystander can be informed about the VR user's desired level of privacy through color indicators, as well as what the VR user is recording about the physical environment via an accompanying public display.
24
+
25
+ In summary, we contribute Vice VRsa, an instantiation of a framework that aims to improve both a VR user's and bystander's awareness of each other's monitoring status. Through our implementation, we demonstrate how Vice VRsa accommodates different privacy needs and how it allows bystander and VR users to negotiate their desired levels of privacy. Initial feedback on Vice VRsa's concept and system from expert VR users shows that the concept is easily understood and that experts find it promising to support their privacy needs in VR hybrid settings for both VR users and bystanders.
26
+
27
+ ## 2 BACKGROUND AND RELATED WORK
28
+
29
+ The framework of Vice VRsa builds upon prior work from three areas: (1) VR users' privacy concerns against covert monitoring by bystanders; (2) balancing VR users' awareness about their bystander presence and bystanders' interruption; and (3) bystanders' privacy concerns against camera recordings without consent or knowledge. In the following subsections, we will outline our work's position in relation to prior work.
30
+
31
+ ### 2.1 VR Users' Privacy Concerns against Bystanders in Public Settings
32
+
33
+ VR's immersive experience overrides the users' sense of presence in a physical space, putting them in a vulnerable position in terms of privacy. For instance, bystanders near the users could eavesdrop on their conversation without permission, or gain information by observing their interactions [25, 43, 49, 55]. Prior work also pointed out that a bystander could exploit a VR user's vulnerable state by recording video and/or audio of them without their knowledge or consent [31, 34]. Researchers have pointed to the need for interactions that address privacy concerns for VR users in public spaces such as a shared office [30], as onlookers might still gain sensitive information from VR users' actions [16]. Consequently, researchers have explored how to prevent shoulder surfers from inferring VR users' data entry in VR, for example, by preventing bystanders from observing or recording VR users' passcode-entry gestures with the hand-held controllers [25, 55] or when typing on their keyboard [43].
34
+
35
+ VR users could take off their VR headset [51] or activate the headset's passthrough camera to see outside. However, a VR headset's passthrough only shows a live camera feed from the headset's front-facing camera, with no option to see to the sides or behind, and removing a headset interrupts any task and breaks the immersion. Moreover, because the passthrough feed only provides a full-screen view, VR users must pause their activity to check bystanders' presence, even when requiring minimal information about their presence: VR users may just want to know if someone is nearby without recognizing who they are. To that end, prior work explores how to improve VR users' awareness of bystanders in proximity without breaking the immersion while helping them stay informed about their physical space. For example, researchers have demonstrated methods using various modalities: different visual cues such as avatars, passthrough videos, and radar views [13, 20, 26, 33, 45, 51]; auditory feedback [13, 33]; and text [33].
36
+
37
+ In our work, we adapted and modified various representations of bystander presence inside VR. We, specifically, explore how such representations can be used in privacy-related contexts, and accommodate varying levels of privacy. Additionally, we consider a bystander's privacy against the VR device's camera recording.
38
+
39
+ ### 2.2 Balancing the Disruption by and Awareness of Bystander Presence for VR Users
40
+
41
+ Interventions from outside VR can disrupt the VR user's feeling of immersion. In particular, a bystander's interruption of a VR experience increases a VR user's cognitive burden and may even cause discomfort [26, 32, 37, 53]. George et al. found that a VR user is less likely to feel discomfort when interrupted from outside during a task switch inside VR (e.g., during an app transition) than in the middle of a VR task [11]. However, Mai et al. found that not knowing information about their surroundings could cause cognitive burden [23], while putting the user at risk of accidentally bumping into objects such as furniture, or of unwanted or abusive activity by a bystander [34]. As a result, users must constantly negotiate between the need for interruption and the need for focus.
42
+
43
+ In addition to bystanders' interruptions, how bystanders are represented in VR environments also affects the VR user's immersion. For instance, Kudo et al. explored three different representations of a bystander's presence inside VR [20]. Their findings show that an avatar representation of a bystander was most effective, although more peripheral visualizations of bystanders preserved a VR user's immersion better. They emphasized the need for systems to use the bystander representation that is most appropriate for the level of urgency a given task requires [20]. Yang et al. present ShareSpace, which illustrates bystanders as virtual walls or obstacles and helps VR users avoid physically bumping into bystanders [54].
44
+
45
+ In order to handle the constant balance between immersion and interruption in VR, we build on prior work to create a framework and system that provides adjustable levels of awareness regarding bystander presence. Vice VRsa offers different bystander representations according to the level of VR users' desired privacy. We aim to give VR users agency over the granularity of the information they receive about bystanders' presence which is designed to match their situational privacy needs.
46
+
47
+ ### 2.3 Bystander Privacy Concerns against Wearable Cameras
48
+
49
+ VR devices (e.g., a headset, controllers) have a multitude of sensors, including built-in cameras and microphones, which enables detecting and observing bystanders without their knowledge. This poses a threat to bystanders' privacy, as these sensors could unwittingly capture their directly identifiable (e.g., face) or otherwise personal information (e.g., private conversations), causing social friction [2, 17, 46, 48]. Transparency about the camera recording status can reduce this friction. Commodity wearable VR devices (e.g., Quest) or Augmented Reality (AR) glasses (e.g., Google Glass, Snap Spectacles) have an LED indicating to bystanders whether the camera is currently in use or not [14, 18]. However, such LED indicators are not easily noticeable and could even confuse bystanders or not be understood at all [19, 35]. To overcome this, researchers have sought ways to avoid undesirable camera capture and to transparently communicate the camera recording status to bystanders [2, 48]. For example, Alharbi et al. found that the level of obfuscation of camera capture could affect the level of bystanders' privacy concerns against unwanted capture [2]. Also, PrivacEye demonstrated a way to improve noticeability and understandability by using a physical cover that blocks the camera lens when the camera is not in use [48].
50
+
51
+ Unlike prior work that addressed privacy concerns about AR, there has been little work regarding the privacy of a VR user's bystander. Schwind et al. found no evidence that bystanders of VR users have privacy concerns about being recorded. However, they also point out that privacy concerns with AR glasses only came about with increased popularity, which in turn led to the reduced social acceptability of these devices [44]. In our work, we extend prior work to consider bystander privacy for VR by providing awareness about what a VR user is recording about their physical space.
52
+
53
+ ## 3 VICE VRSA
54
+
55
+ Vice VRsa is a framework and set of interactions that increases VR users' awareness of bystanders' presence and bystanders' awareness of VR users' recording of their surroundings, enabling both sides to negotiate their privacy needs. Next, we will discuss the design considerations and interactions of Vice VRsa.
56
+
57
+ ### 3.1 Design Considerations for Vice VRsa
58
+
59
+ We account for two factors to design Vice VRsa: (1) desired privacy depending on contexts and (2) privacy notice timing.
60
+
61
+ #### 3.1.1 Balancing Privacy and Awareness for VR Users and Bystanders
62
+
63
+ Inside VR Representations for VR Users' Privacy against Bystanders. People have different levels of desired privacy depending on their context and current situation [10, 29]. For example, when a VR user performs an authentication gesture to log in to a VR application and does not want someone to watch their gestures, they may want to check outside to see if anyone is around them. Conversely, when they play a game and do not mind if someone is watching, they may not necessarily want to see outside their virtual environment to not break their immersion. Therefore, a system should offer different levels of awareness according to the desired levels of privacy, which echoes prior work’s findings [12,27].
64
+
65
+ ![01963dfa-590f-7d81-a7bb-6f41d2e23974_2_184_142_1430_208_0.jpg](images/01963dfa-590f-7d81-a7bb-6f41d2e23974_2_184_142_1430_208_0.jpg)
66
+
67
+ Figure 2: Halo conveys two pieces of information: (1) distance and (2) direction. (a, b, c, d) The bigger the sphere is, the closer a bystander is. The sphere appears on the left/right based on the bystander's position relative to the user. (e, f) The radar view shows the bystander's precise location in VR.
68
+
69
+ Outside VR Representations for Bystanders' Privacy against VR Users' Monitoring. There is a longstanding problem of cameras in public spaces, with people expressing concerns about being recorded without knowledge or consent [6, 48]. Even though commodity VR/AR wearables have an LED indicator associated with their camera activation, prior work found that the LED indicator is unnoticeable and confusing to end users [19, 35]. One way to address such concerns is to build trust [1, 7] by improving awareness of the camera's recording status [9]. This could, in turn, help people take further action if they did not want to be recorded. However, commercial VR headsets contain a multitude of cameras, allowing the VR user to observe or record their physical surroundings. This will only increase with future generations of hardware. To mediate trust between the bystander and the VR user, the system should therefore provide awareness to bystanders of whether and what the VR user is recording, as well as an awareness of the VR user's activities inside VR, allowing bystanders to regulate their behavior to accommodate the user's desired level of privacy.
70
+
71
+ #### 3.1.2 Timely Notices for Privacy Awareness
72
+
73
+ Understanding when and how personal information is collected helps people protect their private data and avoid sensitive data being tracked, monitored, or recorded without their knowledge [5, 21, 41]. As a result, privacy notices have become an essential part of interactive systems, as they inform users about data collection and allow them to make their own privacy decisions [5]. Researchers have identified 'timing' as an important factor of privacy notices, reducing the chance that they are overlooked and promoting transparent communication about data collection [3, 40, 42]. For example, a privacy notice that appears during smartphone app use is less likely to be overlooked than one displayed at app install time [3]. Therefore, it is critical to find the 'right' moment and duration at which a user stays informed about the data collection.
74
+
75
+ This motivates us to design Vice VRsa to provide a privacy notice during VR use to both the user and bystanders. The notice level and amount of information shown depend on the amount of information that is being recorded. For example, if the VR user only wants to know about the presence of any bystanders, we show a less prominent notice to the bystander. If, however, the VR user wants to see a full video feed of any bystanders, the notice carries more urgency and details the information being recorded about the environment. We show notices simultaneously in and outside VR during monitoring, providing awareness to both the VR user and bystanders so they can negotiate their desired level of privacy.
76
+
77
+ ### 3.2 Vice VRsa Interactions
78
+
79
+ We showcase a set of interactions of Vice VRsa aligned with our design considerations. The system setup comprises two parts: (1) a representation inside VR that provides awareness about bystanders according to the user's desired level of privacy (Figure 3, left column); and (2) representations outside VR, including (a) color mode indicators showing the user's set privacy level and (b) an accompanying public display showing information about the VR user's activity and the information that is currently being recorded about the surroundings (Figure 3, right column).
80
+
81
+ Vice VRsa operates in four privacy modes, each corresponding to a desired level of the VR user's privacy. In each mode, various awareness cues in and outside VR are used, which act as privacy notices during VR use. We will discuss how the modes are defined and which representations each mode uses in the following subsections.
82
+
83
+ While various modalities (e.g., visual, sounds [31]) could be applied, we focus on visual feedback as an example of a large category that demonstrates how to design representations both in and outside VR.
84
+
85
+ #### 3.2.1 Awareness Cues Inside VR
86
+
87
+ Inside VR, we chose the representations depending on the extent to which they convey the information about bystander presence: (a) a Halo indicating bystander's presence and distance; (b) Radar showing a bystander's position on a radar map; and (c) Live view showing a live camera view of a bystander at their position in the physical space. Each of these views shows an increasing amount of information about the bystander, therefore also needing to record more information about the surroundings.
88
+
89
+ Halo. We chose Halo as a way to provide minimal information about bystanders' presence while keeping the disruption to VR immersion low. Prior work shows that representations with primitive designs (e.g., sphere) for bystanders provide minimal interruption to end-users [15]. To that end, we adopted the Halo approach which indicates the location of off-screen objects on a map application [4] to show the direction of a bystander's location and distance. The spheres (See Figure 2a-d), entering from the side of the screens, encode two pieces of information: the size of the sphere indicates the distance of the bystanders from the VR user, and the location of the sphere indicates the general direction (left or right) of the bystanders. For this view, the system tracks a rough distance (close, medium, far) and position (left, right) of the bystander.
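+ As a concrete reading of this encoding (our sketch; the thresholds and sizes are illustrative assumptions, not the system's exact values), the halo's side and radius could be derived like this:
+
+ ```python
+ # Illustrative mapping from tracked bystander data to a Halo cue.
+ def halo_cue(angle_degrees: float, distance: str):
+     """angle_degrees: bystander direction relative to the user's gaze
+     (negative = left, positive = right); distance: 'close'|'medium'|'far'."""
+     side = "left" if angle_degrees < 0 else "right"
+     # Bigger sphere = closer bystander; the sizes here are made up.
+     radius = {"close": 1.0, "medium": 0.6, "far": 0.3}[distance]
+     return side, radius
+
+ print(halo_cue(-40.0, "close"))  # -> ('left', 1.0)
+ ```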
90
+
91
+ Radar. We selected Radar as a means to offer a quick overview of bystanders' presence for VR users to quickly examine more than the minimal information about bystanders (e.g., the number of bystanders, and how far they are) while maintaining the immersion. Kudo et al. presented the radar view as a way to display an "overview" of bystander locations without breaking the immersion significantly [20]. We adopted this representation and showed it in the top-left corner of the VR user's view (See Figure 2e). Red dots represent bystanders' precise locations relative to the VR user's position (See Figure 2f).
92
+
93
+ Live View. As a tool to see detailed information about bystanders' presence, we designed 'Live View' that displays the camera view of bystanders. In this case, the VR user prioritizes acquiring information about bystanders over maintaining a high level of immersion in VR, e.g., due to highly sensitive tasks or an increased possibility of strangers in the vicinity. Willich et al. demonstrate displaying a passthrough video of passersby in the VR view by using a Microsoft
94
+
95
+ ![01963dfa-590f-7d81-a7bb-6f41d2e23974_3_231_143_1333_1213_0.jpg](images/01963dfa-590f-7d81-a7bb-6f41d2e23974_3_231_143_1333_1213_0.jpg)
96
+
97
+ Figure 3: Vice VRsa’s interactions can consider two stakeholders: (1) VR users; and (2) bystanders. Depending on the VR user’s privacy mode settings (Green, Yellow, Orange, Red), representations both in and outside VR change accordingly.
98
+
99
+ Kinect V2 depth sensor [51]. To achieve a similar effect, we use a 360-degree camera and stream a cropped video view of the detected bystanders. Our aim with this work was to demonstrate the concepts of Vice VRsa, rather than a production-ready implementation. Therefore, using a 360-degree camera offers two benefits over a VR headset's built-in passthrough camera. First, it allows users to see the full surrounding area, unlike front-facing recording by the passthrough camera. Second, the built-in passthrough feature does not provide API access to the raw camera feed. We, therefore, used a 360-degree camera to do image processing on the camera feed. The 360-degree camera and the headset's position are physically aligned through a custom 3D printed holder, therefore the camera feed appears as if directly recorded through the head-mounted display's (HMD) cameras. For this view, the system not only tracks a bystander's location but also records the live camera image.
100
+
101
+ #### 3.2.2 Awareness Cues Outside VR
102
+
103
+ Outside VR, the visual cues indicate to the bystander what information the VR system is recording about them. The representations comprise (a) a color mode indicator showing the VR user's desired level of privacy; and (b) an accompanying public display (Figure 3, right column) providing details on the user's activity and what is being recorded about their surroundings. Similar to inside-VR representations, the degree of information about bystander presence varies both in the color indicator and the public display.
104
+
105
+ Color Indicator. A VR user's cameras could record bystanders even over a distance, as long as there is a line of sight. The color mode indicators aim to provide an awareness of the VR user's selected privacy (and thus recording) at a distance within line of sight. A projection on the floor is a direct indication of the user's selected privacy mode and is in direct proximity of the VR user whom it concerns (Figure 4d). An LED-enabled vest (Figure 4e) aids this by making the mode visible at even greater distances. The floor projection and LED vest could also be replaced or aided by additional indicator lights directly on the HMD, and are used interchangeably.
106
+
107
+ Public Display. The public display is set up near the VR user's "play" area and visible to passersby (Figure 4a). The public display is split into two parts: the top half shows details on the activity the VR user is doing; and the bottom half shows what the system is recording about the user's surroundings (Figure 3 right column). The contents of both parts are controlled by the VR users' desired privacy level, as described next.
108
+
109
+ ![01963dfa-590f-7d81-a7bb-6f41d2e23974_4_227_143_1339_493_0.jpg](images/01963dfa-590f-7d81-a7bb-6f41d2e23974_4_227_143_1339_493_0.jpg)
110
+
111
+ Figure 4: Vice VRsa setup (a) consists of four components: (b) Meta Quest 2 with Ricoh Theta S mounted on top; (c) public display; (d) projection on the floor; and (e) a vest attached with a series of LED strips.
112
+
113
+ ### 3.3 Color Modes Indicating Desired Levels of Privacy
114
+
115
+ The VR user can set their desired level of privacy by choosing between no (green), low (yellow), medium (orange), and high (red) privacy as shown in Figure 3. Each mode results in a combination of the previously described awareness cues inside VR, and corresponding visuals outside VR (showing a VR user's activity and what is being recorded). In this section, we describe the representations and interactions in and outside VR for each of the four modes. Note: The modes defined here represent example settings, but we anticipate different users could configure the behavior of their own privacy settings to best match their working contexts, such as the work they conduct, the physical space they are in, the people they share the space with, and their subjective privacy perception.
116
+
117
+ #### 3.3.1 Green Mode: No Privacy
118
+
119
+ The green mode is designed for the context where a VR user does not need any privacy, and wants to minimize distractions from any awareness cues, for example, when playing a game.
120
+
121
+ Inside VR. In the green mode, no awareness cues about bystanders are provided inside VR (Figure 3, left column, "Green" mode). The system does not track any information about bystanders.
122
+
123
+ Outside VR. Because no privacy is desired, the VR user's full VR content is streamed to the public display, allowing bystanders to see what the VR user sees (Figure 3 right column). The color indicators (the projector and LED vest illuminate green) signal the user's privacy mode at a distance.
124
+
125
+ #### 3.3.2 Yellow Mode: Low Level of Privacy
126
+
127
+ The yellow mode is for situations where the VR user needs a low level of privacy, yet wants to maintain a general awareness of bystanders' presence. The VR user's yellow mode aims to provide an awareness of general presence and proximity of bystanders.
128
+
129
+ Inside VR. The Halo appears on the side(s) of the screen corresponding to the bystander's location in relation to the user. Its size implies the bystander's distance: the larger the circle, the closer the bystander.
130
+
131
+ Outside VR. The color indicators change to yellow, which signals to bystanders that minimal information about their location is being monitored by the VR user. The public display (Figure 3 right column) does not show the full VR content anymore but instead shows descriptive details on the VR user's activity, for example, the name of the application they are using, or if they are in a meeting, the meeting invite's title and duration. The bottom part of the display shows what kind of information the system records about the environment: the bystander's rough direction and distance.
132
+
133
+ #### 3.3.3 Orange Mode: Medium Level of Privacy.
134
+
135
+ The orange mode is intended for situations where the VR user is in need of a medium level of privacy. For example, they do not mind if bystanders know that they are in a meeting, but do not want any details known about it.
136
+
137
+ Inside VR. In the orange mode, the VR user receives more details about bystanders' location in the form of the radar view. This provides them with more precision on bystanders' locations and distances. In addition, the Halo is also shown.
138
+
139
+ Outside VR. The color indicators change to orange, which informs bystanders that more information about their presence is being observed. The top half of the public display shows a general notice about the type of activity the VR user is doing, for example, that they are in a meeting or playing a game, without revealing which one. The bottom half shows a notice that the bystanders' location is being recorded, as well as a duplicate of the radar view, similar to inside VR.
140
+
141
+ #### 3.3.4 Red Mode: High Level Privacy Needed.
142
+
143
+ The red mode represents the highest level of privacy, where the VR user wants to be aware of bystanders and does not want any information about the VR tasks to be revealed.
144
+
145
+ Inside VR. As the information in the VR tasks is sensitive, the VR user needs to check who the bystanders in proximity are, to make an informed decision on whether it is safe to continue their activity or whether they should be mindful of their conversations and actions. Therefore, in this mode, the previously described live view is shown, which provides a window into the real world. Since this is in addition to the Halo and Radar View, the VR user is first made aware of a bystander's presence and location through these. Once aware, the user can then turn their head in the direction the Halo/Radar indicates to look into the real world using the live view.
146
+
147
+ Outside VR. The color indicators change to red and the public display shows a warning sign on the top part with a request for privacy. The screen's bottom half shows the live view of the 360 camera, where bystanders can see themselves (Figure 3 right column). This enables alerting the bystanders that the VR user needs a high level of privacy, and that the system actively records the bystander.
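+ Summarizing the four modes above, the cue combinations of Figure 3 can be read as a simple lookup (an illustrative encoding; the structure and names are ours, not the system's code):
+
+ ```python
+ # Illustrative encoding of the four privacy modes and the awareness cues
+ # each one activates, following Figure 3.
+ PRIVACY_MODES = {
+     "green":  {"inside_vr": [],                        "public_display": "full VR stream"},
+     "yellow": {"inside_vr": ["halo"],                  "public_display": "activity details + rough location"},
+     "orange": {"inside_vr": ["halo", "radar"],         "public_display": "activity type + radar duplicate"},
+     "red":    {"inside_vr": ["halo", "radar", "live"], "public_display": "privacy warning + live 360 view"},
+ }
+
+ def cues_for(mode: str) -> dict:
+     return PRIVACY_MODES[mode]
+
+ print(cues_for("orange"))
+ ```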
148
+
149
+ ### 3.4 Vice VRsa Implementation
150
+
151
+ The Vice VRsa prototype, consists of a VR headset with an attached 360-degree camera for tracking bystanders, as well as a privacy notice (in our implementation an external display) and privacy awareness indicators (here an LED vest and an overhead projector).
152
+
153
+ #### 3.4.1 Inside VR Representation
154
+
155
+ For the VR user, there are two hardware components: the VR headset and an external 360-degree camera used to recognize and track bystander presence. We use a Meta Quest 2 [28] as the VR headset and a Ricoh Theta S [38] as the 360-degree camera. Current VR headsets allow users to activate a passthrough and see a live view of their surroundings. However, the passthrough view is only either entirely on or off (as discussed in section 2.1), and the APIs do not allow third-party developers to access the passthrough image. We therefore mounted the 360-degree camera on top of the VR headset to synchronize the head and camera orientations (see Figure 4b). This allowed us to use image processing on a real-world view for bystander detection. The camera and VR headset are tethered to a PC via USB. To detect bystanders outside the VR environment, the PC receives the live video feed from the camera and processes it using the computer vision algorithm YOLO [36] running in Processing.
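+
+ As a rough illustration of this detection step, the sketch below flags people in the camera feed. The authors ran YOLO [36] inside Processing; here we substitute the ultralytics Python package and OpenCV as stand-ins, so the package choice, model file, and camera index are assumptions rather than details of the actual prototype.
+
+ ```python
+ # Minimal person-detection loop on the 360-degree camera feed (a sketch).
+ import cv2
+ from ultralytics import YOLO
+
+ model = YOLO("yolov8n.pt")      # any person-capable YOLO model (assumed)
+ cap = cv2.VideoCapture(0)       # device index of the 360 camera (assumed)
+
+ while cap.isOpened():
+     ok, frame = cap.read()
+     if not ok:
+         break
+     # COCO class 0 is "person"; keep only those detections.
+     result = model(frame, classes=[0], verbose=False)[0]
+     bboxes = result.boxes.xyxy.tolist()  # one [x1, y1, x2, y2] per bystander
+     # ...the boxes feed the angle/distance estimation described next...
+ ```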
156
+
157
+ Depending on the VR user's privacy setting, representations of the bystanders change inside VR. We created the representations in a virtual environment using Unity. Bystanders' location data is sent from Processing to Unity via JSON communication, and the 360-degree video is streamed directly from the camera feed. The location data of detected bystanders consists of two types of information: (1) the angle difference from the user's current orientation; and (2) the distance. For our prototype, as a proof of concept, we estimate the distance from the y-axis coordinate of the bottom pixel of the detected body's bounding box, assuming that the farther away a bystander is, the higher in the frame the bounding box's bottom pixel sits. In the Unity application, this bystander data is used to display the system's awareness cues.
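+
+ The per-bystander datum could be derived roughly as follows. The frame size, the normalized distance value, the JSON schema, and the TCP hand-off are our assumptions; the paper only specifies the angle/distance encoding and that the data travels from Processing to Unity as JSON.
+
+ ```python
+ # Sketch of turning a bounding box into the (angle, distance) datum and
+ # shipping it to Unity as JSON; schema and transport are assumed.
+ import json
+ import socket
+
+ FRAME_W, FRAME_H = 1280, 640    # equirectangular frame size (assumed)
+
+ def bystander_datum(bbox):
+     x1, y1, x2, y2 = bbox
+     # In an equirectangular frame, horizontal pixel position maps linearly
+     # to the angle around the camera: 0 px -> -180 deg, FRAME_W -> +180 deg
+     # relative to the user's current orientation.
+     angle = ((x1 + x2) / 2 / FRAME_W) * 360.0 - 180.0
+     # Proof-of-concept distance proxy: the higher in the frame the box's
+     # bottom edge sits (smaller y2), the farther away the bystander.
+     distance = 1.0 - (y2 / FRAME_H)   # 0 = at the camera, 1 = far away
+     return {"angle": angle, "distance": distance}
+
+ def send_to_unity(bboxes, host="127.0.0.1", port=9000):
+     payload = json.dumps({"bystanders": [bystander_datum(b) for b in bboxes]})
+     with socket.create_connection((host, port)) as conn:
+         conn.sendall(payload.encode("utf-8"))
+ ```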
158
+
159
+ #### 3.4.2 Setup for Outside VR Representation
160
+
161
+ The setup for the outside VR representation consists of two components: a public display and color mode indicators (comprising a vest with LEDs and a projector). For the color indicators outside VR, we use a projector mounted on a tripod, connected to the PC via HDMI. The LED vest has an RGB LED strip woven into the garment, driven by an Arduino microcontroller that is connected to the PC and controlled via serial communication.
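+
+ A mode change might reach the vest as a one-byte command, along the lines of the pyserial sketch below. The port name, baud rate, and byte protocol are assumptions about the Arduino firmware, which the paper does not detail.
+
+ ```python
+ # Hedged sketch of driving the LED vest over serial; protocol is assumed.
+ import serial  # pyserial
+
+ MODE_BYTES = {"green": b"G", "yellow": b"Y", "orange": b"O", "red": b"R"}
+
+ def set_vest_mode(mode: str, port: str = "/dev/ttyUSB0") -> None:
+     with serial.Serial(port, baudrate=9600, timeout=1) as conn:
+         conn.write(MODE_BYTES[mode])  # firmware maps the byte to a strip color
+ ```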
162
+
163
+ ## 4 SCENARIO
164
+
165
+ To illustrate how Vice VRsa can support bidirectional awareness of VR users and bystanders, we describe the following scenario, in which Victoria (a VR user) and Bob (a bystander) work for different companies while being physically co-located in a co-working space.
166
+
167
+ ### 4.1 Victoria's Perspective: VR User
168
+
169
+ Green Mode Victoria wears a VR headset and plays a game during a work break. While gaming, she does not care if someone is watching her. In fact, she wants to encourage others to join her in playing the game. Thus, she sets Vice VRsa's privacy mode to the green mode, and no representation of any bystander is displayed inside her VR space (Figure 5a).
170
+
171
+ Yellow Mode After finishing the game, she joins her team's weekly social meeting through VR and starts a casual conversation. This meeting is casual and nothing sensitive is discussed. While engaging in the conversation, she wants to know if someone is around in the physical space where she is located, as she worries that talking or laughing loudly might disturb people nearby. She therefore adjusts the privacy mode to the yellow mode, which keeps her informed about bystanders' presence. A large Halo appears on the left-hand side, telling her that there is someone close by. Thus, she decides to keep her voice down (Figure 5b).
172
+
173
+ Orange Mode After the team social meeting, she joins her team meeting where several important items are discussed. She not only wants to be mindful of disturbing others but also to have a thorough understanding of her surroundings (e.g., how many people are nearby and where they are). She changes her privacy mode to the orange mode, which activates the radar view. She can see there is one bystander close by on her left. While she feels okay that there is someone nearby, she chooses her words carefully so as not to disclose too much information about the discussion items (Figure 5c).
174
+
175
+ Red Mode After her team meeting, she needs to have a confidential meeting with two team members about a new car design they have been working on. Due to the sensitive nature of the information, she wants to make sure no unauthorized person gains any knowledge about the confidential information. She therefore switches to the red mode, which not only provides her with awareness of bystanders' presence and distance through the Halo and Radar but also allows her to see outside VR through the live view whenever there is a bystander. After a Halo appears on her left, she turns her head and sees a stranger nearby. She politely requests privacy, which the bystander adheres to. Victoria can now be certain that no one is around and continue her confidential design review (see Figure 5d).
176
+
177
+ ### 4.2 Bob's Perspective: Bystander
178
+
179
+ Green Mode When Bob is walking around the office, he finds Victoria wearing a VR headset and vigorously throwing her arms around. He sees her engulfed in green light. He understands this as an invitation to come closer. He stops close by and watches Victoria play a game in VR, which is being relayed on the nearby public display (Figure 5e). He hopes to join her for a round of games.
180
+
181
+ Yellow Mode After a few moments, Bob notices that Victoria has sat down to have a conversation. He sees the lights around her change to yellow, and the public display shows that Victoria is having her weekly team social meeting, which he believes is not a sensitive meeting. He also sees on the display that his rough location is detected as 'close left'. While he is now aware of the change and of the system recording information, he does not feel that he needs to leave (see Figure 5f).
182
+
183
+ Orange Mode A few minutes later, Bob notices that the colored lights have changed to orange. He becomes more cautious and hears Victoria talk about work topics. Bob notices that the public display shows his precise location on a radar view in relation to Victoria's position, and he sees that she is no longer in the team social meeting but cannot see any meeting details. He feels that he should not stand close enough to overhear her conversation. Thus, he takes a few steps back from her and continues working on his phone (Figure 5g).
184
+
185
+ Red Mode Suddenly, Bob notices that the lights around Victoria turn red. Right after that, Victoria turns her head towards him. She speaks to him and asks him for privacy. Additionally, Bob sees that the public display shows a warning sign and a message underneath that says 'Privacy Please!'. Bob understands that she needs complete privacy and walks away immediately, giving Victoria the requested space (Figure 5h).
186
+
187
+ ## 5 INITIAL SUBJECTIVE FEEDBACK
188
+
189
+ The focus of this work is to evaluate a newly introduced concept rather than to assess the system's usability. We therefore wanted to gain initial insights and subjective feedback on Vice VRsa's utility and usefulness through interviews with expert VR users. The goal was to learn how Vice VRsa could increase awareness about bystanders, whether sharing what is being recorded about the environment could be useful for bystanders, and how Vice VRsa's components would be understood. The study underwent our institution's internal ethics review process. We recruited expert VR users via email from within our institution to participate in a guided walk-through of the functionality of Vice VRsa as demonstrated through video scenarios. Sessions lasted one hour and were conducted via Zoom. Participants were compensated with the equivalent of 75 USD. During the study, we asked participants about prior experience conducting professional work in VR and how they assert their privacy needs in such situations. We showed them videos of the bystander's as well as the VR user's perspective of each of the four privacy modes (green, yellow, orange, red), stopping at several points throughout the videos to ask questions.
190
+
191
+ ![01963dfa-590f-7d81-a7bb-6f41d2e23974_6_224_151_1348_570_0.jpg](images/01963dfa-590f-7d81-a7bb-6f41d2e23974_6_224_151_1348_570_0.jpg)
192
+
193
+ Figure 5: We illustrate a scenario to depict how Vice VRsa's interactions can be used based on the privacy mode in various contexts, from the perspective of a VR user (a, b, c, d) and a bystander (e, f, g, h).
194
+
195
+ We recruited seven participants (1 female, 6 male; 29-53 years old, M = 38.5, SD = 6.97) with an average of 3.7 years of professional VR experience (range 1-5 years, SD = 1.39). Except for one, all participants were working in VR weekly or daily at the time, and their roles ranged from XR product, UX, and instructional content designers, to XR customer success managers and XR researchers. Their professional experience with VR ranged from developing concept designs, training people, or demonstrating VR applications, to using VR systems for architectural design tasks.
196
+
197
+ ### 5.1 Bystander's perspective
198
+
199
+ From the bystander's perspective, all participants agreed that awareness of the VR user's desired level of privacy was useful for regulating their behavior (i.e., by seeing the color indicators from a distance and the details on the public display up close). However, while all participants stated that green and red would be universally understood, they also stated that the color coding would need to be learned before being fully understood. All participants agreed that getting information about the VR user's activity was helpful, primarily to know if they could be interrupted and, in the case of the green mode, to find opportune moments to do so. P7 compared this to regular desktop usage in a shared office, where "you can see what everyone is doing and if they're on a call" (P7). Interestingly, six participants also stated that the more privacy-sensitive a task is, the less likely they would be to interrupt the VR user, and this is also reflected in their behavior toward interrupting VR users. While all agreed that they would regulate their behavior depending on the VR user's privacy settings, 4/7 also stated that the system would entice them to watch the VR user. This points to an interesting dilemma, where the addition of color indication and a public display could create a honeypot effect, opposing the purpose of the system.
200
+
201
+ #### 5.1.1 Awareness about Unwanted Recordings
202
+
203
+ Only 3/7 participants stated that, as bystanders, it was useful to know what was being recorded about them. In fact, most would not mind having their distance or location tracked in general, but as soon as their video feed would appear inside VR, they would want to know about this and would find the system useful. As one participant put it: "[As a bystander] I don't care about position, but if [the VR users] are watching me, they are intruding my own privacy. Especially when recording my face, this would be going a little too far."
204
+
205
+ In sum, as bystanders, participants appreciated the information on the public display for awareness of the VR user's activity, to know when to regulate their presence, and to identify opportune moments for interruptions. While they noted that the visuals on the public display would involve a learning curve, they stated that several aspects of the system could be easily understood, especially in the most restrictive (privacy warning sign) and most permissive (live stream of VR content) modes.
206
+
207
+ ### 5.2 VR user's perspective
208
+
209
+ From the VR user's perspective, 6/7 participants would find Vice VRsa useful for awareness of their surroundings, and 5/7 could imagine using the system. Several participants found Vice VRsa unnecessary at this stage, as they mostly work from home and would hear others enter their office. However, all agreed that when working in a shared office or public space, bringing situational awareness into VR and sharing one's privacy needs with bystanders would be useful.
210
+
211
+ #### 5.2.1 Awareness about Unwitting Monitoring
212
+
213
+ While most participants stated that their usage of VR in a public/shared space is usually for non-confidential tasks (e.g., trade shows or playing games), they said that for any privacy-sensitive tasks, they would move out of the public space (e.g., by going into a closed meeting room). Most agreed that they would continue doing so for highly sensitive tasks (such as performance conversations (P4), client meetings (P5), or confidential design reviews (P2, P1)). On the contrary, P3 stated that even while working on sensitive prototypes, they would not always move to a separate space, as "[bystanders] can't see what I'm seeing and cannot really make out what I'm talking about [from the] fragments they catch". However, several participants (P4, P7) also stated that they are self-conscious about their actions when using VR and would therefore find the awareness about bystanders useful to "[not] look silly" (P7).
214
+
215
+ Using Vice VRsa's red mode to identify who is around was highlighted as most useful. This allows VR users to decide if they would need to break out of VR to negotiate their privacy needs or if they could ignore the bystander. P3 even pointed out that it would be good if the system could automatically identify if the bystander was trusted and not show any awareness cues if so.
216
+
217
+ #### 5.2.2 Distraction / Breaking Immersion
218
+
219
+ Four participants mentioned that they would find the visuals distracting, especially the green halo, as it grows directly in their peripheral vision. The radar view, however, was seen as less distracting because it "just sits there in the corner and I can ignore it [... and] the red dot doesn't suddenly grow into my face" (P6). While participants agreed that setting a VR user's privacy mode would be useful to communicate with bystanders, several were also unsure whether the yellow and orange modes were needed. They found the green mode useful to communicate their actions to bystanders for the purpose of sharing the experience (e.g., gaming or teaching) and creating awareness of what they were doing, so that bystanders could adjust their behavior (e.g., knowing whether the VR user was interruptible and, if so, identifying an opportune moment for an interruption). The red mode was seen as useful to clearly express the need for privacy.
220
+
221
+ ## 6 DISCUSSION
222
+
223
+ We explored the design space of balancing VR user and bystander privacy and awareness through the implementation and evaluation of Vice VRsa, which includes the development of several novel interactions to improve mutual awareness.
224
+
225
+ ### 6.1 Necessity for Privacy
226
+
227
+ One interesting finding from the expert feedback concerned the necessity of the system from the VR users' perspective. Several participants mentioned that if they were truly doing something private, they would either already be in a private space (e.g., at home) or would move from a public space to a closed meeting room. However, moving to a meeting room is not always possible due to room availability or the limited portability of necessary hardware (motion trackers, desktop computers, etc.), especially for higher-end VR setups. Additionally, many of the participants have largely been working from home in recent years, which may impact their sensitivity about privacy.
228
+
229
+ Similarly, not all of the experts interviewed considered bystander awareness about the tracking completely necessary, as they had no problem with the VR user knowing their presence. However, when they became aware that the VR headset cameras could record them without their knowledge, their perspective shifted to a more privacy-minded approach. The use of these devices is expected to become more commonplace, as happened with AR devices [47]. As the sensors become higher fidelity, the real-world privacy risks associated with their use will increase. If not carefully designed, VR hardware could be used to record individuals without their knowledge and erode trust between VR users and bystanders.
230
+
231
+ ### 6.2 Awareness vs. Intrusion
232
+
233
+ In our work, while the goal was to examine whether the concepts were clear and whether participants could understand the prototype's functionalities, several participants commented on the representations' implementation. For example, the halo was seen as too large and distracting, whereas the radar was seen as less intrusive. We believe that further refinements could be made to these techniques in practice. For instance, shrinking the halo or increasing its transparency could reduce the interruption to the main task.
234
+
235
+ Beyond visual feedback, other modalities could be considered as a way to be less intrusive, both for the VR user and for the bystander in the environment. For the user in VR, subtle haptic cues could indicate the direction and presence of a person, or air flow could provide the simulated sensation of a person walking by [39]. Spatial audio could perform a similar function, supplying information about bystanders through a potentially unused modality [33]. For the bystander, directional speakers could alert them as they walk into the area where they may be sensed, prompting them to look around for further cues about what might be recording them.
236
+
237
+ ### 6.3 A Privacy Arms Race
238
+
239
+ During our feedback sessions, several expert VR users pointed out that the system creates additional overhead by relying on the user to manually set the privacy mode. While, in this paper, we aimed to explore the design space of Vice VRsa, future work should explore how a privacy mode can be inferred from the user's activity. For example, a system could know that meetings are generally private. A system could go even further, identify who a bystander is, and automatically determine whether they are privy to the meeting's content. However, this creates an even bigger dilemma, where the system not only records video footage of the bystander but also connects their identity and context from additional databases. This presents another embodiment of the ever-present arms race between defenders of privacy and those who exploit information.
240
+
241
+ ### 6.4 Social Protocols Still Mediate Interactions
242
+
243
+ The interfaces explored with Vice VRsa still require humans to follow established social protocols. The awareness indicators for both bystanders and VR users are designed to replace or augment some of the natural perceptual and social cues that are lost when users enter VR, but they are not intended to prevent bad actors on either side. We have found that Vice VRsa can add value to VR/bystander interactions, and can be a tool to help support interpersonal communication in the face of a technological barrier.
244
+
245
+ ### 6.5 Ecological Validity
246
+
247
+ The focus of our initial study was to run a preliminary evaluation of a newly introduced concept rather than to evaluate the system's usability. To that end, our study goal was to demonstrate the system's workflow and illustrate the system as a 'demonstration', as suggested by Ledo et al. [22], and to gather feedback on the interaction concepts' potential usefulness. Accordingly, we conducted our evaluation as an interview study by showing videos that describe Vice VRsa's interactions. While participants could not directly interact with Vice VRsa, we found that its concepts were understood and that participants felt Vice VRsa could help improve the balance between bystanders' and VR users' privacy.
248
+
249
+ Furthermore, we chose to gather feedback from VR experts to understand whether and how any newly introduced privacy-related interactions could potentially jeopardize the overall usage of VR. Since privacy is a secondary concern for regular end-users [8], complicated interactions may discourage adoption [52]. In the future, it will be interesting to collaborate with privacy experts to assess and refine the interaction design, and to deploy Vice VRsa in the wild in a longitudinal study to examine its long-term effects. This would help close the gap between in-lab and field studies [24]. For example, prior work suggests that habituation over time could affect end-users' security and privacy behaviors [50].
250
+
251
+ ## 7 CONCLUSION
252
+
253
+ The immersive nature of VR leaves users vulnerable to surveillance by bystanders or threats to their physical safety. To alleviate this, users can use external cameras to view their surroundings. However, this might infringe on bystanders' privacy. In this work, we developed Vice VRsa, a framework and set of interactions to mediate the privacy of VR users and their bystanders. Vice VRsa transparently communicates presence, recording, and desired privacy through techniques inside VR (Halo, Radar, Live View) as well as to the bystander (color indicator and public display). In our preliminary evaluation with VR experts, we found that Vice VRsa could help address privacy concerns for both VR users and bystanders. We see Vice VRsa as an initial step towards addressing the emerging problem of mutual awareness between VR users and bystanders about their privacy in VR hybrid settings.
254
+
255
+ ## REFERENCES
+
+ [1] I. Ahmad, R. Farzan, A. Kapadia, and A. J. Lee. Tangible privacy: Towards user-centric sensor designs for bystander privacy. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2):1-28, 2020.
258
+
259
+ [2] R. Alharbi, M. Tolba, L. C. Petito, J. Hester, and N. Alshurafa. To mask or not to mask? balancing privacy with visual confirmation utility in activity-oriented wearable cameras. Proceedings of the ACM on interactive, mobile, wearable and ubiquitous technologies, 3(3):1-29, 2019.
260
+
261
+ [3] R. Balebako, F. Schaub, I. Adjerid, A. Acquisti, and L. Cranor. The impact of timing on the salience of smartphone app privacy notices. In Proceedings of the 5th Annual ACM CCS Workshop on Security and Privacy in Smartphones and Mobile Devices, pp. 63-74, 2015.
262
+
263
+ [4] P. Baudisch and R. Rosenholtz. Halo: a technique for visualizing offscreen objects. In Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 481-488, 2003.
264
+
265
+ [5] L. F. Cranor. Necessary but not sufficient: Standardized mechanisms for privacy notice and choice. J. on Telecomm. & High Tech. L., 10:273, 2012.
266
+
267
+ [6] T. Denning, Z. Dehlawi, and T. Kohno. In situ with bystanders of augmented reality glasses: Perspectives on recording and privacy-mediating technologies. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2377-2386, 2014.
268
+
269
+ [7] Y. Do, J. W. Park, Y. Wu, A. Basu, D. Zhang, G. D. Abowd, and S. Das. Smart webcam cover: Exploring the design of an intelligent webcam cover to improve usability and trust. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 5(4):1-21, 2021.
270
+
271
+ [8] P. Dourish, R. E. Grinter, J. Delgado De La Flor, and M. Joseph. Security in the wild: user strategies for managing security as an everyday, practical problem. Personal and Ubiquitous Computing, 8:391-401, 2004.
272
+
273
+ [9] B. Ens, T. Grossman, F. Anderson, J. Matejka, and G. Fitzmaurice. Candid interaction: Revealing hidden mobile and wearable computing activities. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology, pp. 467-476, 2015.
274
+
275
+ [10] S. Gaw, E. W. Felten, and P. Fernandez-Kelly. Secrecy, flagging, and paranoia: adoption criteria in encrypted email. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 591-600, 2006.
276
+
277
+ [11] C. George, P. Janssen, D. Heuss, and F. Alt. Should i interrupt or not? understanding interruptions in head-mounted display settings. In Proceedings of the 2019 on Designing Interactive Systems Conference, pp. 497-510, 2019.
278
+
279
+ [12] C. George, A. N. Tien, and H. Hussmann. Seamless, bi-directional transitions along the reality-virtuality continuum: A conceptualization and prototype exploration. In 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 412-424. IEEE, 2020.
280
+
281
+ [13] S. Ghosh, L. Winston, N. Panchal, P. Kimura-Thollander, J. Hotnog, D. Cheong, G. Reyes, and G. D. Abowd. Notifivr: exploring interruptions and notifications in virtual reality. IEEE transactions on visualization and computer graphics, 24(4):1447-1456, 2018.
282
+
283
+ [14] Google. Google glass. https://www.google.com/glass/start/ (Accessed on 09/12/2022).
284
+
285
+ [15] M. Gottsacker, N. Norouzi, K. Kim, G. Bruder, and G. Welch. Diegetic representations for seamless cross-reality interruptions. In 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 310-319. IEEE, 2021.
286
+
287
+ [16] J. Grubert, E. Ofek, M. Pahud, and P. O. Kristensson. The office of the future: Virtual, portable, and global. IEEE computer graphics and applications, 38(6):125-133, 2018.
288
+
289
+ [17] J. Häkkilä, F. Vahabpour, A. Colley, J. Väyrynen, and T. Koskela. Design probes study on user perceptions of a smart glasses concept. In Proceedings of the 14th international conference on mobile and ubiquitous multimedia, pp. 223-233, 2015.
290
+
291
+ [18] Snap Inc. Spectacles. https://www.spectacles.com/ (Accessed on 09/12/2022).
292
+
293
+ [19] M. Koelle, K. Wolf, and S. Boll. Beyond led status lights-design requirements of privacy notices for body-worn cameras. In Proceedings of the Twelfth International Conference on Tangible, Embedded, and Embodied Interaction, pp. 177-187, 2018.
296
+
297
+ [20] Y. Kudo, A. Tang, K. Fujita, I. Endo, K. Takashima, and Y. Kitamura. Towards balancing vr immersion and bystander awareness. Proc. ACM Hum. Comput. Interact., 5(ISS):1-22, 2021.
300
+
301
+ [21] M. Langheinrich. Privacy by design-principles of privacy-aware ubiquitous systems. In International conference on ubiquitous computing, pp. 273-291. Springer, 2001.
302
+
303
+ [22] D. Ledo, S. Houben, J. Vermeulen, N. Marquardt, L. Oehlberg, and S. Greenberg. Evaluation strategies for hci toolkit research. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-17, 2018.
304
+
305
+ [23] C. Mai, T. Wiltzius, F. Alt, and H. Hußmann. Feeling alone in public: investigating the influence of spatial layout on users' vr experience. In Proceedings of the 10th Nordic conference on human-computer interaction, pp. 286-298, 2018.
306
+
307
+ [24] F. Mathis, K. Vaniea, and M. Khamis. Prototyping usable privacy and security systems: Insights from experts. International Journal of Human-Computer Interaction, 38(5):468-490, 2022.
308
+
309
+ [25] F. Mathis, J. Williamson, K. Vaniea, and M. Khamis. Rubikauth: Fast and secure authentication in virtual reality. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-9, 2020.
310
+
311
+ [26] M. McGill, D. Boland, R. Murray-Smith, and S. Brewster. A dose of reality: Overcoming usability challenges in vr head-mounted displays. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 2143-2152, 2015.
312
+
313
+ [27] D. Medeiros, R. Dos Anjos, N. Pantidi, K. Huang, M. Sousa, C. Anslow, and J. Jorge. Promoting reality awareness in virtual reality through proxemics. In 2021 IEEE Virtual Reality and 3D User Interfaces (VR), pp. 21-30. IEEE, 2021.
314
+
315
+ [28] Meta. Meta quest 2. https://store.facebook.com/ca/quest/products/quest-2/ (Accessed on 09/12/2022).
316
+
317
+ [29] P. E. Naeini, S. Bhagavatula, H. Habib, M. Degeling, L. Bauer, L. F. Cranor, and N. Sadeh. Privacy expectations and preferences in an IoT world. In Thirteenth Symposium on Usable Privacy and Security (SOUPS 2017), pp. 399-412, 2017.
318
+
319
+ [30] E. Ofek, J. Grubert, M. Pahud, M. Phillips, and P. O. Kristensson. Towards a practical virtual office for mobile knowledge workers. arXiv preprint arXiv:2009.02947, 2020.
320
+
321
+ [31] J. O'Hagan, M. Khamis, M. McGill, and J. R. Williamson. Exploring attitudes towards increasing user awareness of reality from within virtual reality. In ACM International Conference on Interactive Media Experiences, pp. 151-160, 2022.
322
+
323
+ [32] J. O'Hagan, M. Khamis, and J. R. Williamson. Surveying consumer understanding & sentiment of vr. In Proceedings of the International Workshop on Immersive Mixed and Virtual Environment Systems (MMVE'21), pp. 14-20, 2021.
324
+
325
+ [33] J. O'Hagan and J. R. Williamson. Reality aware vr headsets. In Proceedings of the 9TH ACM International Symposium on Pervasive Displays, PerDis '20, p. 9-17. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3393712.3395334
326
+
327
+ [34] J. O'Hagan, J. R. Williamson, M. McGill, and M. Khamis. Safety, power imbalances, ethics and proxy sex: Surveying in-the-wild interactions between vr users and bystanders. In 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 211-220. IEEE, 2021.
328
+
329
+ [35] R. S. Portnoff, L. N. Lee, S. Egelman, P. Mishra, D. Leung, and D. Wagner. Somebody's watching me? assessing the effectiveness of webcam indicator lights. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 1649-1658, 2015.
330
+
331
+ [36] J. Redmon and A. Farhadi. Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767, 2018.
332
+
333
+ [37] M. Rettinger, C. Schmaderer, and G. Rigoll. Do you notice me? how bystanders affect the cognitive load in virtual reality. In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 77-82. IEEE, 2022.
334
+
335
+ [38] Ricoh Company, Ltd. Ricoh theta s. https://theta360.com/en/about/theta/s.html (Accessed on 09/12/2022).
336
+
337
+ [39] M. Rietzler, K. Plaumann, T. Kränzle, M. Erath, A. Stahl, and E. Rukzio. Vair: Simulating 3d airflows in virtual reality. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 5669-5677, 2017.
338
+
339
+ [40] A. Rossi and G. Lenzini. Transparency by design in data-informed research: A collection of information design patterns. Computer Law & Security Review, 37:105402, 2020.
340
+
341
+ [41] F. Schaub, R. Balebako, and L. F. Cranor. Designing effective privacy notices and controls. IEEE Internet Computing, 21(3):70-77, 2017.
342
+
343
+ [42] F. Schaub, R. Balebako, A. L. Durity, and L. F. Cranor. A design space for effective privacy notices. In Eleventh symposium on usable privacy and security (SOUPS 2015), pp. 1-17, 2015.
344
+
345
+ [43] D. Schneider, A. Otte, T. Gesslein, P. Gagel, B. Kuth, M. S. Damlakhi, O. Dietz, E. Ofek, M. Pahud, P. O. Kristensson, et al. Reconviguration: Reconfiguring physical keyboards in virtual reality. IEEE transactions on visualization and computer graphics, 25(11):3190-3201, 2019.
346
+
347
+ [44] V. Schwind, J. Reinhardt, R. Rzayev, N. Henze, and K. Wolf. Virtual reality on the go? a study on social acceptance of vr glasses. In Proceedings of the 20th international conference on human-computer interaction with mobile devices and services adjunct, pp. 111-118, 2018.
348
+
349
+ [45] A. L. Simeone. The vr motion tracker: visualising movement of nonparticipants in desktop virtual reality experiences. In 2016 IEEE 2nd Workshop on Everyday Virtual Reality (WEVR), pp. 1-4. IEEE, 2016.
350
+
351
+ [46] T. Starner. The challenges of wearable computing: Part 2. Ieee Micro, 21(4):54-67, 2001.
352
+
353
+ [47] Statista. Ar & vr adoption is still in its infancy, Oct 2022. https://www.statista.com/chart/28467/virtual-and-augmented-reality-adoption-forecast/ (Accessed on 02/14/2023).
354
+
355
+ [48] J. Steil, M. Koelle, W. Heuten, S. Boll, and A. Bulling. Privaceye: privacy-preserving head-mounted eye tracking using egocentric scene image and eye movement features. In Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, pp. 1-10, 2019.
356
+
357
+ [49] S. Stephenson, B. Pal, S. Fan, E. Fernandes, Y. Zhao, and R. Chatterjee. Sok: Authentication in augmented and virtual reality. In 2022 IEEE Symposium on Security and Privacy (SP), pp. 267-284. IEEE, 2022.
358
+
359
+ [50] J. Sunshine, S. Egelman, H. Almuhimedi, N. Atri, and L. F. Cranor. Crying wolf: An empirical study of ssl warning effectiveness. In USENIX security symposium, pp. 399-416. Montreal, Canada, 2009.
360
+
361
+ [51] J. Von Willich, M. Funk, F. Müller, K. Marky, J. Riemann, and M. Mühlhäuser. You invaded my tracking space! using augmented virtuality for spotting passersby in room-scale virtual reality. In Proceedings of the 2019 on Designing Interactive Systems Conference, pp. 487-496, 2019.
362
+
363
+ [52] A. Whitten and J. D. Tygar. Why johnny can't encrypt: A usability evaluation of pgp 5.0. In USENIX security symposium, vol. 348, pp. 169-184, 1999.
364
+
365
+ [53] J. R. Williamson, M. McGill, and K. Outram. Planevr: Social acceptability of virtual reality for aeroplane passengers. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-14, 2019.
366
+
367
+ [54] K.-T. Yang, C.-H. Wang, and L. Chan. Sharespace: Facilitating shared use of the physical space by both vr head-mounted display and external users. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, pp. 499-509, 2018.
368
+
369
+ [55] H. Zhu, W. Jin, M. Xiao, S. Murali, and M. Li. Blinkey: A two-factor user authentication method for virtual reality devices. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 4(4):1-29, 2020.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/gItvr7Xl66/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,253 @@
1
+ § VICE VRSA: BALANCING BYSTANDER'S AND VR USER'S PRIVACY THROUGH AWARENESS CUES INSIDE AND OUTSIDE VR
2
+
3
+ Category: Research
4
+
5
6
+
7
+ Figure 1: Due to the immersive VR experience, a VR user may not notice a bystander's presence, leaving the VR user subject to being monitored by bystanders without their knowledge. A VR user can use a VR headset's camera (a) to monitor their surroundings. However, conversely, this camera recording raises bystanders' privacy concerns, as they may be recorded without consent. We introduce Vice VRsa, which is designed to balance VR users' and bystanders' privacy by providing awareness cues to (b) the VR user about a bystander's presence and location (Radar, Halo, Live View) and (c) to the bystander about a VR user's privacy mode and what is being recorded about them through a color display (projection and LED vest) and a public display (c+d).
8
+
9
+ § ABSTRACT
10
+
11
+ The immersive experience of Virtual Reality (VR) disconnects VR users from their physical surroundings, subjecting them to surveillance from bystanders who could record conversations without consent. While recent research has sought to mitigate this risk (e.g., VR users can stream a live view of their surrounding area into VR), it does not address that bystanders are conversely being recorded by the VR stream without their knowledge. This creates a causality dilemma where the VR user's privacy-enhancing activities raise the bystander's privacy concerns. We introduce Vice VRsa, a system that provides awareness of bystander presence to VR users as well as a VR user's monitoring status to bystanders. This work seeks to provide a framework and set of interactions for considering mutual awareness and privacy for both VR users and bystanders. Results from preliminary interviews with VR experts suggest factors for privacy implications in designing VR interactions in public physical spaces.
12
+
13
+ Index Terms: Human-centered computing-Visualization-Visualization techniques-Treemaps; Human-centered computing-Visualization-Visualization design and evaluation methods
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ Virtual reality (VR) provides users with immersive experiences in entirely virtual spaces. A VR user's immersion in the virtual space disengages their sense of presence from the physical space surrounding them. Such disengagement subjects VR users not just to running into physical obstacles but also to being monitored, without their knowledge or consent, by others who are physically co-located [34], as VR users may be unaware of their surroundings. This can put users in vulnerable positions in their physical space (e.g., private conversations being overheard or even recorded by someone co-located, accidental collisions with physical obstacles, or other risks to their physical safety).
18
+
19
+ To alleviate such risks, VR users could activate the VR headset's passthrough camera to see their real surroundings. Additionally, researchers have previously explored various ways to make VR users aware of bystanders in their surroundings. For example, prior work demonstrated representations of the real world in the virtual environment by blending a camera feed with the virtual world [26, 51] or by bringing avatar representations of bystanders into VR [51].
20
+
21
+ However, these monitoring setups could in turn raise bystanders' privacy concerns, as the passthrough camera embedded in the headset monitors the VR user's surroundings in a physical space. Wearable cameras, such as those found in commercial VR headsets, pose a long-standing problem of unwanted surveillance [19, 46], as little awareness is provided to bystanders. To mitigate bystanders' privacy concerns in these VR hybrid settings (i.e., situations where a VR user is in a physical space where non-VR users might also be present), a camera activation indicator can be used. However, prior work suggests that an LED indicator may not be noticeable and understandable to end-users [6, 19, 48].
22
+
23
+ In this paper, we aim to level the playing field between VR users and bystanders by providing awareness to a VR user about a bystander's presence (VR user's awareness) as well as providing bystanders with awareness about what a VR user might see about them (bystander's awareness). To that end, we present Vice VRsa as an example of a broader concept: a system offering mutual awareness to VR users and bystanders about each other's monitoring status through inside- and outside-VR headset representations. Moreover, we designed Vice VRsa to allow both a VR user and a bystander to negotiate their desired levels of privacy, as the desired level of privacy is context-dependent [10]: a VR user may not care about being listened to during a casual chat in VR, but may be more mindful about who is around during a confidential meeting in VR. As a proof of concept, Vice VRsa provides a VR user with four modes to determine their desired level of privacy: none/green, low/yellow, medium/orange, high/red (see Figure 3). As the VR user raises the mode, they receive more granular information about their surroundings, while the information shared about their activities inside VR decreases. Concurrently, the bystander is informed about the VR user's desired level of privacy through color indicators, as well as about what the VR user is recording of the physical environment via an accompanying public display.
24
+
25
+ In summary, we contribute Vice VRsa, an instantiation of a framework that aims to improve both a VR user's and bystander's awareness of each other's monitoring status. Through our implementation, we demonstrate how Vice VRsa accommodates different privacy needs and how it allows bystander and VR users to negotiate their desired levels of privacy. Initial feedback on Vice VRsa's concept and system from expert VR users shows that the concept is easily understood and that experts find it promising to support their privacy needs in VR hybrid settings for both VR users and bystanders.
26
+
27
+ § 2 BACKGROUND AND RELATED WORK
28
+
29
+ The framework of Vice VRsa builds upon prior work from three areas: (1) VR users' privacy concerns against covert monitoring by bystanders; (2) balancing VR users' awareness about their bystander presence and bystanders' interruption; and (3) bystanders' privacy concerns against camera recordings without consent or knowledge. In the following subsections, we will outline our work's position in relation to prior work.
30
+
31
+ § 2.1 VR USERS' PRIVACY CONCERNS AGAINST BYSTANDERS IN PUBLIC SETTINGS
32
+
33
+ VR's immersive experience overrides the user's sense of presence in the physical space, putting them in a vulnerable position in terms of privacy. For instance, bystanders near the users could eavesdrop on their conversations without permission, or gain information by observing their interactions [25, 43, 49, 55]. Prior work also pointed out that a bystander could exploit a VR user's vulnerable state by recording video and/or audio of them without their knowledge or consent [31, 34]. Researchers have pointed to the need for interactions that address privacy concerns for VR users in public spaces such as shared offices [30], as onlookers might still gain sensitive information from VR users' actions [16]. Consequently, researchers have explored how to prevent shoulder surfers from inferring VR users' data entry in VR, for example, by preventing bystanders from observing or recording VR users' passcode-entry gestures with hand-held controllers [25, 55] or when typing on a keyboard [43].
34
+
35
+ VR users could take off their VR headset [51] or activate the headset's passthrough camera to see outside. However, a VR headset's passthrough only shows a live camera feed from the headset's front-facing camera, with no option to see to the sides or behind, and removing a headset interrupts any task and breaks the immersion. Moreover, because the passthrough feed only provides a full-screen view, VR users must pause their activity to check for bystanders' presence, even when they require only minimal information about it; VR users may just want to know if someone is nearby without recognizing who they are. To that end, prior work explores how to improve VR users' awareness of bystanders in proximity without breaking the immersion while helping them stay informed about their physical space. For example, researchers have demonstrated methods using various modalities: different visual cues such as avatars, passthrough videos, and radar views [13, 20, 26, 33, 45, 51]; auditory feedback [13, 33]; and text [33].
36
+
37
+ In our work, we adapted and modified various representations of bystander presence inside VR. Specifically, we explore how such representations can be used in privacy-related contexts and accommodate varying levels of privacy. Additionally, we consider a bystander's privacy against the VR device's camera recording.
38
+
39
+ § 2.2 BALANCING THE DISRUPTION BY AND AWARENESS OF BYSTANDER PRESENCE FOR VR USERS
40
+
41
+ Interventions from outside VR can disrupt the VR user's feeling of immersion. In particular, a bystander's interruption of a VR experience increases a VR user's cognitive burden and may even cause discomfort [26, 32, 37, 53]. George et al. found that a VR user is less likely to feel discomfort when interrupted from outside during a task switch inside VR (e.g., during an app transition) than in the middle of a VR task [11]. However, Mai et al. found that not knowing information about their surroundings can cause cognitive burden [23], while putting the user at risk of accidentally bumping into objects such as furniture, or of unwanted or abusive activity by bystanders [34]. As a result, users must constantly negotiate between the need for interruptions and the need for focus.
42
+
43
+ In addition to bystanders' interruptions, how bystanders are represented in VR environments also affects the VR user's immersion. For instance, Kudo et al. explored three different representations of a bystander's presence inside VR [20]. Their findings show that an avatar representation of a bystander was most effective, although more peripheral visualizations of bystanders preserved a VR user's immersion better. They emphasized the need for systems to use the bystander representation that is most appropriate for the level of urgency a given task requires [20]. Yang et al. present ShareSpace, which illustrates bystanders as virtual walls or obstacles, helping VR users avoid physically bumping into bystanders [54].
44
+
45
+ To handle the constant balance between immersion and interruption in VR, we build on prior work to create a framework and system that provides adjustable levels of awareness regarding bystander presence. Vice VRsa offers different bystander representations according to the VR user's desired level of privacy. We aim to give VR users agency over the granularity of the information they receive about bystanders' presence, designed to match their situational privacy needs.
46
+
47
+ § 2.3 BYSTANDER PRIVACY CONCERNS AGAINST WEARABLE CAMERAS
48
+
49
+ VR devices (e.g., a headset, controllers) have a multitude of sensors, including built-in cameras and microphones, which enable detecting and observing bystanders without their knowledge. This poses a threat to bystanders' privacy, as these sensors could unwittingly capture their directly identifiable (e.g., face) or otherwise personal information (e.g., private conversations), causing social friction [2, 17, 46, 48]. Transparency about the camera recording status can reduce this friction. Commodity wearable VR devices (e.g., Quest) or Augmented Reality (AR) glasses (e.g., Google Glass, Snap Spectacles) have an LED indicating to bystanders whether the camera is currently in use [14, 18]. However, such LED indicators are not easily noticeable and could even confuse bystanders or not be understood at all [19, 35]. To overcome this, researchers have sought ways to avoid undesirable camera capture and to transparently communicate the camera recording status to bystanders [2, 48]. For example, Alharbi et al. found that the level of obfuscation of camera capture could affect the level of bystanders' privacy concerns about unwanted capture [2]. Also, PrivacEye demonstrated a way to improve noticeability and understandability by using a physical cover that blocks the camera lens when the camera is not in use [48].
50
+
51
+ Unlike prior work that addressed privacy concerns about AR, there has been little work regarding the privacy of a VR user's bystander. Schwind et al. found no evidence that bystanders of VR users have privacy concerns about being recorded. However, they also point out that privacy concerns with AR glasses only came about with increased popularity, which in turn led to the reduced social acceptability of these devices [44]. In our work, we extend prior work to consider bystander privacy for VR by providing awareness about what a VR user is recording about their physical space.
52
+
53
+ § 3 VICE VRSA
54
+
55
+ Vice VRsa is a framework and set of interactions that makes VR users aware of bystanders' presence and bystanders aware of VR users' recording of their surroundings, enabling both sides to negotiate their privacy needs. Next, we discuss the design considerations and interactions of Vice VRsa.
56
+
57
+ § 3.1 DESIGN CONSIDERATIONS FOR VICE VRSA
58
+
59
+ We account for two factors to design Vice VRsa: (1) desired privacy depending on contexts and (2) privacy notice timing.
60
+
61
+ § 3.1.1 BALANCING PRIVACY AND AWARENESS FOR VR USERS AND BYSTANDERS
62
+
63
+ Inside VR Representations for VR Users' Privacy against Bystanders. People have different levels of desired privacy depending on their context and current situation [10, 29]. For example, when a VR user performs an authentication gesture to log in to a VR application and does not want someone to watch their gestures, they may want to check outside to see if anyone is around them. Conversely, when they play a game and do not mind if someone is watching, they may not want to see outside their virtual environment, so as not to break their immersion. Therefore, a system should offer different levels of awareness according to the desired levels of privacy, which echoes prior work's findings [12, 27].
64
+
65
66
+
67
+ Figure 2: The Halo conveys (1) distance and (2) direction. (a, b, c, d) The bigger the sphere, the closer the bystander. The sphere appears on the left/right based on the bystander's position relative to the user. (e, f) The radar view shows the bystander's precise location in VR.
68
+
69
+ Outside VR Representations for Bystanders' Privacy against VR Users' Monitoring. There is a longstanding problem of cameras in public spaces, with people expressing concerns about being recorded without knowledge or consent [6, 48]. Even though commodity VR/AR wearables have an LED indicator associated with their camera activation, prior work found that the LED indicator is unnoticeable and confusing to end users [19, 35]. One way to address such concerns is to build trust [1, 7] by improving awareness of the camera's recording status [9]. This could, in turn, help people take further action if they did not want to be recorded. However, commercial VR headsets contain a multitude of cameras, allowing the VR user to observe or record their physical surroundings, and this will only increase with future generations of hardware. To mediate trust between the bystander and the VR user, the system should therefore provide awareness to bystanders of whether and what the VR user is recording, as well as an awareness of the VR user's activities inside VR, allowing bystanders to regulate their behavior to accommodate the user's desired level of privacy.
70
+
71
+ § 3.1.2 TIMELY NOTICES FOR PRIVACY AWARENESS
72
+
73
+ Understanding when and how personal information is collected helps people protect their private data and avoid having sensitive data tracked, monitored, or recorded without their knowledge [5, 21, 41]. As a result, privacy notices have become an essential part of interactive systems, as they inform users about data collection and allow them to make their own privacy decisions [5]. Researchers have identified 'timing' as an important factor of the privacy notice for lowering oversight and promoting transparent communication about data collection [3, 40, 42]. For example, a privacy notice that appears during smartphone app use is less likely to be overlooked than one displayed at install time [3]. Therefore, it is critical to find the 'right' moment and duration for keeping a user informed about data collection.
74
+
75
+ This motivates us to design Vice VRsa to provide a privacy notice during VR use to both the user and bystanders. The notice level and amount of information shown depend on the amount of information that is being recorded. For example, if the VR user only wants to know about the presence of any bystanders, we show a less prominent notice to the bystander. If, however, the VR user wants to see a full video feed of any bystanders, the notice carries more urgency and details the information that is being recorded about the environment. We show notices simultaneously inside and outside VR during the monitoring, providing awareness to both the VR user and bystanders so they can negotiate their desired level of privacy.
76
+
77
+ § 3.2 VICE VRSA INTERACTIONS
78
+
79
+ We showcase a set of interactions of Vice VRsa aligned with our design considerations. The system setup comprises two parts: (1) a representation inside VR that provides awareness about bystanders according to the user's desired level of privacy (Figure 3, left column); and (2) representations outside VR, including (a) color mode indicators showing the user's set privacy level and (b) an accompanying public display showing information about the VR user's activity and the information that is currently being recorded about the surroundings (Figure 3, right column).
80
+
81
+ Vice VRsa operates in four privacy modes, each corresponding to a desired level of the VR user's privacy. In each mode, various awareness cues inside and outside VR are used, which act as privacy notices during VR use. We discuss how the modes are defined and which representations each mode uses in the following subsections.
82
+
83
+ While various modalities (e.g., visual, sounds [31]) could be applied, we focus on visual feedback as an example of a large category that demonstrates how to design representations both in and outside VR.
84
+
85
+ § 3.2.1 AWARENESS CUES INSIDE VR
86
+
87
+ Inside VR, we chose the representations depending on how much information they convey about bystander presence: (a) a Halo indicating a bystander's presence and distance; (b) a Radar showing a bystander's position on a radar map; and (c) a Live View showing a live camera view of a bystander at their position in the physical space. Each of these views shows an increasing amount of information about the bystander, and therefore also requires recording more information about the surroundings.
88
+
89
+ Halo. We chose Halo as a way to provide minimal information about bystanders' presence while keeping the disruption to VR immersion low. Prior work shows that representations with primitive designs (e.g., sphere) for bystanders provide minimal interruption to end-users [15]. To that end, we adopted the Halo approach which indicates the location of off-screen objects on a map application [4] to show the direction of a bystander's location and distance. The spheres (See Figure 2a-d), entering from the side of the screens, encode two pieces of information: the size of the sphere indicates the distance of the bystanders from the VR user, and the location of the sphere indicates the general direction (left or right) of the bystanders. For this view, the system tracks a rough distance (close, medium, far) and position (left, right) of the bystander.
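+
+ A minimal sketch of how tracked bystander data could drive the Halo, assuming a normalized distance in [0, 1] and a signed angle (negative meaning left); the thresholds and sphere scales are our illustrative choices, not values from the paper.
+
+ ```python
+ # Map rough bystander data to Halo parameters (side and sphere size).
+ def halo_params(angle_deg: float, distance: float):
+     side = "left" if angle_deg < 0 else "right"
+     if distance < 0.33:
+         scale = 1.0    # close: large sphere
+     elif distance < 0.66:
+         scale = 0.6    # medium distance
+     else:
+         scale = 0.3    # far: small sphere
+     return side, scale
+ ```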
90
+
91
+ Radar. We selected Radar as a means to offer a quick overview of bystanders' presence for VR users to quickly examine more than the minimal information about bystanders (e.g., the number of bystanders, and how far they are) while maintaining the immersion. Kudo et al. presented the radar view as a way to display an "overview" of bystander locations without breaking the immersion significantly [20]. We adopted this representation and showed it in the top-left corner of the VR user's view (See Figure 2e). Red dots represent bystanders' precise locations relative to the VR user's position (See Figure 2f).
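+
+ Placing a bystander dot on such a radar amounts to a polar-to-screen conversion, as in the sketch below; the radar radius and the screen-coordinate conventions are assumptions.
+
+ ```python
+ # Convert a bystander's polar position (relative to the VR user) into
+ # x/y offsets from the radar's center; screen y is assumed to grow downward.
+ import math
+
+ def radar_dot(angle_deg: float, distance: float, radius_px: float = 50.0):
+     r = min(distance, 1.0) * radius_px   # clamp to the radar's edge
+     theta = math.radians(angle_deg)
+     x = r * math.sin(theta)              # 0 deg = straight ahead (up)
+     y = -r * math.cos(theta)
+     return x, y
+ ```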
92
+
93
+ Live View. As a tool to see detailed information about bystanders' presence, we designed the 'Live View', which displays a camera view of bystanders. In this case, the VR user prioritizes acquiring information about bystanders over maintaining a high level of immersion in VR, e.g., due to highly sensitive tasks or an increased likelihood of strangers in the vicinity. Willich et al. demonstrate displaying a passthrough video of passersby in the VR view by using a Microsoft Kinect V2 depth sensor [51].
94
+
95
96
+
97
+ Figure 3: Vice VRsa’s interactions can consider two stakeholders: (1) VR users; and (2) bystanders. Depending on the VR user’s privacy mode settings (Green, Yellow, Orange, Red), representations both in and outside VR change accordingly.
98
+
99
+ To achieve a similar effect, we use a 360-degree camera and stream a cropped video view of the detected bystanders. Our aim with this work was to demonstrate the concepts of Vice VRsa rather than to build a production-ready implementation. Using a 360-degree camera offers two benefits over a VR headset's built-in passthrough camera. First, it allows users to see the full surrounding area, unlike the front-facing recording of the passthrough camera. Second, the built-in passthrough feature does not provide API access to the raw camera feed; we therefore used a 360-degree camera to do image processing on the camera feed. The 360-degree camera and the headset's position are physically aligned through a custom 3D-printed holder, so the camera feed appears as if recorded directly through the head-mounted display's (HMD) cameras. For this view, the system not only tracks a bystander's location but also records the live camera image.
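+
+ The cropping of the detected bystander out of the equirectangular frame could look like the following sketch; the padding and clamping behavior are our choices rather than the authors' exact implementation.
+
+ ```python
+ # Cut a padded window around a detected bystander out of the 360 frame.
+ import numpy as np
+
+ def crop_live_view(frame: np.ndarray, bbox, pad: int = 40) -> np.ndarray:
+     h, w = frame.shape[:2]
+     x1, y1, x2, y2 = map(int, bbox)
+     x1, y1 = max(x1 - pad, 0), max(y1 - pad, 0)
+     x2, y2 = min(x2 + pad, w), min(y2 + pad, h)
+     return frame[y1:y2, x1:x2]
+ ```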
100
+
101
+ § 3.2.2 AWARENESS CUES OUTSIDE VR
102
+
103
+ Outside VR, the visual cues indicate to the bystander what information the VR system is recording about them. The representations comprise (a) a color mode indicator showing the VR user's desired level of privacy; and (b) an accompanying public display (Figure 3, right column) providing details on the user's activity and what is being recorded about their surroundings. Similar to the inside-VR representations, the degree of information about bystander presence varies in both the color indicator and the public display.
104
+
105
+ Color Indicator. A VR user's cameras could record bystanders even over a distance, as long as there is a line of sight. The color mode indicators aim to provide awareness of the VR user's selected privacy level (and thus the recording) at a distance within line of sight. A projection on the floor directly indicates the user's selected privacy mode and is in the direct proximity of the VR user it concerns (Figure 4d). An LED-enabled vest (Figure 4e) aids this by making the mode visible at even greater distances. The floor projection and LED vest are used interchangeably and could also be replaced or complemented by additional indicator lights directly on the HMD.
106
+
107
+ Public Display. The public display is set up near the VR user's "play" area and visible to passersby (Figure 4a). The public display is split into two parts: the top half shows details on the activity the VR user is doing; and the bottom half shows what the system is recording about the user's surroundings (Figure 3 right column). The contents of both parts are controlled by the VR users' desired privacy level, as described next.
108
+
109
110
+
111
+ Figure 4: Vice VRsa setup (a) consists of four components: (b) Meta Quest 2 with Ricoh Theta S mounted on top; (c) public display; (d) projection on the floor; and (e) a vest attached with a series of LED strips.
112
+
113
+ § 3.3 COLOR MODES INDICATING DESIRED LEVELS OF PRIVACY
114
+
115
+ The VR user can set their desired level of privacy by choosing between no (green), low (yellow), medium (orange), and high (red) privacy as shown in Figure 3. Each mode results in a combination of the previously described awareness cues inside VR, and corresponding visuals outside VR (showing a VR user's activity and what is being recorded). In this section, we describe the representations and interactions in and outside VR for each of the four modes. Note: The modes defined here represent example settings, but we anticipate different users could configure the behavior of their own privacy settings to best match their working contexts, such as the work they conduct, the physical space they are in, the people they share the space with, and their subjective privacy perception.
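+
+ One way to read the four modes is as a configuration table mapping each mode to its cues; the field names below are our own shorthand, but the cue combinations follow the descriptions in this section:
+
+ ```python
+ # Illustrative summary of the example privacy modes.
+ PRIVACY_MODES = {
+     "green":  {"inside_vr": [],                        "tracked": None,
+                "display": "full VR content stream"},
+     "yellow": {"inside_vr": ["halo"],                  "tracked": "rough direction and distance",
+                "display": "activity details + tracked info"},
+     "orange": {"inside_vr": ["halo", "radar"],         "tracked": "precise location",
+                "display": "activity type + radar duplicate"},
+     "red":    {"inside_vr": ["halo", "radar", "live"], "tracked": "location + live video",
+                "display": "privacy warning + live camera view"},
+ }
+ ```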
116
+
117
+ § 3.3.1 GREEN MODE: NO PRIVACY
118
+
119
+ The green mode is designed for the context where a VR user does not need any privacy, and wants to minimize distractions from any awareness cues, for example, when playing a game.
120
+
121
+ Inside VR. In the green mode, no awareness cues about bystanders are provided inside VR (Figure 3, left column, "Green" mode). The system does not track any information about bystanders.
122
+
123
+ Outside VR. Because no privacy is desired, the VR user's full VR content is streamed to the public display, allowing bystanders to see what the VR user sees (Figure 3 right column). The color indicators (projector and LED vest illuminate green) signal the user's no-privacy mode at a distance.
124
+
125
+ § 3.3.2 YELLOW MODE: LOW LEVEL OF PRIVACY
126
+
127
+ The yellow mode is for situations where the VR user needs a low level of privacy, yet wants to maintain a general awareness of bystanders' presence and proximity.
128
+
129
+ Inside VR. The Halo appears on the side(s) of the screen corresponding to the bystander's location in relation to the user. Its size implies the bystander's distance: the larger the circle, the closer the bystander.
130
+
131
+ Outside VR. The color indicators change to yellow, which signals to bystanders that minimal information about their location is being monitored by the VR user. The public display (Figure 3 right column) does not show the full VR content anymore but instead shows descriptive details on the VR user's activity, for example, the name of the application they are using, or if they are in a meeting, the meeting invite's title and duration. The bottom part of the display shows what kind of information the system records about the environment: the bystander's rough direction and distance.
132
+
133
+ § 3.3.3 ORANGE MODE: MEDIUM LEVEL OF PRIVACY
134
+
135
+ The orange mode is intended for situations where the VR user is in need of a medium level of privacy. For example, they do not mind if bystanders know that they are in a meeting, but do not want any details known about it.
136
+
137
+ Inside VR. In the orange mode, the VR user receives more details about bystanders' location in the form of the radar view. This provides them with more precision on bystanders' locations and distances. In addition, the Halo is also shown.
138
+
139
+ Outside VR. The color indicators change to orange, which informs bystanders that more information about their presence is being observed. The top half of the public display shows a general notice about the type of activity the VR user is doing, for example, that they are in a meeting or playing a game, without revealing which one. The bottom half shows a notice that the bystanders' location is being recorded, as well as a duplicate of the radar view, similar to inside VR.
140
+
141
+ § 3.3.4 RED MODE: HIGH LEVEL OF PRIVACY
142
+
143
+ The red mode represents the highest level of privacy, where the VR user wants to be aware of bystanders and does not want any information about the VR tasks to be revealed.
144
+
145
+ Inside VR. As the information in the VR tasks is sensitive, the VR user needs to check who the bystanders in proximity are, to make an informed decision on whether it is safe to continue their activity or whether they should be mindful of their conversations and actions. Therefore, in this mode, the previously described live view is shown, which provides a window into the real world. Since this comes in addition to the Halo and radar view, the VR user is first made aware of a bystander's presence and location through these cues. Once aware, the user can then turn their head in the direction the Halo/Radar indicate to look into the real world using the live view.
146
+
147
+ Outside VR. The color indicators change to red, and the public display shows a warning sign on the top part with a request for privacy. The screen's bottom half shows the live view of the 360-degree camera, where bystanders can see themselves (Figure 3 right column). This alerts bystanders that the VR user needs a high level of privacy and that the system is actively recording them.
148
+
149
+ § 3.4 VICE VRSA IMPLEMENTATION
150
+
151
+ The Vice VRsa prototype consists of a VR headset with an attached 360-degree camera for tracking bystanders, as well as a privacy notice (in our implementation, an external display) and privacy awareness indicators (here, an LED vest and an overhead projector).
152
+
153
+ § 3.4.1 INSIDE VR REPRESENTATION
154
+
155
+ For the VR user, there are two hardware components: the VR headset and an external 360-degree camera which is used to recognize and track bystander presence. We use a Meta Quest 2 [28] as the VR headset and a Ricoh Theta S [38] as the 360-degree camera. Current VR headsets allow users to activate a passthrough and see a live view of their surroundings. However, the passthrough view is only either entirely on or off (as discussed in section 2.1), and the APIs do not allow third-party developers to access the passthrough image. We therefore mounted the 360-degree camera on top of the VR headset to synchronize the head and camera orientations (See Figure 4b). This allowed us to use image processing on a real-world view for bystander detection. The camera and VR headset are tethered to a PC via USB. To detect bystanders outside the VR environment, the PC receives the live video feed from the camera and processes it with the computer vision algorithm Yolo [36] running in Processing.
156
+
157
+ Depending on the VR user's privacy setting, the representations of bystanders change inside VR. We created the representations in a virtual environment using Unity. Bystanders' location data is sent from Processing to Unity via JSON messages, and the 360-degree video is streamed directly from the camera feed. The location data of detected bystanders consists of two types of information: (1) the angle difference from the user's current orientation; and (2) the distance. For our proof-of-concept prototype, we estimate the distance from the $y$-axis coordinate of the bottom pixel of the detected human body's bounding box, assuming that the further away a bystander is, the higher in the image the bounding box's bottom edge appears. In the Unity application, this bystander data is used to display the system's awareness cues.
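+
+ The sketch below illustrates how such a per-bystander message could be assembled from a YOLO-style bounding box; the field names and the normalization of the distance proxy are assumptions based on the description above:
+
+ ```python
+ import json
+
+ def bystander_message(bbox, frame_w: int, frame_h: int,
+                       user_heading_deg: float) -> str:
+     x, y, w, h = bbox  # detection box in pixels, y grows downward
+     # Horizontal pixel position -> absolute angle on the 360 panorama,
+     # then the signed difference from the user's current orientation.
+     absolute_deg = (x + w / 2) / frame_w * 360.0
+     angle_diff = (absolute_deg - user_heading_deg + 180.0) % 360.0 - 180.0
+     # Distance proxy: the further away a bystander is, the higher in
+     # the frame (smaller y) the box's bottom edge sits.
+     distance_proxy = 1.0 - (y + h) / frame_h
+     return json.dumps({"angle": angle_diff, "distance": distance_proxy})
+ ```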
158
+
159
+ § 3.4.2 SETUP FOR OUTSIDE VR REPRESENTATION
160
+
161
+ The setup for the outside-VR representation consists of two components: a public display and color mode indicators (comprising a vest with LEDs and a projector). For the color indicators outside VR, we use a projector mounted on a tripod and connected to the PC via HDMI. The LED vest has an RGB LED strip woven into the garment, driven by an Arduino microcontroller that is connected to the PC and controlled via serial communication.
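+
+ As a sketch of the PC-side serial control (using pyserial), a single byte per mode would suffice; the port name and the one-byte protocol are illustrative assumptions:
+
+ ```python
+ import serial  # pyserial
+
+ MODE_BYTES = {"green": b"G", "yellow": b"Y", "orange": b"O", "red": b"R"}
+
+ def set_vest_mode(port: serial.Serial, mode: str) -> None:
+     """Send the current privacy mode to the Arduino, which maps
+     it to an RGB colour on the LED strip."""
+     port.write(MODE_BYTES[mode])
+
+ # Usage (the port name is machine-specific):
+ # arduino = serial.Serial("/dev/ttyUSB0", 9600)
+ # set_vest_mode(arduino, "red")
+ ```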
162
+
163
+ § 4 SCENARIO
164
+
165
+ To illustrate how Vice VRsa can support bidirectional awareness between VR users and bystanders, we describe the following scenario, in which Victoria (a VR user) and Bob (a bystander) work for different companies while being physically co-located in a co-working space.
166
+
167
+ § 4.1 VICTORIA'S PERSPECTIVE: VR USER
168
+
169
+ Green Mode Victoria wears a VR headset and plays a game during a work break. While gaming, she does not care if someone is watching her. In fact, she wants to encourage others to join her in playing the game. Thus, she sets Vice VRsa's privacy mode to the green mode, and no representation of any bystander is displayed inside her VR space (Figure 5a).
170
+
171
+ Yellow Mode After finishing the game, she joins her team's weekly social meeting through VR and starts a casual conversation. This meeting is casual and nothing sensitive is discussed. While engaging in the conversations, she wants to know if someone is around in the physical space where she is located. She is worried that talking or laughing loudly might disturb people around her. She therefore adjusts the privacy mode to the yellow mode which allows her to be informed about bystanders' presence. A large Halo appears on the left-hand side, telling her that there is someone close by. Thus, she decides to keep her voice down (Figure 5b).
172
+
173
+ Orange Mode After the team social meeting, she joins her team meeting where several important items are discussed. She not only wants to be mindful of disturbing others but also to have a thorough understanding of her surroundings (e.g., how many people are nearby and where they are). She changes her privacy mode to the orange mode, which activates the radar view. She can see there is one bystander close by on her left. While she feels okay that there is someone nearby, she chooses her words carefully to not disclose too much information about the discussion items (Figure 5c).
174
+
175
+ Red Mode After her team meeting, she needs to have a confidential meeting with two team members about a new car design they have been working on. Due to the sensitive nature of the information, she wants to make sure no unauthorized person gains any knowledge about the confidential information. She therefore switches to the red mode, which not only provides her with awareness of bystanders' presence and distance through the Halo and Radar, but also allows her to see outside VR through the live view whenever a bystander is present. After a Halo appears on her left, she turns her head and sees a stranger nearby. She politely requests privacy, which the bystander adheres to. Victoria can now be certain that no one is around and continue her confidential design review (See Figure 5d).
176
+
177
+ § 4.2 BOB'S PERSPECTIVE: BYSTANDER
178
+
179
+ Green Mode When Bob is walking around the office, he finds that Victoria is wearing a VR headset and vigorously throwing her arms around. He sees her engulfed in green light. He understands this as an invitation to come closer. He stops close by and watches Victoria play a game in VR, which is being relayed on the nearby public display (Figure 5e). He hopes to join her for a round of games.
180
+
181
+ Yellow Mode After a few moments, Bob notices that Victoria has sat down to have a conversation. He sees the lights around her change to yellow and the public display show that Victoria is in her weekly team social meeting, which he believes is not a sensitive meeting. He also sees on the display that his rough location is detected as 'close left'. While he is now aware of the change and of the system recording information, he does not feel that he needs to leave (See Figure 5f).
182
+
183
+ Orange Mode A few minutes later, Bob notices that the colored lights have changed to orange. He feels more cautious and hears Victoria talking about work topics. Bob notices that the public display shows his precise location on a radar view in relation to Victoria's position, and he sees that she is no longer in the team social meeting but cannot see any meeting details. He feels that he should not stand so close and overhear her conversation. Thus, he takes a few steps back from her and continues working on his phone (Figure 5g).
184
+
185
+ Red Mode Suddenly, Bob notices that the lights around Victoria turn red. Right after that, Victoria turns her head towards him. She speaks to him and asks him for privacy. Additionally, Bob finds that the public display shows a warning sign and message underneath that says 'Privacy Please!'. Therefore, Bob understands that she needs complete privacy and walks away immediately, giving Victoria the requested space (Figure 5h).
186
+
187
+ § 5 INITIAL SUBJECTIVE FEEDBACK
188
+
189
+ The focus of this work is to evaluate a newly introduced system concept rather than to assess the system's usability. We therefore wanted to gain initial insights and subjective feedback on Vice VRsa's utility and usefulness through interviews with expert VR users. The goal was to learn how Vice VRsa could increase awareness about bystanders, whether sharing what is being recorded about the environment could be useful for bystanders, and how Vice VRsa's components would be understood. The study underwent our institution's internal ethics review process. We recruited expert VR users via email from within our institution to participate in a guided walk-through of the functionality of Vice VRsa as demonstrated through video scenarios. Sessions lasted one hour and were conducted via Zoom. Participants were compensated with an equivalent of 75 USD. During the study, we asked participants about prior experience conducting professional work in VR and how they assert their privacy needs in such situations. We showed them videos of the bystander's as well as the VR user's perspective of each of the four privacy modes (green, yellow, orange, red), stopping at several points throughout the videos to ask questions.
190
+
191
192
+
193
+ Figure 5: We illustrate a scenario to depict how Vice VRsa’s interactions can be used based on the level of privacy mode in various contexts of a VR user (a, b, c, d) and a bystander (e, f, g, h).
194
+
195
+ We recruited seven participants (1 female, 6 male; 29-53 years old, M = 38.5, SD = 6.97) with an average of 3.7 years of experience of professionally working with VR (1-5 years, SD = 1.39). All participants except one were currently working weekly or daily in VR, and their roles ranged from XR product, UX, and instructional content designers, to XR customer success managers and XR researchers. Their professional experience with VR ranged from developing concept designs, training people or demonstrating VR applications, to using VR systems for architectural design tasks.
196
+
197
+ § 5.1 BYSTANDER'S PERSPECTIVE
198
+
199
+ From the bystander's perspective, all participants agreed that the awareness of the VR user's desired level of privacy was useful for regulating their behavior (i.e., by seeing the color indicators from a distance and the details on the public display close up). However, while all participants stated that green and red would be universally understood, they also stated that the color coding would need to be learned before being fully understood. All participants agreed that getting information about the VR user's activity was helpful, primarily to know if they could be interrupted and, in the case of the green mode, to find opportune moments to do so. P7 compared this to regular desktop usage in a shared office, where "you can see what everyone is doing and if they're on a call" (P7). Interestingly, six participants also stated that the more privacy-sensitive a task is, the less likely they would be to interrupt the VR user, which is also reflected in their behavior toward interrupting VR users. While all agreed that they would regulate their behavior depending on the VR user's privacy settings, 4/7 also stated that the system would entice them to watch the VR user. This points to an interesting dilemma, where the addition of the color indication and public display could create a honeypot effect, opposing the purpose of the system.
200
+
201
+ § 5.1.1 AWARENESS ABOUT UNWANTED RECORDINGS
202
+
203
+ Only 3/7 participants stated that it was useful as bystanders to know what was being recorded about them. In fact, most would not mind having their distance or location tracked in general, but as soon as their video feed appeared inside VR, they would want to know about it and would find the system useful. As one participant put it: "[As a bystander] I don't care about position, but if [the VR users] are watching me, they are intruding my own privacy. Especially when recording my face, this would be going a little too far."
204
+
205
+ In sum, as bystanders, participants appreciated the information on the public display for awareness of the VR user's activity, to know when to regulate their presence and to find opportune moments for interruptions. While they noted that the visuals on the public display would pose a learning curve, they stated that several aspects of the system could be easily understood, especially the most restrictive (privacy warning sign) and most permissive (live stream of VR content) modes.
206
+
207
+ § 5.2 VR USER'S PERSPECTIVE
208
+
209
+ From the VR user perspective, 6/7 participants would find Vice VRsa useful for awareness of their surroundings and 5/7 could imagine using the system. Several participants found Vice VRsa unnecessary at this stage as they mostly work from home and would hear others enter their office. However, all agreed that when working in a shared office or public space, bringing situational awareness into VR and sharing one's privacy needs with bystanders would be useful.
210
+
211
+ § 5.2.1 AWARENESS ABOUT UNWITTING MONITORING
212
+
213
+ While most participants stated that their usage of VR in a public/shared space is usually for non-confidential tasks (e.g., trade shows, or playing games), they said that for any privacy-sensitive tasks, they would move out of the public space (e.g., by going into a closed meeting room). Most agreed that they would continue doing so for highly sensitive tasks (such as performance conversations (P4), client meetings (P5), or confidential design reviews (P2, P1)). On the contrary, P3 stated that even while working on sensitive prototypes, they would not always move to a separate space, as "[bystanders] can't see what I'm seeing and cannot really make out what I'm talking about [from the] fragments they catch". However, several participants (P4, P7) also stated they are self-conscious about their actions when using VR and would therefore find the awareness about bystanders useful to "[not] look silly" (P7).
214
+
215
+ Using Vice VRsa's red mode to identify who is around was highlighted as most useful. This allows VR users to decide if they need to break out of VR to negotiate their privacy needs or if they can ignore the bystander. P3 even pointed out that it would be good if the system could automatically identify whether a bystander is trusted and, if so, not show any awareness cues.
216
+
217
+ § 5.2.2 DISTRACTION / BREAKING IMMERSION
218
+
219
+ Four participants mentioned that they would find the visuals distracting, especially the green halo, as it grows directly in their peripheral vision. The radar view, however, was seen as less distracting as it "just sits there in the corner and I can ignore it [... and] the red dot doesn't suddenly grow into my face" (P6). While participants agreed that setting a VR user's privacy mode would be useful for communicating with bystanders, several participants were also unsure whether the yellow and orange modes were needed. They found the green mode useful to communicate their actions to bystanders for the purpose of sharing the experience (e.g., gaming or teaching) and creating an awareness of what they were doing so that bystanders could adjust their behavior (e.g., knowing whether the VR user is interruptible and, if so, identifying an opportune moment to do so). The red mode was seen as useful to clearly express the need for privacy.
220
+
221
+ § 6 DISCUSSION
222
+
223
+ We explored the design space of balancing VR user and bystander privacy and awareness through the implementation and evaluation of Vice VRsa, which includes the development of several novel interactions to improve mutual awareness.
224
+
225
+ § 6.1 NECESSITY FOR PRIVACY
226
+
227
+ One interesting finding from the expert feedback concerned the necessity of the system from the VR users' perspective. Several participants mentioned that if they were truly doing something private, they are either already in a private space (e.g., a home) or would move from a public space to a closed meeting room. However, it is not always possible to move to a meeting room due to room availability or the portability of necessary hardware (motion trackers, desktop computers, etc.), especially for higher-end VR setups. Additionally, many of the participants have largely been working from home in recent years, which may affect their sensitivity about privacy.
228
+
229
+ Similarly, not all of the experts interviewed considered bystander awareness about the tracking completely necessary, as they had no problem with the VR user knowing their presence. However, when they became aware that the VR headset cameras could record them without their knowledge, their perspective shifted to a more privacy-minded approach. As with AR devices [47], the use of these devices is expected to become more commonplace. As the sensors gain fidelity, the real-world privacy risks associated with their use will increase. If not carefully designed, VR hardware could be used to record individuals without their knowledge and erode trust between VR users and bystanders.
230
+
231
+ § 6.2 AWARENESS VS. INTRUSION
232
+
233
+ In our work, while the goal was to examine whether the concepts were clear and whether participants could understand the prototype's functionalities, several participants commented on the representations' implementation. For example, the halo was too large and distracting, whereas the radar was seen as less intrusive. We believe that further refinements could be made to these techniques in practice. For instance, shrinking the size of the halo or increasing its transparency could reduce the interruption to the main task.
234
+
235
+ Beyond visual feedback, other modalities could be considered as a way to be less intrusive both for the VR user and the bystander in the environment. For the user in VR, subtle haptic cues may be used to indicate the direction and presence of a person, or air flow could provide the simulated sensation of a person walking by [39]. Spatial audio could perform a similar function, supplying information about bystanders through a potentially unused modality [33]. For the bystander, directional speakers may alert them as they walk into the area where they may be sensed, causing them to look around for further cues about what might be recording them.
236
+
237
+ § 6.3 A PRIVACY ARMS RACE
238
+
239
+ During our feedback sessions, several expert VR users pointed out that the system creates additional overhead by relying on the user to manually set the privacy mode. While, in this paper, we aimed to explore the design space of Vice VRsa, future work should explore how a privacy mode can be inferred from the user's activity. For example, a system could know that meetings are generally private. A system could go even further, identify who a bystander is, and automatically determine if they are privy to the meeting's content. However, this creates an even bigger dilemma, where the system not only records video footage of the bystander but also connects their identity and context from additional databases. This presents another embodiment of the ever-present arms race between defenders of privacy and those who exploit information.
240
+
241
+ § 6.4 SOCIAL PROTOCOLS STILL MEDIATE INTERACTIONS
242
+
243
+ The interfaces explored with Vice VRsa still require humans to follow established social protocols. The awareness indicators for both bystanders and VR users are designed to replace or augment some of the natural perceptual and social cues that are lost when users enter VR, but they are not intended to prevent bad actors on either side. We have found that Vice VRsa can add value to VR/bystander interactions, and can be a tool to help support interpersonal communication in the face of a technological barrier.
244
+
245
+ § 6.5 ECOLOGICAL VALIDITY
246
+
247
+ The focus of our initial study was to run a preliminary evaluation of a newly introduced concept rather than to evaluate the system's usability. To that end, our study goal was to demonstrate the system's workflow and illustrate the system as a 'demonstration', as Ledo et al. suggest [22], and to gather feedback on the interaction concepts' potential usefulness. Accordingly, we conducted our evaluation as an interview study, showing videos that describe Vice VRsa's interactions. While participants could not directly interact with Vice VRsa, we found that the concepts of Vice VRsa were understood and that participants felt Vice VRsa could help improve the balance between bystanders' and VR users' privacy.
248
+
249
+ Furthermore, we chose to gather feedback from VR experts to understand whether and how any newly introduced privacy-related interactions could potentially jeopardize the overall usage of VR. Since privacy is a secondary concern for regular end-users [8], complicated interactions may discourage adoption [52]. In the future, it will be interesting to collaborate with privacy experts to assess and refine the interaction design, and to deploy Vice VRsa in the wild in a longitudinal study to examine its long-term effects. This would help close the gap between in-lab and field studies [24]. For example, prior work suggests that habituation over time could affect an end-user's security and privacy behaviors [50].
250
+
251
+ § 7 CONCLUSION
252
+
253
+ The immersive nature of VR leaves users vulnerable to surveillance by bystanders or threats to physical safety. To alleviate that, users can use external cameras to view their surroundings. However, this might infringe on bystanders' privacy. In this work, we developed Vice VRsa, a framework and set of interactions to mediate the privacy of VR users and their bystanders. Vice VRsa transparently communicates presence, recording, and desired privacy through techniques inside VR (Halo, Radar, Live View) as well as to the bystander (Color Indicator and Public Display). In our preliminary evaluation with VR experts, we found that Vice VRsa could help address privacy concerns for both VR users and bystanders. We see Vice VRsa as an initial step towards addressing the emerging problem regarding mutual awareness between VR users and bystanders about their privacy in VR hybrid settings.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/hZlwUFmka-U/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,335 @@
1
+ # Generating Packed Rectilinear Display Text Layouts with Weighted Word Emphasis
2
+
3
+ Category: Graphics
4
+
5
+ ## Abstract
6
+
7
+ A common text layout style is a "packed rectilinear layout," in which non-overlapping word bounding boxes are packed so that their union forms a rectangle with no holes other than word and line spacing. Designing variations of these layouts while preserving word emphasis is a difficult and time-consuming process. We present a display text layout algorithm in which designers specify parameters that control the visual emphasis of words in these layouts. The number of possible layouts for a phrase follows the sequence of Big Schröder numbers as our algorithm involves the recursive subdivision of a rectangular bounding box. We conducted interviews with designers to understand their preferences and reasoning. They rated the best-fitting layouts generated by our system to be very similar to designs that they would have created themselves.
8
+
9
+ Index Terms: Human-centered computing-Human computer interaction (HCI)-Interactive systems and tools; Computer graphics-Graphics systems and interfaces
10
+
11
+ ## 1 INTRODUCTION
12
+
13
+ Display text layouts are stylized typographical arrangements consisting of short phrases, used for applications like headlines, advertisements, and logos. They require skill to design because they combine both typography and graphic design. This is in contrast with body text, which is relatively simple and uniform to lay out. Designers often need to emphasize certain words in a layout to convey the intended meaning of the phrase. However, the shapes and sizes of words have a direct effect on the layout, and small changes to the text can have cascading effects on the overall layout, changing the emphasis. For example, Figure 1 is a layout generated using an Adobe Magic Text ${}^{1}$ template in which a small change to the text changes the emphasis of the layout from "healthy" to "how to." Designing aesthetically pleasing layouts that emphasize certain words is a common but time-consuming process because of the many possible layout variations for any given phrase.
14
+
15
+ The relative emphasis of words is a key factor in readability and semantics of the original phrase. Designers often wish to emphasize certain words in a layout, but they are also constrained by the shape of the layout, reading order, or the locations of less salient words. It is difficult to strike a balance between readability, semantics, and aesthetics. Our goal is to support designers in this task by generating variations of display text layouts that satisfy these constraints.
16
+
17
+ An automated and assisted display text system should ideally allow a user to specify parameters to control the visual emphasis of words in a layout without sacrificing its aesthetic quality. Existing techniques for automatically generating 2D layouts from a given set of visual elements are mostly focused on different use cases, like magazines [9], photo collages [7], and other single-page graphic designs [14], which are less rigid in the relative placement of text elements than display text layouts.
18
+
19
+ In this work, we focus on packed rectilinear layouts, such as the example in Figure 1. These consist of words with non-overlapping bounding boxes packed so that the union of all bounding boxes forms a rectangle with no holes other than word and line spacing. Our algorithm generates all possible packed rectilinear layouts for a phrase and prioritizes the layout variations based on their adherence to the desired relative emphasis of words. We also present the results from a series of semi-structured interviews with graphic design experts that aimed to build our understanding of design decisions in creative typesetting.
20
+
21
+ ![01963e0e-1772-73d7-9cc9-5a3e12f7d700_0_966_413_637_331_0.jpg](images/01963e0e-1772-73d7-9cc9-5a3e12f7d700_0_966_413_637_331_0.jpg)
22
+
23
+ Figure 1: Example of unintended emphasis in display text with the template approach in Adobe Magic Text: (a) the template layout and colour emphasize semantically important words; (b) changing the word 'HEALTHY' to 'DECENT' significantly alters the visual emphasis of words, reducing the readability and saliency of the design. Note that the new word is only one character shorter and the colours of all words are unchanged; the difference is entirely due to the layout.
24
+
25
+ ## 2 BACKGROUND
26
+
27
+ The automatic layout of visual elements has been an area of extensive research, but typography imposes unique constraints on the visual layout process. Words need to be presented in an order consistent with the reading direction of the chosen script, and word emphasis depends on many visual factors.
28
+
29
+ Automatic Layout Techniques Existing work on graphic design can inform our development of a display text layout technique. Magazine covers share many similarities with display text layouts, including the need for emphasis on certain elements and constraints on design proportions. A magazine cover layout typically consists of a large background image with blocks of text around the edges of the page. Existing machine-learning approaches for magazines take the salience of the background image into account, but they do not focus on the relative positioning among elements [9, 18].
30
+
31
+ Grid-based layout algorithms divide a canvas into different areas [6, 8]. This approach works well for the layout of documents where margins between elements and irregular packing are permissible, but it cannot be easily extended to display text layouts where the relative placement of elements is constrained by reading order.
32
+
33
+ Layout approaches that involve the relative placement of elements can be applied to display text layouts if they offer controls for the overall shape of the layout. Kraus [11] proposes tree-based automatic text layout. His method describes the relationships between words through alignment operators at internal nodes, where each operator describes the relative alignment of the node's children. These trees can be traversed to create layouts using the relative positioning between nodes. Blocked Recursive Image Composition (BRIC) [3] is another visual layout technique that automates the creation of design variations. This technique arranges a set of visual elements relative to one another spatially with constraints driven by recursive decomposition of the elements. BRIC respects element aspect ratios and includes precise spacing between elements unless adjustments are necessary to preserve the aspect ratio. Elements are represented in a binary tree where each internal node describes the alignment of its children.
34
+
35
+ ---
36
+
37
+ ${}^{1}$ https://express.adobe.com/
38
+
39
+ ---
40
+
41
+ ![01963e0e-1772-73d7-9cc9-5a3e12f7d700_1_161_155_667_255_0.jpg](images/01963e0e-1772-73d7-9cc9-5a3e12f7d700_1_161_155_667_255_0.jpg)
42
+
43
+ Figure 2: A layout with its corresponding tree structure. Here, H represents a horizontal alignment and V represents a vertical alignment between subtrees.
44
+
45
+ Graphic Design Principles The composition of text can be approached with layout principles that are widely used in graphic design. Bauerly et al. [4] presented two experiments that explored the effect of symmetry, balance, and quantity of construction elements on interface aesthetic judgments. In our work, we extend these principles and formalize the templates presented by Bauerly et al. in order to automatically generate visually pleasing layouts.
46
+
47
+ O'Donovan et al. [14] proposed an energy-based approach derived from design principles to analyze, create, and evaluate the design quality of layouts. In the evaluation stage, the importance of each element, labels specifying element alignment, and a grid-based segmentation are derived for an input layout. These are used as inputs to an energy function. The energy function also considers the visual salience of the image on the location of the text. Although their system produced visually pleasing results, the technique is very time-intensive and not interactive.
48
+
49
+ DesignScape [13] is another tool that provides layout suggestions for designers by varying attributes such as alignment and scale for design elements. Their tool provides layout options that can be selected as well as an adaptive interface that adjusts elements automatically with any change in the layout from the user.
50
+
51
+ Text Attributes Legibility at-a-glance is a crucial feature of successful display text layouts. Sawyer et al. [16] explored which attributes make layouts legible upon a quick glance and compared these attributes across eight popular sans serif fonts.
52
+
53
+ "Personality" is a concept used by designers to determine the font selection for different designs, but it is not a well-defined term. Researchers have tried to find empirical measures that are associated with certain moods using a subset of letters to determine font personality [12] and through crowd-sourced opinions on font connotations [17].
54
+
55
+ While past work has presented techniques for flexible layouts of visual elements in general, display text layouts have specific constraints, such as reading order and aspect ratio, and we focus this work on them. We present a technique for generating all possible packed rectilinear layouts for a text phrase and rank the layouts based on designer preferences and design principles. The design of our tool was guided by a series of interviews with expert designers.
56
+
57
+ ## 3 TECHNIQUE
58
+
59
+ We present an algorithm for generating and ranking all packed rectilinear layouts that are possible for a given phrase. The generated layouts adhere to these characteristics:
60
+
61
+ - each word must be to the right of or below the previous word in the phrase;
62
+
63
+ - the convex hull of all the words in the layout must closely approximate a rectangle;
64
+
65
+ - the layout must be filled with words, word spacing, or leading (the vertical space between lines).
66
+
67
+ Let $\overrightarrow{w} = (w_1, \ldots, w_n)$ be a sequence of $n$ words representing the phrase to be laid out, and let $\overrightarrow{e} = (e_1, \ldots, e_n)$ be a vector representing the designer's intended emphasis goal for each word, which we also refer to as the emphasis schema. For example, $\overrightarrow{e} = (4, 1, 1, 3)$ means the first word should be emphasized most, followed by the fourth word, with the remaining two words equally least emphasized. The numeric value of a characteristic, such as the height or width of each word, can be represented using a characteristic vector $\overrightarrow{c} = (c_1, \ldots, c_n)$. These characteristics can be any parameterized attribute that contributes to word emphasis. Our goal is to compute an emphasis adherence score $E$ for every possible packed rectilinear layout for a given phrase.
68
+
69
+ We chose exhaustive generation of layouts because it allows us to find the optimal layouts that fit an emphasis schema and gives designers the maximum number of possible layouts to use as templates. This also provides a wider variety of layouts for us to use as examples when answering questions about aesthetics and design. Layouts that do not match the emphasis schema closely might also be valuable to the designer as a starting point for designs, so it is useful to present all variations as possibilities.
70
+
71
+ #### 3.0.1 Big Schröder Numbers
72
+
73
+ The constraints imposed by word aspect ratios and reading order allow us to predict exactly how many variations of each phrase are possible. The number of possible layouts for a phrase consisting of $n$ words follows the sequence of Big Schröder numbers [2]. The first ten terms of the Big Schröder number sequence are 1, 2, 6, 22, 90, 394, 1806, 8558, 41586, 206098, a sequence that grows exponentially. Big Schröder numbers describe the number of ways a rectangle can be divided into $n + 1$ rectangles using $n$ distinct guillotine cuts, which mirrors how packed rectilinear layouts are essentially subdivisions of a rectangular layout outline [15].
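+
+ For reference, the sequence can be computed with the standard convolution recurrence for large Schröder numbers (the recurrence is standard combinatorics, not part of our algorithm); a phrase of $n$ words admits the $(n-1)$-th term's worth of layouts:
+
+ ```python
+ def schroeder(n_terms: int) -> list:
+     """First n_terms large (Big) Schroeder numbers."""
+     s = [1, 2]
+     for n in range(2, n_terms):
+         s.append(3 * s[n - 1]
+                  + sum(s[k] * s[n - 1 - k] for k in range(1, n - 1)))
+     return s[:n_terms]
+
+ print(schroeder(10))
+ # [1, 2, 6, 22, 90, 394, 1806, 8558, 41586, 206098]
+ ```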
74
+
75
+ ### 3.1 Layout Variation Generation
76
+
77
+ Our technique for generating all possible packed rectilinear layouts of a phrase uses a tree structure similar to the image layout algorithm BRIC [3]. All layouts can be constructed by alternating between vertical and horizontal alignments of two sets of words. The key difference in our algorithm is the presence of additional geometric and layout constraints that are inherent in typographic layouts. Typographic layouts need to be designed with constraints on reading order, and there is less flexibility in aspect ratio for each of the elements.
78
+
79
+ #### 3.1.1 Tree Construction
80
+
81
+ Each layout variation can be characterized by a tree where each leaf node represents a word and each internal node represents a vertical or horizontal alignment between its subtrees. Nodes that share a common parent have the same height in the case of a horizontal alignment or width in the case of a vertical alignment. Figure 2 shows an example where the word TYPE is placed in a horizontal configuration with a subtree containing a vertical arrangement of the rest of the words in the phrase.
82
+
83
+ For each subdivision of the phrase into two non-empty subsequences, we recursively compute all layouts for each of the two subsequences. We then generate layout variations of the whole phrase by placing a layout from each of the two subsequence sets horizontally and vertically adjacent to one another. When placing them horizontally, we scale each recursive layout uniformly to have the same height, and when placing them vertically, we scale each to have the same width. If the first subsequence has $i$ layouts and the second $j$ layouts, this generates $2ij$ combinations.
84
+
85
+ ![01963e0e-1772-73d7-9cc9-5a3e12f7d700_2_165_175_1471_1327_0.jpg](images/01963e0e-1772-73d7-9cc9-5a3e12f7d700_2_165_175_1471_1327_0.jpg)
86
+
87
+ Figure 3: Each row shows the top 5 layouts for different emphasis goal vectors (highest to lowest match, left to right).
88
+
89
+ The alignments alternate between all vertical and all horizontal at a given level because a tree where a parent and its children have the same alignment is equivalent to one where the children have been moved to be siblings of the parent.
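+
+ The effect of this rule can be checked with a small counting sketch that mirrors the generation procedure. Requiring the second subtree of an H node not to be H-rooted itself (and symmetrically for V) is our assumed way of realising the alternation; with it, the counts reproduce the Big Schröder numbers:
+
+ ```python
+ from functools import lru_cache
+
+ @lru_cache(maxsize=None)
+ def count(n: int, root: str) -> int:
+     """Layouts of n words whose root is 'H', 'V', or a 'leaf'."""
+     if n == 1:
+         return 1 if root == "leaf" else 0
+     if root == "leaf":
+         return 0
+     other = "V" if root == "H" else "H"
+     total = 0
+     for i in range(1, n):  # i words go into the first subtree
+         first = sum(count(i, r) for r in ("H", "V", "leaf"))
+         second = count(n - i, other) + count(n - i, "leaf")
+         total += first * second
+     return total
+
+ def layouts(n: int) -> int:
+     return sum(count(n, r) for r in ("H", "V", "leaf"))
+
+ print([layouts(n) for n in range(1, 7)])  # [1, 2, 6, 22, 90, 394]
+ ```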
90
+
91
+ #### 3.1.2 Word Order
92
+
93
+ In the tree construction process, we determine the order of the placement of the children using the order of the words with which they are associated: the recursive layout for the second subsequence of words must be to the right of the layout for the first subsequence, or below it. The resulting layout always has words that are later in the phrase placed to the right of, or under, preceding words. This follows the reading order convention for text in English, which is a Z-shaped reading order: left-to-right, then top-to-bottom.
94
+
95
+ #### 3.1.3 Spacing
96
+
97
+ Leading is the baseline-to-baseline vertical distance between lines of text. It is often specified as a fraction of the text size, which makes it difficult to determine leading when a display text layout uses multiple font sizes. We used equal distances for leading and horizontal space between words, with the exception of consecutive words that are the same height, which use the default horizontal spacing for the given font.
98
+
99
+ Optical margin alignment, or margin kerning, is the process of adjusting the horizontal spacing of a letter that overhangs the margin of a piece of text to create the appearance of being aligned flush with the edge [19]. In packed rectilinear display text layouts, this optical alignment is necessary for each word to achieve an optically aligned packing. We created a table of horizontal offsets, similar to an optical margin kerning table, to indicate the offsets required so that the edge of the word appears flush with the edge of the overall layout.
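+
+ A minimal sketch of such an offset lookup is shown below; the per-glyph offsets (as fractions of the font size) are placeholder values for illustration, not our measured table:
+
+ ```python
+ # Placeholder offsets for glyphs on the left edge of a layout.
+ LEFT_EDGE_OFFSET = {"O": -0.02, "C": -0.02, "A": -0.01, "T": 0.0}
+
+ def optical_x(word: str, x: float, font_size: float) -> float:
+     """Shift a word on the layout's left edge so that its first
+     glyph appears optically flush with the boundary."""
+     return x + LEFT_EDGE_OFFSET.get(word[0], 0.0) * font_size
+ ```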
100
+
101
+ ### 3.2 Layout Prioritization
102
+
103
+ After generating layouts, we prioritize them based on the Euclidean distance between $\overrightarrow{e}$ and layout attributes $\overrightarrow{c}$. $\overrightarrow{e}$ is a vector of $n$ numbers representing the relative emphasis of each word in the phrase. The numbers are positive and do not need to be unique. $\overrightarrow{c}$ represents the values of any parameterized attribute, or characteristic, of the words in the phrase. We focus on word height in the examples presented in this work, but other attributes such as font weight and colour could also be used.
104
+
105
+ Given an emphasis schema $\overrightarrow{e}$ and characteristic vector $\overrightarrow{c}$, both of size $n$, the Euclidean distance can be calculated:
106
+
107
+ $$
108
+ d(\overrightarrow{e}, \overrightarrow{c}) = \sqrt{\sum_{i=1}^{n} (c_i - e_i)^2}
109
+ $$
110
+
111
+ Note that $d(\overrightarrow{e}, \overrightarrow{c})$ can also be calculated using other distance metrics such as cosine similarity, but empirically we did not find that they made a noticeable difference in layout quality. The final emphasis score, $E$, can be calculated as a linear combination of the values of $d(\overrightarrow{e}, \overrightarrow{c})$ for all the characteristic vectors that the designer wishes to include. Users can specify the number of layouts they would like to see, and the algorithm will select that many matches with the smallest value of $E$.
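+
+ A sketch of this prioritization step, assuming word heights as the characteristic and normalizing both vectors so that the schema and raw heights are comparable (the normalization is our assumption):
+
+ ```python
+ import math
+
+ def distance(e, c):
+     return math.sqrt(sum((ci - ei) ** 2 for ei, ci in zip(e, c)))
+
+ def normalise(v):
+     m = max(v)
+     return [x / m for x in v]
+
+ def top_k(layout_heights, schema, k=5):
+     """layout_heights: one per-word height vector per variation.
+     Returns the k variations that best match the emphasis schema."""
+     e = normalise(schema)
+     scored = sorted(layout_heights,
+                     key=lambda c: distance(e, normalise(c)))
+     return scored[:k]
+ ```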
112
+
113
+ ### 3.3 Implementation
114
+
115
+ Our algorithm was implemented as a design tool in Processing ${}^{2}$ using the Geomerative library. ${}^{3}$ It is currently calibrated for the Verdana Bold font, but can be adapted for other fonts. We chose Verdana because Josephson et al. found that Verdana was the most readable among their selection of fonts [10], and it is recommended for displaying letters and digits with high legibility [5].
116
+
117
+ While our algorithm is exponential in the number of words, it works well with display text, which generally has fewer than ten words. Figure 3 shows a series of example layouts generated using our tool with varying emphasis schemas.
118
+
119
+ The number of possible layouts increases quickly with each additional word, but this is unlikely to be a computational issue for display text layouts with ten words or fewer. In one execution of the algorithm on a consumer-grade 2.50 GHz processor, our tool took 66 milliseconds to generate all layouts and select the 5 layouts that best fit the emphasis schema for 4 words (22 variations). On a longer phrase of 9 words (41,586 variations), it took 82.5 seconds.
120
+
121
+ ## 4 INTERVIEW STUDY
122
+
123
+ We conducted semi-structured interviews with five design experts to better understand design practices and preferences in packed rectilinear layouts, and to validate the efficacy of layouts generated by our tool. The goals of these interviews were:
124
+
125
+ - to understand designer preferences for packed rectilinear layouts
126
+
127
+ - to develop a hierarchy of visual emphasis methods
128
+
129
+ - to evaluate the efficacy of our layout prioritization method
130
+
131
+ This design study was split into two sessions: a within-subjects experiment involving web-based design tasks and a semi-structured interview to clarify the responses and comments from the experiment. The web-based task primed the designers to think about designing packed rectilinear layouts before the interviews.
132
+
133
+ ![01963e0e-1772-73d7-9cc9-5a3e12f7d700_3_934_149_707_374_0.jpg](images/01963e0e-1772-73d7-9cc9-5a3e12f7d700_3_934_149_707_374_0.jpg)
134
+
135
+ Figure 4: The scaling task with a scaling factor of 3.
136
+
137
+ ### 4.1 Participants
138
+
139
+ We recruited five participants (3 female, 2 male) using the Adobe Illustrator Prerelease Forum, selecting participants with at least 10 years of professional design experience (average: 22.4 years). Participants received \$100 CAD for successful completion of the study.
140
+
141
+ - P1 is a teacher with over 30 years of experience with graphic design and typesetting.
142
+
143
+ - P2 is an illustrator and multidisciplinary designer with over 10 years of graphic design and typesetting experience.
144
+
145
+ - P3 is a designer with 21 years of experience in graphic design and 15 years of typesetting experience.
146
+
147
+ - P4 is a Workflows and Adobe Instructor with 25 years of graphic design and typesetting experience.
148
+
149
+ - P5 is a graphic designer with 26 years of graphic design and typesetting experience.
150
+
151
+ ### 4.2 Procedure
152
+
153
+ This user study was divided into a priming task followed by a semi-structured interview and design task with an experimenter. The web-based task was hosted on a Google Firebase server and created using the JsPsych ${}^{4}$ framework in JavaScript.
154
+
155
+ #### 4.2.1 Priming Tasks (20 Minutes)
156
+
157
+ First, the participant completed a Scaling Task. In this web-based activity, they were asked to scale a word relative to another word using a slider until the two words were at a certain scale relative to one another (Figure 4). The interface had no visual guidance tools presented on-screen. There were three different target scales ($0.5\times$, $2\times$, and $3\times$) across four different words of varying lengths (NO, CATS, EAT, and GRASS) for a total of 12 scaling tasks per participant. This task was designed to test which metric, such as height, width, or area, designers used to determine relative size and the degree to which it matched their actual selections.
158
+
159
+ Second, the participant completed a Ranking Task. They were given an emphasis schema, and asked to rank five layout designs for the same phrase from best to worst according to how they fit the emphasis schema. This was done by dragging the image of the layout into an ordering (Figure 5). Participants were also asked about how much they liked the first and second choices in their ranking and could provide further explanation through a free-response box.
160
+
161
+ For the ranking task, we used five 4-word phrases, "ALL FROGS GO HERE", "ALL HORSES LOVE GRASS", "MY IMPALA JUMPS HIGH", "NO CATS EAT ORCAS", and "SOME CATS LIKE DOGS", each with five different emphasis schemas: (1,1,1,1), (1,1,2,1), (1,2,3,4), (4,3,2,1), and (3,2,5,1). These phrases have a variety of word-length distributions; for example, all words are the same length in "SOME CATS LIKE DOGS". Each participant completed 25 ranking tasks covering all combinations of phrases and emphasis schemas. For each task, we selected the top five layout variations based on scores from our tool and presented them to the participant in randomized order.
162
+
163
+ ---
164
+
165
+ ${}^{2}$ https://processing.org/
166
+
167
+ ${}^{3}$ http://www.ricardmarxer.com/geomerative/
168
+
169
+ ${}^{4}$ https://www.jspsych.org/
170
+
171
+ ---
172
+
173
+ ![01963e0e-1772-73d7-9cc9-5a3e12f7d700_4_157_148_706_380_0.jpg](images/01963e0e-1772-73d7-9cc9-5a3e12f7d700_4_157_148_706_380_0.jpg)
174
+
175
+ Figure 5: The ranking task with an emphasis schema of (1,1,1,1) and the phrase "MY IMPALA JUMPS HIGH". The circles represent rankings from left (highest) to right (lowest).
176
+
177
+ The order of the ranking task was grouped by emphasis schema. After each group of five phrases with the same emphasis schema, participants were asked to describe the strategies they used to rank the designs. The experimenter later used these responses to guide the semi-structured interview.
178
+
179
+ #### 4.2.2 Design Interview (40 Minutes)
180
+
181
+ The semi-structured interviews focused on six main themes related to the design of packed rectilinear layouts:
182
+
183
+ - Readability: What makes a layout readable? What are the considerations that must be made to ensure designs are understandable at different scales?
184
+
185
+ - Ambiguity: Which factors cause layouts to have ambiguous reading order or meaning?
186
+
187
+ - Alignment: How should words of different scales be aligned in packed rectilinear layouts?
188
+
189
+ - Spacing: What determines leading and spacing between words when there are words of various sizes in a layout?
190
+
191
+ - Emphasis: Which factors can be used to emphasize certain words in a layout?
192
+
193
+ - Scaling: How is relative scaling between words determined?
194
+
195
+ Designers were encouraged to share their computer screens and create designs to illustrate the ideas that they discussed in these design sessions. For example, the experimenter prompted some of the designers to resolve reading order ambiguity in a given design and present their version of the layout. Examples generated by designers are discussed below.
196
+
197
+ ## 5 RESULTS
198
+
199
+ ### 5.1 Scaling Preferences
200
+
201
+ We found that designers used height to judge the relative size of different words. The meaning of "size" in the scaling task prompt was intentionally ambiguous so that designers would use the size metrics that conformed to their internalized rules for text layout. Some possible correlates to emphasis include word height, area, length, and diagonal length. As seen in Figure 6, the user selections aligned better with estimation based on height than estimation based on area.
202
+
203
+ The average error between the user selection and the height determined by the scaling value of the given task was -17.29% ($\sigma = 18.45$, one outlier at $0.5\times$ scale removed). We also asked the designers which strategies they used to determine relative scaling, and all participants responded that they used the height of the word to determine size.
204
+
205
+ P1 judged relative scale by finding a tall letter with a flat top such as T to use as a benchmark. For words with no such letter available such as WOW or COO, they reported that they squinted and looked at the word upside down to see the perceived edges of the word without being distracted by its meaning or familiar shape. P2 also reported using a similar technique to exclude the overshoots of the rounded letters from their analysis of the shape.
206
+
207
+ All of the design experts that we interviewed mentioned "eyeballing it", or optical compensation, in reference to the spacing between two words of different font sizes and in the scaling tasks. P2 expressed how they used the overall height of the letter as a baseline in their mind to compare font scaling, but the designers did not follow references as strictly as we had previously imagined.
208
+
209
+ ### 5.2 Ranking Preferences
210
+
211
+ We compared the preferences of designers in the layout ranking portion of the study with the emphasis adherence rankings of our tool using Spearman's rank correlation coefficient, $\rho$.${}^{5}$ Comparing the rankings given by designers with the rankings determined by our tool gave $\rho = 0.99$, which indicates a very high level of agreement between the designers and our proposed rankings.
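+
+ The agreement statistic can be reproduced with SciPy; the rank vectors below are made-up examples, not the study data:
+
+ ```python
+ from scipy.stats import spearmanr
+
+ designer_ranks = [1, 2, 3, 4, 5]   # example ranking by a designer
+ tool_ranks = [1, 2, 4, 3, 5]       # example ranking by the tool
+ rho, p_value = spearmanr(designer_ranks, tool_ranks)
+ print(rho)  # 0.9 for this made-up pair; the study observed 0.99
+ ```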
212
+
213
+ ### 5.3 Semi-structured Interview
214
+
215
+ #### 5.3.1 Readability
216
+
217
+ Across all the design experts that we interviewed, the consensus was that readability was the most important consideration for display text. The designers considered left-to-right as the dominant direction that readers' eyes will move, followed by top-to-bottom. P3 explained that readers "naturally just read left [to] right, at least in Western language." P2 also expressed similar ideas about the default reading order for readers of English. For P1, absolute scale played a key part in readability, and by extension, how they ranked their preference for a layout. If any of the words in the given examples were too small to comfortably read, they automatically ranked the layout lower than the others. To ensure readability at different scales for display text layouts, P3 talked about how they would shrink the canvas to simulate reading the layout from very far away.
218
+
219
+ #### 5.3.2 Ambiguity
220
+
221
+ Reading order ambiguities arise when there are deviations from the usual Z-shaped reading order that most readers of western languages are accustomed to seeing, which prioritizes left-to-right and then top-to-bottom reading. Deviations from this reading order without additional ordering cues can confuse the reader and negatively affect understanding. When asked to elaborate on their preferences for reading order, P5 said "I'm never going to read down, I'm always going to read across unless there's a break, or some other visual clue that those things go together like color, or different font."
222
+
223
+ The designers had several approaches for reducing ambiguity in layouts. One option is to group words based on a certain attribute. During the free design portion of the interview, P4 created a layout that led users to read down by grouping based on different fonts and weights (Figure 7b). Another technique is to increase spacing between groups to make them distinct visual elements.
224
+
225
+ #### 5.3.3 Alignment
226
+
227
+ In packed rectilinear layouts, all words on the borders of the layout must be aligned to create a straight edge. Through interviews with designers, we found that this is usually determined using some form of optical margin alignment, with or without the use of the alignments defined by the font. P5 discussed how they often relied on optical bounds instead of inking boundaries to determine alignment. For example, if a word began with the letter "O" and was on the left edge, they were inclined to move it slightly more to the left to let the curve hang over the edge of the layout.
228
+
229
+ ---
230
+
231
+ ${}^{5}\rho$ ranges from -1, indicating perfectly opposite rankings, to 1, indicating identical rankings.
232
+
233
+ ---
234
+
235
+ ![01963e0e-1772-73d7-9cc9-5a3e12f7d700_5_163_152_1476_590_0.jpg](images/01963e0e-1772-73d7-9cc9-5a3e12f7d700_5_163_152_1476_590_0.jpg)
236
+
237
+ Figure 6: Scaling task selections for all participants, grouped by intended scale. The base scale was 20 pt.
238
+
239
+ ![01963e0e-1772-73d7-9cc9-5a3e12f7d700_5_167_864_692_241_0.jpg](images/01963e0e-1772-73d7-9cc9-5a3e12f7d700_5_167_864_692_241_0.jpg)
240
+
241
+ Figure 7: (a) An example of a layout that has a slanted vertical axis. (b) An example of a layout that says SOME CATS LIKE DOGS, which could be mistakenly read as SOME LIKE CATS DOGS. Grouping through font choice reduces ambiguity.
242
+
243
+ P2 introduced an interesting example of putting the vertical axis of the layout on a slant (Figure 7a). While our tool does not currently support these layouts, a tilt factor could be added to the algorithm for specific fonts that do not have a perfectly vertical $y$ axis.
244
+
245
+ #### 5.3.4 Spacing
246
+
247
+ Leading and spacing are usually font attributes, but designers often override them in display text layouts. These attributes are typically designed with body text in mind, so their defaults are often inappropriate for display text. The spacing between words is highly dependent on the specific design, so there was less consensus among the designers. In general, our participants used leading and word spacing of equal size, and used the default spacing of one of the fonts as a size reference.
248
+
249
+ P1 reported that their method for determining the approximate spacing between words in a layout with varying font sizes and packing is to take the standard space between words in the smallest font and use that as the size for leading and horizontal spaces. P2 had a slightly different approach of using double the default leading between the smallest words in the layout.
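+
+ As an illustrative formalization of P1's rule (our reading of the interview, not code from our tool), the gap used throughout a layout could be derived from the smallest font size present; the `space_ratio` default below is a hypothetical word-space fraction.
+
+ ```python
+ # Illustrative formalization of P1's spacing rule; `space_ratio` is a
+ # hypothetical default word-space as a fraction of the font size.
+ def gap_size(font_sizes, space_ratio=0.25):
+     return min(font_sizes) * space_ratio  # smallest font sets the gap
+
+ print(gap_size([20, 40, 60]))  # 5.0 units for leading and word gaps
+ ```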
250
+
251
+ In order to separate two groups of words, P2 said that the space between groups should be "at least the length or the width of one of the largest characters." They also noted that leading could be doubled to create a vertical separation between word pairs.
252
+
253
+ P4 mentioned how font weighting also affects the amount of space they choose to add between words: "when words are bolder the designer tends to give the word more space to let it breathe."
254
+
255
+ #### 5.3.5 Emphasis
256
+
257
+ Emphasis relies on the contrast between a word and its surroundings. It can be achieved through changing many different attributes such as the font, weight, colour, size, and placement. Size is the emphasis technique that we focused on in the priming task, but the designers in our study provided suggestions on how they use other techniques, depending on design needs. When asked about in-situ emphasis techniques that would not alter the layout, P1's top preference was adjusting the weight. Their second choice was to edit the font of the emphasized word, and their third choice was to use colour. When asked about factors that affect emphasis in a layout, P3 said "I think scale is probably more important than placement."
258
+
259
+ ## 6 DISCUSSION AND FUTURE WORK
260
+
261
+ ### 6.1 Text Attributes
262
+
263
+ In this work, we focused on using word height as a proxy for emphasis. Other factors, such as colour or contrast differences between words, font weight, or using italics, can also affect the level of emphasis on a given word in a phrase. We only evaluated variations in word heights due to the exponentially increasing number of possible layout variations for each attribute, but future work might explore how these attributes can be used in conjunction with height to create varying levels of emphasis for a given word.
264
+
265
+ ### 6.2 Variable Fonts
266
+
267
+ In the freestyle design portion of the interviews, many of the designers chose to use variable fonts to change the horizontal span and aspect ratio of words. Variable fonts, or OpenType Font Variations [1], are fonts with continuously adjustable parameters. While our tool did not take advantage of variable font weighting, it is a promising direction for future exploration. Variable fonts allow designers to change the emphasis of a certain word without changing the relative positions of the words in the layout, but this would require parameterizing such attributes into better emphasis metrics for determining $\overrightarrow{c}$.
268
+
269
+ During the free design portion of the study, P1 used variable fonts to make fine-grained adjustments to word weighting. In particular, they increased the weights of words that had a smaller font size to give all words in the layout similar weight despite differing sizes.
270
+
271
+ ### 6.3 Rotation
272
+
273
+ Rotation was suggested by P1 and P4 as a way to de-emphasize certain words. Our packing algorithm could also be used for text rotated 90 degrees because the underlying principle of using the aspect ratio remains the same. However, we did not investigate the effect of rotational variations using our tool because it would have drastically increased the number of layouts that we considered. A pilot study for this project found that rotation slows reading speed and can be used to de-emphasize words in a layout. Future work might explore how rotations affect designers' preferences for layouts and how to model the resulting visual relevance.
274
+
275
+ ### 6.4 Semantics
276
+
277
+ In this work, we discussed emphasis of words without a direct connection to semantics. In real-world design tasks, there is often a connection between semantic importance and emphasis. Language models could be used to detect the most important words in a layout automatically and provide a starting point for users to specify their emphasis preferences. For example, articles such as "the" or "an" are unlikely to require emphasis in a layout. Semantics could also affect the placement of words, as different clauses of the phrase might require separation. In future enhancements, semantic breaks could be entered into the algorithm to create wider gaps between different clauses and reduce reading order ambiguity.
278
+
279
+ ### 6.5 Optimization
280
+
281
+ The current implementation of the algorithm iterates over all possible layouts, but performance could be improved by caching repeated subtrees during layout variation generation to avoid repeated computation.
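+
+ A minimal sketch of the caching idea is shown below: the recursive subproblems are memoized on the length of the word subsequence (counting is shown for brevity; the same cache applies when materializing layout trees). The decomposition into `rooted` and `seq` is our formulation of the alternating-alignment recursion, not code from our implementation.
+
+ ```python
+ # Minimal sketch of memoized layout counting; functools.lru_cache
+ # plays the role of the subtree cache described above.
+ from functools import lru_cache
+
+ @lru_cache(maxsize=None)
+ def rooted(n, horiz):
+     # layouts of n consecutive words whose root alignment has the
+     # given orientation; the first child covers m < n words and the
+     # remaining words tile the other children
+     return sum((1 if m == 1 else rooted(m, not horiz)) * seq(n - m, horiz)
+                for m in range(1, n))
+
+ @lru_cache(maxsize=None)
+ def seq(n, horiz):
+     # ways to tile n words with one or more further children, each a
+     # single word or a layout rooted in the opposite orientation
+     if n == 0:
+         return 1
+     return sum((1 if m == 1 else rooted(m, not horiz)) * seq(n - m, horiz)
+                for m in range(1, n + 1))
+
+ def num_layouts(n):
+     return 1 if n == 1 else rooted(n, True) + rooted(n, False)
+
+ print([num_layouts(n) for n in range(1, 7)])  # [1, 2, 6, 22, 90, 394]
+ ```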
282
+
283
+ ### 6.6 Ambiguity and Filtering
284
+
285
+ Ambiguity filters could be used to discard certain layout variations prior to the ranking and prioritization step. The primary source of reading order ambiguity arises when non-consecutive words in the phrase have a similar height and are approximately aligned horizontally. Designers might wish to enable other filters, such as a minimum word height or a maximum size disparity between words.
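+
+ A minimal sketch of such a filter, under our reading of this heuristic (the `PlacedWord` data model and the tolerance values are hypothetical):
+
+ ```python
+ # Minimal sketch of a reading-order ambiguity filter; the data model
+ # and tolerances are hypothetical, not from our implementation.
+ from dataclasses import dataclass
+
+ @dataclass
+ class PlacedWord:
+     index: int     # position of the word within the phrase
+     height: float  # rendered word height
+     y: float       # vertical position of the word's top edge
+
+ def is_ambiguous(words, height_tol=0.1, y_tol=0.1):
+     # flag layouts where two non-consecutive words have nearly equal
+     # heights and nearly aligned vertical positions
+     for a in words:
+         for b in words:
+             if b.index > a.index + 1:
+                 close = abs(a.height - b.height) <= height_tol * a.height
+                 aligned = abs(a.y - b.y) <= y_tol * a.height
+                 if close and aligned:
+                     return True
+     return False
+ ```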
286
+
287
+ ### 6.7 Fine Tuning
288
+
289
+ While our tool provides a starting point for designers, much of design lies in details that are specific to each project. The current implementation of our tool does not provide users with dynamic control over parameters such as the space between words. Future implementations might let designers tune the generation process, narrowing the space of variations to layouts that better align with their vision.
290
+
291
+ The layouts produced by our tool would likely be post-processed by designers in professional design programs to further reduce ambiguity and enhance aesthetic qualities. Our tool currently outputs editable SVG files, but this algorithm could be implemented as a plugin for professional design tools such as Adobe Illustrator to create a seamless design experience in one application.
292
+
293
+ ## 7 CONCLUSION
294
+
295
+ In this work, we present a new tool for automatically generating packed rectangular typographical layouts and prioritizing layout variations based on emphasis schemas. The automatic generation and prioritization of design variations allow a designer to explore all combinations of packed rectilinear layouts for a given phrase without the need for manual alignment and resizing. Automatic typographical layout tools can introduce designers to possible layouts that would have otherwise been too time-consuming to explore. These suggestions can serve as a starting point for designers when creating packed rectangular display text layouts.
296
+
297
+ ## REFERENCES
298
+
299
+ [1] OpenType Font Variations Overview. https://learn.microsoft.com/en-us/typography/opentype/spec/otvaroverview.
300
+
301
+ [2] The On-line Encyclopedia of Integer Sequences® (OEIS®). https://oeis.org/A006318.
302
+
303
+ [3] C. B. Atkins. Blocked recursive image composition. In Proceedings of the 16th ACM international conference on Multimedia, pp. 821-824, 2008.
304
+
305
+ [4] M. Bauerly and Y. Liu. Computational modeling and experimental investigation of effects of compositional elements on interface and design aesthetics. International Journal of Human-Computer Studies, 64(8):670-682, 2006.
306
+
307
+ [5] B. S. Chaparro, A. D. Shaikh, A. Chaparro, and E. C. Merkle. Comparing the legibility of six cleartype typefaces to Verdana and Times New Roman. Information Design Journal, 18(1):36-49, 2010.
308
+
309
+ [6] S. Feiner. A grid-based approach to automating display layout. In Proc. Graphics Interface, vol. 88, pp. 192-197, 1988.
310
+
311
+ [7] S. J. Harrington, J. F. Naveda, R. P. Jones, P. Roetling, and N. Thakkar. Aesthetic measures for automated document layout. In Proceedings of the 2004 ACM symposium on Document engineering, pp. 109-111, 2004.
312
+
313
+ [8] C. Jacobs, W. Li, E. Schrier, D. Bargeron, and D. Salesin. Adaptive grid-based document layout. ACM Transactions on Graphics (TOG), 22(3):838-847, 2003.
314
+
315
+ [9] A. Jahanian, J. Liu, Q. Lin, D. Tretter, E. O'Brien-Strain, S. C. Lee, N. Lyons, and J. Allebach. Recommendation system for automatic design of magazine covers. In Proceedings of the 2013 international conference on Intelligent user interfaces, pp. 95-106, 2013.
316
+
317
+ [10] S. Josephson. Keeping your readers' eyes on the screen: An eye-tracking study comparing sans serif and serif typefaces. Visual communication quarterly, 15(1-2):67-79, 2008.
318
+
319
+ [11] W. F. Kraus. A data model for automatically generating typographical layouts. In 2020 IEEE International Conference on Multimedia and Expo (ICME). AAAI, 2020.
320
+
321
+ [12] J. Mackiewicz. How to use five letterforms to gauge a typeface's personality: A research-driven method. Journal of technical writing and communication, 35(3):291-315, 2005.
322
+
323
+ [13] P. O'Donovan, A. Agarwala, and A. Hertzmann. DesignScape: Design with interactive layout suggestions. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, p. 1221-1224. Association for Computing Machinery, New York, NY, USA, 2015. doi: 10.1145/2702123.2702149
324
+
325
+ [14] P. O'Donovan, A. Agarwala, and A. Hertzmann. Learning layouts for single-page graphic designs. IEEE transactions on Visualization and Computer Graphics, 20(8):1200-1213, 2014.
326
+
327
+ [15] F. Qi and B.-N. Guo. Some explicit and recursive formulas of the large and little Schröder numbers. Arab Journal of Mathematical Sciences, 23(2):141-147, 2017. doi: 10.1016/j.ajmsc.2016.06.002
328
+
329
+ [16] B. D. Sawyer, J. Dobres, N. Chahine, and B. Reimer. The great typography bake-off: comparing legibility at-a-glance. Ergonomics, 63(4):391-398, 2020.
330
+
331
+ [17] A. Shirani, F. Dernoncourt, J. Echevarria, P. Asente, N. Lipka, and T. Solorio. Let me choose: From verbal context to font selection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 8607-8613. Association for Computational Linguistics, Online, July 2020. doi: 10.18653/v1/2020.acl-main.762
332
+
333
+ [18] S. Tabata, H. Yoshihara, H. Maeda, and K. Yokoyama. Automatic layout generation for graphical design magazines. In ACM SIGGRAPH 2019 Posters, pp. 1-2. 2019.
334
+
335
+ [19] H. Thè Thanh. Micro-typographic extensions to the TeX typesetting system. 2000.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/hZlwUFmka-U/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,283 @@
1
+ § GENERATING PACKED RECTILINEAR DISPLAY TEXT LAYOUTS WITH WEIGHTED WORD EMPHASIS
2
+
3
+ Category: Graphics
4
+
5
+ § ABSTRACT
6
+
7
+ A common text layout style is a "packed rectilinear layout," in which non-overlapping word bounding boxes are packed so that their union forms a rectangle with no holes other than word and line spacing. Designing variations of these layouts while preserving word emphasis is a difficult and time-consuming process. We present a display text layout algorithm in which designers specify parameters that control the visual emphasis of words in these layouts. The number of possible layouts for a phrase follows the sequence of Big Schröder numbers as our algorithm involves the recursive subdivision of a rectangular bounding box. We conducted interviews with designers to understand their preferences and reasoning. They rated the best-fitting layouts generated by our system to be very similar to designs that they would have created themselves.
8
+
9
+ Index Terms: Human-centered computing-Human computer interaction (HCI)-Interactive systems and tools; Computer graphics-Graphics systems and interfaces
10
+
11
+ § 1 INTRODUCTION
12
+
13
+ Display text layouts are stylized typographical arrangements consisting of short phrases, used for applications like headlines, advertisements, and logos. They require skill to design because they combine both typography and graphic design. This is in contrast with body text, which is relatively simple and uniform to lay out. Designers often need to emphasize certain words in a layout to convey the intended meaning of the phrase. However, the shapes and sizes of words have a direct effect on the layout, and small changes to the text can have cascading effects on the overall layout, changing the emphasis. For example, Figure 1 is a layout generated using an Adobe Magic Text ${}^{1}$ template in which a small change to the text changes the emphasis of the layout from "healthy" to "how to." Designing aesthetically pleasing layouts that emphasize certain words is a common but time-consuming process because of the many possible layout variations for any given phrase.
14
+
15
+ The relative emphasis of words is a key factor in readability and semantics of the original phrase. Designers often wish to emphasize certain words in a layout, but they are also constrained by the shape of the layout, reading order, or the locations of less salient words. It is difficult to strike a balance between readability, semantics, and aesthetics. Our goal is to support designers in this task by generating variations of display text layouts that satisfy these constraints.
16
+
17
+ An automated and assisted display text system should ideally allow a user to specify parameters to control the visual emphasis of words in a layout without sacrificing its aesthetic quality. Existing techniques for automatically generating $2\mathrm{D}$ layouts from a given set of visual elements are mostly focused on different use cases, like magazines [9], photo collages [7], and other single-page graphic designs [14], which are less rigid in the relative placement of text elements than display text layouts.
18
+
19
+ In this work, we focus on packed rectilinear layouts, such as the example in Figure 1. These consist of words with non-overlapping bounding boxes packed so that the union of all bounding boxes forms a rectangle with no holes other than word and line spacing. Our algorithm generates all possible packed rectilinear layouts for a phrase and prioritizes the layout variations based on their adherence to the desired relative emphasis of words. We also present the results from a series of semi-structured interviews with graphic design experts that aimed to build our understanding of design decisions in creative typesetting.
20
+
21
+ <graphics>
22
+
23
+ Figure 1: Example of unintended emphasis in display text with the template approach in Adobe Magic Text: (a) template layout and colour emphasize semantically important words; (b) changing the word 'HEALTHY' to 'DECENT' significantly alters the visual emphasis of words, reducing the readability and saliency of the design. Note the new word is only one character shorter and the colours of all words are unchanged; the difference is entirely due to the layout.
24
+
25
+ § 2 BACKGROUND
26
+
27
+ The automatic layout of visual elements has been an area of extensive research, but typography imposes unique constraints on the visual layout process. Words need to be presented in an order consistent with the reading direction of the chosen script, and word emphasis depends on many visual factors.
28
+
29
+ Automatic Layout Techniques Existing work on graphic design can inform our development of a display text layout technique. Magazine covers share many similarities with display text layouts, including the need for emphasis on certain elements and constraints on design proportions. A magazine cover layout typically consists of a large background image with blocks of text around the edges of the page. Existing machine-learning approaches for magazines take the salience of the background image into account, but they do not focus on the relative positioning among elements [9, 18].
30
+
31
+ Grid-based layout algorithms divide a canvas into different areas [6, 8]. This approach works well for the layout of documents where margins between elements and irregular packing are permissible, but it cannot be easily extended to display text layouts where the relative placement of elements is constrained by reading order.
32
+
33
+ Layout approaches that involve the relative placement of elements can be applied to display text layouts if they offer controls for the overall shape of the layout. Kraus [11] proposes tree-based automatic text layout. His method describes the relationships between words through alignment operators at internal nodes, where each operator describes the relative alignment of the node's children. These trees can be traversed to create layouts using the relative positioning between nodes. Blocked Recursive Image Composition (BRIC) [3] is another visual layout technique that automates the creation of design variations. This technique arranges a set of visual elements relative to one another spatially with constraints driven by recursive decomposition of the elements. BRIC respects element aspect ratios and includes precise spacing between elements unless adjustments are necessary to preserve the aspect ratio. Elements are represented in a binary tree where each internal node describes the alignment of its children.
34
+
35
+ ${}^{1}$ https://express.adobe.com/
36
+
37
+ <graphics>
38
+
39
+ Figure 2: A layout with its corresponding tree structure. Here, H represents a horizontal alignment and V represents a vertical alignment between subtrees.
40
+
41
+ Graphic Design Principles The composition of text can be approached with layout principles that are widely used in graphic design. Bauerly et al. [4] presented two experiments that explored the effect of symmetry, balance, and quantity of construction elements on interface aesthetic judgments. In our work, we extend these principles and formalize the templates presented by Bauerly et al. in order to automatically generate visually pleasing layouts.
42
+
43
+ O'Donovan et al. [14] proposed an energy-based approach derived from design principles to analyze, create, and evaluate the design quality of layouts. In the evaluation stage, the importance of each element, labels specifying element alignment, and a grid-based segmentation are derived for an input layout. These are used as inputs to an energy function. The energy function also considers the visual salience of the image on the location of the text. Although their system produced visually pleasing results, the technique is very time-intensive and not interactive.
44
+
45
+ DesignScape [13] is another tool that provides layout suggestions for designers by varying attributes such as alignment and scale for design elements. Their tool provides layout options that can be selected as well as an adaptive interface that adjusts elements automatically with any change in the layout from the user.
46
+
47
+ Text Attributes Legibility at-a-glance is a crucial feature of successful display text layouts. Sawyer et al. [16] explored which attributes make layouts legible upon a quick glance and compared these attributes across eight popular sans serif fonts.
48
+
49
+ "Personality" is a concept that is used by designers to determine the font selection for different designs, but not a well-defined term. Researchers have tried to find empirical measures that are associated with certain moods using a subset of letters to determine font personality [12] and through crowd-sourced opinions on font connotations [17].
50
+
51
+ While past work has presented techniques for flexible layouts of visual elements in general, display text layouts have specific constraints, such as reading order and aspect ratio, and we focus this work on them. We present a technique for generating all possible packed rectilinear layouts for a text phrase and rank the layouts based on designer preferences and design principles. The design of our tool was guided by a series of interviews with expert designers.
52
+
53
+ § 3 TECHNIQUE
54
+
55
+ We present an algorithm for generating and ranking all packed rectangular layouts that are possible for a given phrase. The generated layouts adhere to these characteristics:
56
+
57
+ * each word must be to the right of or below the previous word in the phrase;
58
+
59
+ * the convex hull of all the words in the layout must closely approximate a rectangle;
60
+
61
+ * the layout must be filled with words, word spacing, or leading (the vertical space between lines).
62
+
63
+ Let $\overrightarrow{w} = (w_1, \ldots, w_n)$ be a sequence of $n$ words representing the phrase to be laid out, and let $\overrightarrow{e} = (e_1, \ldots, e_n)$ be a vector representing the designer's intended emphasis goal for each word, which we also refer to as the emphasis schema. For example, $\overrightarrow{e} = (4,1,1,3)$ means the first word should be emphasized most, followed by the fourth word, with the remaining two words equally least emphasized. The numeric value of a characteristic, such as the height or width of each word, can be represented using a characteristic vector $\overrightarrow{c} = (c_1, \ldots, c_n)$. These characteristics can be any parameterized attribute that contributes to word emphasis. Our goal is to compute an emphasis adherence score $E$ for every possible packed rectilinear layout for a given phrase.
64
+
65
+ We chose exhaustive generation of layouts because it allows us to find the optimal layouts that fit an emphasis schema and gives designers the maximum number of possible layouts to use as a template. This also provides a wider variety of layouts for us to use as examples when answering questions about aesthetics and design. Layouts that do not match the emphasis schema closely might also be valuable to the designer as a starting point for designs, so it is useful to present all variations as possibilities.
66
+
67
+ § 3.0.1 BIG SCHRÖDER NUMBERS
68
+
69
+ The constraints imposed by word aspect ratios and reading order allow us to predict exactly how many variations of each phrase are possible. The number of possible layouts for a phrase consisting of $n$ words follows the sequence of Big Schröder numbers [2]. The first ten terms of the Big Schröder number sequence are 1, 2, 6, 22, 90, 394, 1806, 8558, 41586, 206098, a sequence that grows exponentially. Big Schröder numbers describe the number of ways a rectangle can be divided into $n + 1$ rectangles using $n$ distinct guillotine cuts, which mirrors how packed rectilinear layouts are essentially subdivisions of a rectangular layout outline [15].
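+
+ A minimal sketch that reproduces these counts uses one known recurrence for the large Schröder numbers, $S_{n+1} = S_n + \sum_{k=0}^{n} S_k S_{n-k}$; the indexing assumption here is that an $n$-word phrase has $S_{n-1}$ layouts, consistent with the counts reported in Section 3.3 (4 words, 22 variations; 9 words, 41,586 variations).
+
+ ```python
+ # Minimal sketch: the first `limit` large Schroder numbers via the
+ # recurrence S(n+1) = S(n) + sum_k S(k) * S(n-k); an n-word phrase
+ # is assumed to have S(n-1) layouts.
+ def schroder(limit):
+     s = [1, 2]
+     while len(s) < limit:
+         n = len(s) - 1
+         s.append(s[n] + sum(s[k] * s[n - k] for k in range(n + 1)))
+     return s[:limit]
+
+ print(schroder(10))  # [1, 2, 6, 22, 90, 394, 1806, 8558, 41586, 206098]
+ ```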
70
+
71
+ § 3.1 LAYOUT VARIATION GENERATION
72
+
73
+ Our technique for generating all possible packed rectilinear layouts of a phrase uses a tree structure similar to the image layout algorithm BRIC [3]. All layouts can be constructed by alternating between vertical and horizontal alignments of two sets of words. The key difference in our algorithm is the presence of additional geometric and layout constraints that are inherent in typographic layouts. Typographic layouts need to be designed with constraints on reading order, and there is less flexibility in aspect ratio for each of the elements.
74
+
75
+ § 3.1.1 TREE CONSTRUCTION
76
+
77
+ Each layout variation can be characterized by a tree where each leaf node represents a word and each internal node represents a vertical or horizontal alignment between its subtrees. Nodes that share a common parent have the same height in the case of a horizontal alignment or width in the case of a vertical alignment. Figure 2 shows an example where the word TYPE is placed in a horizontal configuration with a subtree containing a vertical arrangement of the rest of the words in the phrase.
78
+
79
+ For each subdivision of the phrase into two non-empty subsequences, we recursively compute all layouts for each of the two subsequences. We then generate layout variations of the whole phrase by placing a layout from each of the two subsequence sets horizontally and vertically adjacent to one another. When placing them horizontally, we scale each recursive layout uniformly to have the same height, and when placing them vertically, we scale each to have the same width. If the first subsequence has $i$ layouts, and the second $j$ layouts, this generates ${2ij}$ combinations.
80
+
81
+ <graphics>
82
+
83
+ Figure 3: Each row shows the top 5 layouts for different emphasis goal vectors (highest to lowest match, left to right).
84
+
85
+ The alignments alternate between all vertical and all horizontal in a given level because a tree where a parent and its children have the same alignment is equivalent to one where the children have been moved to be siblings of the parent.
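+
+ A minimal sketch of this construction (our condensed formulation, not the Processing implementation): trees are generated by recursive bisection, and the equivalence just described is handled by flattening children that share their parent's alignment, so duplicate trees collapse in a set.
+
+ ```python
+ # Minimal sketch of layout-tree generation with alternation handled
+ # by flattening; a leaf is a word string and an internal node is a
+ # tuple ("H" or "V", child, child, ...).
+ from functools import lru_cache
+
+ def layouts(words):
+     words = tuple(words)
+
+     def merge(op, a, b):
+         def kids(t):
+             return t[1:] if isinstance(t, tuple) and t[0] == op else (t,)
+         return (op,) + kids(a) + kids(b)
+
+     @lru_cache(maxsize=None)
+     def gen(lo, hi):
+         if hi - lo == 1:
+             return frozenset({words[lo]})
+         out = set()
+         for k in range(lo + 1, hi):  # split words[lo:hi] at k
+             for a in gen(lo, k):
+                 for b in gen(k, hi):
+                     out.add(merge("H", a, b))  # side-by-side
+                     out.add(merge("V", a, b))  # stacked
+         return frozenset(out)
+
+     return gen(0, len(words))
+
+ print(len(layouts(["NO", "CATS", "EAT", "ORCAS"])))  # 22
+ ```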
86
+
87
+ § 3.1.2 WORD ORDER
88
+
89
+ In the tree construction process, we determine the order of the placement of the children using the order of the words with which they are associated: the recursive layout for the second subsequence of words must be to the right of the layout for the first subsequence, or below it. The resulting layout always has words that are later in the phrase placed to the right of, or under, preceding words. This follows the reading order convention for text in English, which is a Z-shaped reading order: left-to-right, top-to-bottom.
90
+
91
+ § 3.1.3 SPACING
92
+
93
+ Leading is the baseline-to-baseline vertical distance between lines of text. It is often specified as a fraction of the text size, which makes it difficult to determine leading when a display text layout uses multiple font sizes. We used equal distances for leading and horizontal space between words, with the exception of consecutive words that are the same height, which use the default horizontal spacing for the given font.
94
+
95
+ Optical margin alignment, or margin kerning, is the process of adjusting the horizontal spacing of a letter that overhangs on the margin of a piece of text to create the appearance of being aligned flush with the edge [19]. In packed rectangular display text layouts, this optical alignment is necessary for each word to achieve an optically aligned packing. We created a table of horizontal offsets, similar to an optical margin kerning table, to indicate the offsets required so that the edge of the word appears flush with the edge of the overall layout.
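+
+ The table can be thought of as a small per-glyph lookup, as in the following minimal sketch; the glyphs chosen and the offset fractions are invented for illustration and are not the values used in our implementation.
+
+ ```python
+ # Minimal sketch of an optical-offset lookup; offsets are hypothetical
+ # fractions of the font size for glyphs that should overhang the edge.
+ OPTICAL_OFFSETS = {"O": 0.02, "C": 0.02, "T": 0.01}
+
+ def left_edge_offset(word, font_size):
+     # negative result: nudge the word left so it looks optically flush
+     return -OPTICAL_OFFSETS.get(word[0], 0.0) * font_size
+
+ print(left_edge_offset("ORCAS", 60))  # -1.2
+ ```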
96
+
97
+ § 3.2 LAYOUT PRIORITIZATION
98
+
99
+ After generating layouts, we prioritize them based on the Euclidean distance between $\overrightarrow{e}$ and layout attributes $\overrightarrow{c}$. $\overrightarrow{e}$ is a vector of $n$ numbers representing the relative emphasis of each word in the phrase. The numbers are positive and do not need to be unique. $\overrightarrow{c}$ represents the values of any parameterized attribute, or characteristic, of the words in the phrase. We focus on word height in the examples presented in this work, but other attributes such as font weight and colour could also be used.
100
+
101
+ Given an emphasis schema $\overrightarrow{e}$ and characteristic vector $\overrightarrow{c}$ , both of size $n$ , the Euclidean distance can be calculated:
102
+
103
+ $$
104
+ d(\overrightarrow{e}, \overrightarrow{c}) = \sqrt{\sum_{i=1}^{n} (c_i - e_i)^2}
105
+ $$
106
+
107
+ Note that $d(\overrightarrow{e}, \overrightarrow{c})$ can also be calculated using other distance metrics, such as cosine similarity, but we did not find empirically that using them made a noticeable difference in layout quality. The final emphasis score, $E$, can be calculated as a linear combination of the values of $d(\overrightarrow{e}, \overrightarrow{c})$ for all the characteristic vectors that the designer wishes to include. Users can specify the number of layouts they would like to see, and the algorithm will select that many matches with the smallest value of $E$.
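+
+ A minimal sketch of this prioritization step follows; the candidate data and the assumption that $\overrightarrow{e}$ and $\overrightarrow{c}$ are already in comparable units are ours.
+
+ ```python
+ # Minimal sketch: score candidate layouts by Euclidean distance
+ # between the emphasis schema e and each characteristic vector c,
+ # then keep the k best; sample values are hypothetical.
+ import math
+
+ def distance(e, c):
+     return math.sqrt(sum((ci - ei) ** 2 for ei, ci in zip(e, c)))
+
+ def top_k(candidates, e, k=5):
+     # candidates: (layout_id, characteristic_vector) pairs
+     return sorted(candidates, key=lambda lc: distance(e, lc[1]))[:k]
+
+ e = (4, 1, 1, 3)  # emphasize word 1 most, then word 4
+ candidates = [("layout-A", (4.1, 0.9, 1.2, 2.8)),
+               ("layout-B", (2.0, 2.0, 2.0, 2.0))]
+ print(top_k(candidates, e, k=1))  # [('layout-A', ...)]
+ ```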
108
+
109
+ § 3.3 IMPLEMENTATION
110
+
111
+ Our algorithm was implemented as a design tool in Processing${}^{2}$ using the Geomerative library.${}^{3}$ It is currently calibrated for the Verdana Bold font, but can be adapted for other fonts. We chose Verdana because Josephson found that Verdana was the most readable among their selection of fonts [10], and it is recommended for displaying letters and digits with high legibility [5].
112
+
113
+ While our algorithm is exponential in the number of words, it works well with display text, which generally has fewer than ten words. Figure 3 shows a series of example layouts generated using our tool with varying emphasis schemas.
114
+
115
+ The number of possible layouts increases quickly with each additional word, but this is unlikely to be a computational issue for display text layouts with ten words or fewer. In one execution of the implementation of the algorithm on a consumer-grade 2.50 GHz processor, our tool took 66 milliseconds to generate all layouts and select the 5 layouts that best fit the emphasis schema for 4 words (22 variations). On longer phrases of 9 words (41,586 variations), it took 82.5 seconds.
116
+
117
+ § 4 INTERVIEW STUDY
118
+
119
+ We conducted semi-structured interviews with five design experts to better understand design practices and preferences in packed rectilinear layouts, and to validate the efficacy of layouts generated by our tool. The goals of these interviews were:
120
+
121
+ * to understand designer preferences for packed rectilinear layouts
122
+
123
+ * to develop a hierarchy of visual emphasis methods
124
+
125
+ * to evaluate the efficacy of our layout prioritization method
126
+
127
+ This design study was split into two sessions: a within-subjects experiment involving web-based design tasks and a semi-structured interview to clarify the responses and comments from the experiment. The web-based task primed the designers to think about designing packed rectilinear layouts before the interviews.
128
+
129
+ <graphics>
130
+
131
+ Figure 4: The scaling task with a scaling factor of 3.
132
+
133
+ § 4.1 PARTICIPANTS
134
+
135
+ We recruited participants (3 female, 2 male) using the Adobe Illustrator Prerelease Forum, and selected those with at least 10 years of professional design experience (average 22.4 years). Participants received $100 CAD for successful completion of the study.
136
+
137
+ * P1 is a teacher with over 30 years of experience with graphic design and typesetting.
138
+
139
+ * P2 is an illustrator and multidisciplinary designer with over 10 years of graphic design and typesetting experience.
140
+
141
+ * P3 is a designer with 21 years of experience in graphic design and 15 years of typesetting experience.
142
+
143
+ * P4 is a Workflows and Adobe Instructor with 25 years of graphic design and typesetting experience.
144
+
145
+ * P5 is a graphic designer with 26 years of graphic design and typesetting experience.
146
+
147
+ § 4.2 PROCEDURE
148
+
149
+ This user study was divided into a priming task followed by a semi-structured interview and design task with an experimenter. The web-based task was hosted on a Google Firebase server and created using the jsPsych${}^{4}$ framework in JavaScript.
150
+
151
+ § 4.2.1 PRIMING TASKS (20 MINUTES)
152
+
153
+ First, the participant completed a Scaling Task. In this web-based activity, they were asked to scale a word relative to another word using a slider until the two words were at certain scales relative to one another (Figure 4). The interface had no visual guidance tools presented on-screen. There were three different target scales (0.5×, 2×, and 3×) across four different words of varying lengths (NO, CATS, EAT, and GRASS) for a total of 12 scaling tasks per participant. This task was designed to test which metric, such as height, width, or area, designers used to determine relative size and the degree to which it matched their actual selections.
154
+
155
+ Second, the participant completed a Ranking Task. They were given an emphasis schema, and asked to rank five layout designs for the same phrase from best to worst according to how they fit the emphasis schema. This was done by dragging the image of the layout into an ordering (Figure 5). Participants were also asked about how much they liked the first and second choices in their ranking and could provide further explanation through a free-response box.
156
+
157
+ For the ranking task, we used five 4-word phrases, "ALL FROGS GO HERE", "ALL HORSES LOVE GRASS", "MY IMPALA JUMPS HIGH", "NO CATS EAT ORCAS", and "SOME CATS LIKE DOGS", each with five different emphasis schemas: (1,1,1,1), (1,1,2,1), (1,2,3,4), (4,3,2,1), and (3,2,5,1). These phrases have a variety of word-length distributions; for example, all words are the same length in "SOME CATS LIKE DOGS". Each participant completed 25 ranking tasks covering all combinations of phrases and emphasis schemas. For each task, we selected the top five layout variations based on scores from our tool and presented them to the participant in randomized order.
158
+
159
+ ${}^{2}$ https://processing.org/
160
+
161
+ ${}^{3}$ http://www.ricardmarxer.com/geomerative/
162
+
163
+ ${}^{4}$ https://www.jspsych.org/
164
+
165
+ <graphics>
166
+
167
+ Figure 5: The ranking task with an emphasis schema of (1,1,1,1) and the phrase "MY IMPALA JUMPS HIGH". The circles represent rankings from left (highest) to right (lowest).
168
+
169
+ The order of the ranking task was grouped by emphasis schema. After each group of five phrases with the same emphasis schema, participants were asked to describe the strategies they used to rank the designs. The experimenter later used these responses to guide the semi-structured interview.
170
+
171
+ § 4.2.2 DESIGN INTERVIEW (40 MINUTES)
172
+
173
+ The semi-structured interviews focused on six main themes related to the design of packed rectilinear layouts:
174
+
175
+ * Readability: What makes a layout readable? What are the considerations that must be made to ensure designs are understandable at different scales?
176
+
177
+ * Ambiguity: Which factors cause layouts to have ambiguous reading order or meaning?
178
+
179
+ * Alignment: How should words of different scales be aligned in packed rectilinear layouts?
180
+
181
+ * Spacing: What determines leading and spacing between words when there are words of various sizes in a layout?
182
+
183
+ * Emphasis: Which factors can be used to emphasize certain words in a layout?
184
+
185
+ * Scaling: How is relative scaling between words determined?
186
+
187
+ Designers were encouraged to share their computer screens and create designs to illustrate the ideas that they discussed in these design sessions. For example, the experimenter prompted some of the designers to resolve reading order ambiguity in a given design and present their version of the layout. Examples generated by designers are discussed below.
188
+
189
+ § 5 RESULTS
190
+
191
+ § 5.1 SCALING PREFERENCES
192
+
193
+ We found that designers used height to judge the relative size of different words. The meaning of "size" in the scaling task prompt was intentionally ambiguous so that designers would use the size metrics that conformed to their internalized rules for text layout. Some possible correlates to emphasis include word height, area, length, and diagonal length. As seen in Figure 6, the user selections aligned better with estimation based on height than estimation based on area.
194
+
195
+ The average error between user selection and the height determined by the scaling value of the given task was -17.29% ($\sigma = 18.45$, one outlier at $0.5\times$ scale removed). We also asked the designers which strategies they used to determine relative scaling, and all participants responded that they used the height of the word to determine size.
196
+
197
+ P1 judged relative scale by finding a tall letter with a flat top, such as T, to use as a benchmark. For words with no such letter available, such as WOW or COO, they reported that they squinted and looked at the word upside down to see the perceived edges of the word without being distracted by its meaning or familiar shape. P2 also reported using a similar technique to exclude the overshoots of the rounded letters from their analysis of the shape.
198
+
199
+ All of the design experts that we interviewed mentioned "eyeballing it", or optical compensation, in reference to the spacing between two words of different font sizes and in the scaling tasks. P2 expressed how they used the overall height of the letter as a baseline in their mind to compare font scaling, but the designers did not follow references as strictly as we had previously imagined.
200
+
201
+ § 5.2 RANKING PREFERENCES
202
+
203
+ We compared the preferences of designers in the layout ranking portion of the study with the emphasis adherence of our tool using Spearman's rank correlation coefficient, $\rho$.${}^{5}$ Comparing the rankings given by designers and the rankings determined by our tool gave $\rho = 0.99$, which indicates a very high level of agreement between the designers and our proposed rankings.
204
+
205
+ § 5.3 SEMI-STRUCTURED INTERVIEW
206
+
207
+ § 5.3.1 READABILITY
208
+
209
+ Across all the design experts that we interviewed, the consensus was that readability was the most important consideration for display text. The designers considered left-to-right as the dominant direction that readers' eyes will move, followed by top-to-bottom. P3 explained that readers "naturally just read left [to] right, at least in Western language." P2 also expressed similar ideas about the default reading order for readers of English. For P1, absolute scale played a key part in readability, and by extension, how they ranked their preference for a layout. If any of the words in the given examples were too small to comfortably read, they automatically ranked the layout lower than the others. To ensure readability at different scales for display text layouts, P3 talked about how they would shrink the canvas to simulate reading the layout from very far away.
210
+
211
+ § 5.3.2 AMBIGUITY
212
+
213
+ Reading order ambiguities arise when there are deviations from the usual Z-shaped reading order that most readers of western languages are accustomed to seeing, which prioritizes left-to-right and then top-to-bottom reading. Deviations from this reading order without additional ordering cues can confuse the reader and negatively affect understanding. When asked to elaborate on their preferences for reading order, P5 said "I'm never going to read down, I'm always going to read across unless there's a break, or some other visual clue that those things go together like color, or different font."
214
+
215
+ The designers had several approaches for reducing ambiguity in layouts. One option is to group words based on a certain attribute. During the free design portion of the interview, P4 created a layout that led users to read down by grouping based on different fonts and weights (Figure 7b). Another technique is to increase spacing between groups to make them distinct visual elements.
216
+
217
+ § 5.3.3 ALIGNMENT
218
+
219
+ In packed rectilinear layouts, all words on the borders of the layout must be aligned to create a straight edge. Through interviews with designers, we found that this is usually determined using some form of optical margin alignment, with or without the use of the alignments defined by the font. P5 discussed how they often relied on optical bounds instead of inking boundaries to determine alignment. For example, if a word began with the letter "O" and was on the left edge, they were inclined to move it slightly more to the left to let the curve hang over the edge of the layout.
220
+
221
+ ${}^{5}\rho$ ranges from -1, indicating perfectly opposite rankings, to 1, indicating identical rankings.
222
+
223
+ <graphics>
224
+
225
+ Figure 6: Scaling task selections for all participants, grouped by intended scale. The base scale was 20 pt.
226
+
227
+ <graphics>
228
+
229
+ Figure 7: (a) An example of a layout that has a slanted vertical axis. (b) An example of a layout that says SOME CATS LIKE DOGS, which could be mistakenly read as SOME LIKE CATS DOGS. Grouping through font choice reduces ambiguity.
230
+
231
+ P2 introduced an interesting example of putting the vertical axis of the layout on a slant (Figure 7a). While our tool does not currently support these layouts, a tilt factor could be added to the algorithm for specific fonts that do not have a perfectly vertical $y$ axis.
232
+
233
+ § 5.3.4 SPACING
234
+
235
+ Leading and spacing are usually font attributes, but designers often override them in display text layouts. These attributes are typically designed with body text in mind, so their defaults are often inappropriate for display text. The spacing between words is highly dependent on the specific design, so there was less consensus among the designers. In general, our participants used leading and word spacing of equal size, and used the default spacing of one of the fonts as a size reference.
236
+
237
+ P1 reported that their method for determining the approximate spacing between words in a layout with varying font sizes and packing is to take the standard space between words in the smallest font and use that as the size for leading and horizontal spaces. P2 had a slightly different approach of using double the default leading between the smallest words in the layout.
238
+
239
+ In order to separate two groups of words, P2 said that the space between groups should be "at least the length or the width of one of the largest characters." They also noted that leading could be doubled to create a vertical separation between word pairs.
240
+
241
+ P4 mentioned how font weighting also affects the amount of space they choose to add between words: "when words are bolder the designer tends to give the word more space to let it breathe."
242
+
243
+ § 5.3.5 EMPHASIS
244
+
245
+ Emphasis relies on the contrast between a word and its surroundings. It can be achieved through changing many different attributes such as the font, weight, colour, size, and placement. Size is the emphasis technique that we focused on in the priming task, but the designers in our study provided suggestions on how they use other techniques, depending on design needs. When asked about in-situ emphasis techniques that would not alter the layout, P1's top preference was adjusting the weight. Their second choice was to edit the font of the emphasized word, and their third choice was to use colour. When asked about factors that affect emphasis in a layout, P3 said "I think scale is probably more important than placement."
246
+
247
+ § 6 DISCUSSION AND FUTURE WORK
248
+
249
+ § 6.1 TEXT ATTRIBUTES
250
+
251
+ In this work, we focused on using word height as a proxy for emphasis. Other factors, such as colour or contrast differences between words, font weight, or using italics, can also affect the level of emphasis on a given word in a phrase. We only evaluated variations in word heights due to the exponentially increasing number of possible layout variations for each attribute, but future work might explore how these attributes can be used in conjunction with height to create varying levels of emphasis for a given word.
252
+
253
+ § 6.2 VARIABLE FONTS
254
+
255
+ In the freestyle design portion of the interviews, many of the designers chose to use variable fonts to change the horizontal span and aspect ratio of words. Variable fonts, or OpenType Font Variations [1], are fonts with continuously adjustable parameters. While our tool did not take advantage of variable font weighting, it is a promising direction for future exploration. Variable fonts allow designers to change the emphasis of a certain word without changing the relative positions of the words in the layout, but this would require parameterizing such attributes into better emphasis metrics for determining $\overrightarrow{c}$.
256
+
257
+ During the free design portion of the study, P1 used variable fonts to make fine-grained adjustments to word weighting. In particular, they increased the weights of words that had a smaller font size to give all words in the layout similar weight despite differing sizes.
258
+
259
+ § 6.3 ROTATION
260
+
261
+ Rotation was suggested by P1 and P4 as a way to de-emphasize certain words. Our packing algorithm could also be used for text rotated 90 degrees because the underlying principle of using the aspect ratio remains the same. However, we did not investigate the effect of rotational variations using our tool because it would have drastically increased the number of layouts that we considered. A pilot study for this project found that rotation slows reading speed and can be used to de-emphasize words in a layout. Future work might explore how rotations affect designers' preferences for layouts and how to model the resulting visual relevance.
262
+
263
+ § 6.4 SEMANTICS
264
+
265
+ In this work, we discussed emphasis of words without a direct connection to semantics. In real-world design tasks, there is often a connection between semantic importance and emphasis. Language models could be used to detect the most important words in a layout automatically and provide a starting point for users to specify their emphasis preferences. For example, articles such as "the" or "an" are unlikely to require emphasis in a layout. Semantics could also affect the placement of words, as different clauses of the phrase might require separation. In future enhancements, semantic breaks could be entered into the algorithm to create wider gaps between different clauses and reduce reading order ambiguity.
266
+
267
+ § 6.5 OPTIMIZATION
268
+
269
+ The current implementation of the algorithm iterates over all possible layouts, but performance could be improved by caching repeated subtrees during layout variation generation to avoid repeated computation.
270
+
271
+ § 6.6 AMBIGUITY AND FILTERING
272
+
273
+ Ambiguity filters could be used to discard certain layout variations prior to the ranking and prioritization step. The primary source of reading order ambiguity arises when non-consecutive words in the phrase have a similar height and are approximately aligned horizontally. Designers might wish to enable other filters, such as a minimum word height or a maximum size disparity between words.
274
+
275
+ § 6.7 FINE TUNING
276
+
277
+ While our tool provides a starting point for designers, much of design lies in details that are specific to each project. The current implementation of our tool does not provide users with dynamic control over parameters such as the space between words. Future implementations might let designers tune the generation process, narrowing the space of variations to layouts that better align with their vision.
278
+
279
+ The layouts produced by our tool would likely be post-processed by designers in professional design programs to further reduce ambiguity and enhance aesthetic qualities. Our tool currently outputs editable SVG files, but this algorithm could be implemented as a plugin for professional design tools such as Adobe Illustrator to create a seamless design experience in one application.
280
+
281
+ § 7 CONCLUSION
282
+
283
+ In this work, we present a new tool for automatically generating packed rectangular typographical layouts and prioritizing layout variations based on emphasis schemas. The automatic generation and prioritization of design variations allow a designer to explore all combinations of packed rectilinear layouts for a given phrase without the need for manual alignment and resizing. Automatic typographical layout tools can introduce designers to possible layouts that would have otherwise been too time-consuming to explore. These suggestions can serve as a starting point for designers when creating packed rectangular display text layouts.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/ivIPr2ukrwk/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,473 @@
1
+ # Supporting Visual Comparison and Pattern Identification in Widescale Genomic Datasets
2
+
3
+ Category: Research
4
+
5
+ ![01963e09-aaee-7674-9d5b-0f9ef7d3d570_0_222_386_1354_676_0.jpg](images/01963e09-aaee-7674-9d5b-0f9ef7d3d570_0_222_386_1354_676_0.jpg)
6
+
7
+ Figure 1: Visualization comparing SNPs at a specific genomic region in 52 varieties of Canola. The varieties are ordered by their increasing levels of aliphatic glucosinolates. Every SNP is coloured blue (match) or red (mismatch) based on its similarity to the reference variety at the top; missing SNPs are encoded in white. A reference map of phenotypic trait values is shown at left, and connections to genomic location are shown at bottom.
8
+
9
+ ## Abstract
10
+
11
+ Large-scale linear datasets are often visualized using a tabular structure (rows and columns). Visual analysis tasks in such systems involve comparisons and identification of patterns across rows and columns, but these tasks can be hard to perform as the table grows, because rows and columns of interest can be far apart. This problem is particularly evident in table visualizations of genomic datasets like SNPs, which are genetic markers used in comparing different variants of an organism. Visual analysis of SNP datasets has a wide range of applications in plant breeding, genome-wide association studies, and pharmacogenetics. However, current SNP visualizations are limited in their support for complex analytic tasks in wide-scale tables. Through ongoing collaborations with genomic researchers and plant breeders, we have identified a set of new interaction requirements for visual analysis of SNP datasets, and we have developed a visualization tool with interaction techniques that satisfy those requirements. Our requirements and techniques provide new understanding of how to support complex visual analysis in large-scale table visualizations.
12
+
13
+ Index Terms: Human-centered computing-Visualization-Visualization systems and tools-Visualization toolkits; Human-centered computing-Interaction design-Interaction design process and methods-User Interface design.
## 1 INTRODUCTION

In many visual-analytics domains, analysts use wide linear datasets that have many features or observations about each set of entities - e.g., genomic data, time-series data, sequential documents, or population data. These datasets are often displayed using table visualizations in which each cell's value is encoded using a visual variable such as colour (e.g., [65, 75, 94]). A main goal in working with table visualizations is to find insights that are based on seeing patterns in the visualized data: e.g., determining that a particular row differs from a reference row in an important way, that a particular column shows a pattern across the different rows, or that two columns show a similar (or contrasting) pattern to each other. These tasks involve two main activities in the visual workspace: finding patterns in the rows and columns that indicate potential correlations, and comparing rows or columns (either to a reference or to other parts of the data).

In the genomics domain, a common example of wide datasets is Single Nucleotide Polymorphism data: SNPs are genetic differences between genomes at a single base pair, and can be important in understanding the relationship of an organism's genotype to its phenotype (i.e., its observable traits). For example, SNP analysis is extremely common in plant breeding research, since SNPs have proven to be important markers for desirable crop traits such as flowering time, disease resistance, or protein content.

Plant breeders and genomic researchers are now able to quickly and easily produce datasets that collect large sets of SNPs (numbering from hundreds to tens of thousands) for many different varieties of a crop - e.g., Figure 1 shows a visualization of SNPs in 52 varieties of Canola. When a collection of SNPs is inherited together near a common locus, the collection is referred to as a haplotype because it indicates a potential genetic linkage. Studying these clusters of SNPs and the DNA around their locations can help researchers identify specific mutations that affect the plant's characteristics, and can help breeders identify candidate genes for future crossings (although SNPs occur both in genes and in non-coding regions).

Many tools have been introduced that visualize SNP haplotypes, but few systems have focused on the interactions that breeders and genomic researchers need to carry out during exploratory investigations. Current tools are limited in their support for visual exploration - particularly in terms of lightweight visual comparisons in the wide datasets that are now common in breeding (e.g., tens of thousands of columns). For example, mechanisms for navigating and comparing different columns in wide tables are of particular importance because most genetic locations in a plant's genome have dependencies and related locations that may be far away (e.g., due to the polyploid nature of many plant genomes that leads to multiple copies of genes).

To better support visual exploration in wide datasets, we have been working with genomic researchers and plant breeders for the past five years to identify specific analysis tasks in large SNP tables, and interaction requirements that will support those tasks. We identified the following six specific requirements:
- Flexible and fast re-ordering mechanisms so that users can quickly look at several arrangements of the SNP table (e.g., different domain-specific clustering and sorting methods as well as manual re-ordering);

- Lightweight row comparisons that allow temporary changes to encodings so that a quick comparison can be made without altering the overall organization of the table (e.g., being able to check the difference between two rows without re-setting the reference row);

- Comparisons between related columns that allow multiple genetic locations to be compared even if they are far apart in the table (e.g., comparing SNPs at two locations that have orthologous genes);

- Flexible encoding of differences that allows users to rapidly switch between the variety of ways in which the "difference" between two plant varieties can be shown (e.g., alternate colour schemes to show the existence of difference from a reference, 'cascading' differences, the type of difference, or the specific details for both varieties);

- Support for location awareness because the scale and organization of SNP table visualizations can lead to difficulty in tracking where a SNP is in the plant's genome (e.g., whether a SNP is in an important region that is known to control other traits);

- Managing and revisiting table configurations to simplify navigation through the huge "configuration space" of ways that the user's current view of the table can be ordered, encoded, and positioned (e.g., keeping track of what other clustering approaches have been tried, or how to get back to a previously-viewed configuration of the table).

We have developed a new SNP-haplotype viewer that provides novel interaction techniques to meet these requirements. The viewer provides lightweight mechanisms for arranging the table, comparing rows and columns, and looking at different encodings; it also shows explicit information about genomic and table location, and includes a 'configuration snapshot' tool that provides automatic and manual saving of configurations, as well as visualization of the saved states so that they can be compared, revisited, and annotated.

Our work makes two main contributions: first, we identify several new interaction requirements for visual analysis of wide linear datasets - these arise from our collaborations in the plant-breeding domain, but the requirements also apply to other types of wide tabular data; and second, we demonstrate new interaction techniques that satisfy those requirements in a working genomics visualization tool. Our SNP-haplotype visualization is open-source and is freely available at [address removed for review].
## 2 BACKGROUND AND RELATED WORK

Three areas of prior work underlie our research: systems and techniques for table visualizations, techniques for and studies of visual comparison, and genomic visualizations of SNP data.

### 2.1 Visualizations of Tables

Tables have long been a standard way of communicating structured information using spatial layout. Table visualizations - which encode each cell's data value with a visual variable (e.g., colour, size, or position within the cell) - have also been in use for more than a century, and have been well known since Bertin's work (e.g., [8] and others as reviewed by Perin et al. [75]). Table visualizations (sometimes called heat maps or colour-shaded matrices) allow large tables to be inspected and explored in a relatively small space, and tools for making visual tables are now a standard part of many visualization systems such as Tableau (tableau.com), PowerBI (powerbi.microsoft.com), and ggplot2 (ggplot2.tidyverse.com).

Table visualizations have been used in many different ways and in many different domains: for example, to summarize the characteristics of a set of locations [8, 41, 60]; to show the magnitude of a variable of interest (e.g., expression level or abundance of ions) for different samples [46, 66, 94]; to explore student engagement in online classes [19]; to explore database contents [54]; to show interactions in social networks [32]; to analyse energy demand over time for different buildings [98]; or to track employee performance through a set of criteria [99].

Some of the primary goals when visualizing tables are to help users understand relationships between the entities represented in the table's rows, the features or characteristics represented in its columns, and associations between rows and columns. Analytics work in many domains where table visualizations are used is often open-ended and under-specified: for example, in the domain of genomics, Nusrat states "data visualization is essential for interpretation and hypothesis generation as well as a valuable aid in communicating discoveries. Visual tools bridge the gap between algorithmic approaches and the cognitive skills of investigators. [...] A key challenge in data-driven research is to discover unexpected patterns and to formulate hypotheses in an unbiased manner in vast amounts of genomic and other associated data" ([72], p. 781).
Within this context, researchers have investigated many different aspects of designing, interpreting, and interacting with table visualizations. First, several projects have considered the problem of generating table visualizations: for example, Perin and colleagues revisited Bertin's early methodology for producing visual encodings inside table cells, and developed a tool for interactively creating table visualizations with a range of visual variables; others have developed tools for quickly creating table visualizations from spreadsheets [10] and arbitrary CSV files [12]. Researchers have also considered how to provide access to the table's values within the visualization: for example, Rao and Card's Table Lens provided a bar-chart encoding of cell values and mechanisms for quickly sorting by column, and used a focus+context mechanism to allow detailed inspection of certain rows within the graphic presentation [79]; the Table Lens has also been extended by other researchers to allow multiple colour maps and clustering support [52]. A different approach was explored by Han and Nacenta, who created "Fat Fonts" that show both a scalar value and provide a visual representation of the value through amount of ink [37]. Table representations have also been adapted to show hierarchical data (e.g., [25, 56]).

Second, many researchers have investigated ways of ordering and arranging a table to best reveal patterns in the data. Careful manual arrangement of rows and columns was an important part of Bertin's original methodology [8], and many tools allow manual reordering of rows and columns. However, with larger datasets, manual ordering is not feasible, so automated algorithms for clustering or "pattern mining" [23, 50] are often employed - these can use similarity (e.g., genetic similarity) to create a tree from the table's rows [94], or can look for visual patterns in the table data (e.g., [9, 21, 52, 55, 75]).

Third, many systems provide explicit support for specific tasks, such as ranking candidates (e.g., [35, 92]), interactively looking for patterns (e.g., [9]), navigating through versions of tables that change over time (e.g., [76]), extracting and comparing data subsets from different tables (e.g., [33]), working with event sequences [36], or dimensionality reduction (e.g., [9, 25]).

Finally, matrix visualizations are a subtype of table visualizations in which the two dimensions of the table represent the same features for two entities, and each cell represents a degree of association between the entities for that feature. Matrix visualizations have also been used in many domains: for example, to show graphs and networks (e.g., [7, 43, 44]), term co-occurrence (e.g., [32]), genomic similarity (e.g., [38]), physical connections in folded structures (e.g., [20, 95]), software evolution (e.g., [83]), or classification errors (i.e., confusion matrices [31]). Researchers have also investigated several novel representations for matrices, including dual views that pair a matrix with its corresponding node-link diagram [42], integration of matrices into existing node-link structures [44], extensions that allow display of multivariate data [97], and 'matrices of heatmaps' to increase the number of dimensions that can be shown [82].

### 2.2 Supporting Visual Comparisons in Visualizations
Comparisons are a common and frequent task in visual analytics, and techniques for supporting comparison have been widely studied. Many techniques can be classified using the three approaches proposed by Gleicher: juxtaposition, superimposition, and explicit encoding [27, 28, 62]. Juxtaposition involves placing visualizations in close proximity, in order to allow users to see similarities and differences in parallel parts of the visualizations - e.g., if two line charts are presented side by side, viewers can compare values and trends in the charts (as long as all representations use the same layout and scale, so that visual differences accurately reflect differences in the underlying data). A common technique that juxtaposes several visualizations is the small-multiples method [8]: each of the multiples has a similar layout but different data, allowing comparisons by looking across the images. This idea has been used in many ways, including well-known techniques such as scatterplot matrices [45], as well as extensions to immersive environments (e.g., [57]). Juxtaposition can also be achieved interactively: for example, Tominski's CompaRing approach brings comparison candidates close to the cursor when the user selects an object [88].

Superimposition involves putting two datasets in the same visualization so that differences are visible in the same reference frame - e.g., instead of showing two line charts side by side, the two lines can be shown in the same chart. Superimposition puts the datasets into the same reference frame, allowing similarities and differences to be seen more clearly. However, this method has the problem of clutter: the density of some representations means that they do not work well as overlays (e.g., space-filling methods or dense data spaces), and the approach works best with sparse data (although the visual presentations can be adjusted to reduce occlusion).

Explicit encoding of a comparison involves creating a new dataset that explicitly represents a specific comparison, and then visualizing it - e.g., the data from two line charts can be used to create a new dataset showing the difference between the lines, and this new dataset can then be shown explicitly as a new line (either in addition to or instead of the existing lines). There are many types of explicit encoding that are possible: for example, showing the existence of differences, the magnitude of differences, or the type of differences (limited only by the ways in which two datasets can be compared) [69]. Researchers have demonstrated several explicit-encoding methods in visualization research, including colour-based differences (e.g., showing same/different colouring, or amount of difference), "diff matrices" that show differences between pairs of lines, displayed in a matrix [85], annotations that indicate differences in one of the representations being compared (e.g., coloured lines showing missing or added elements in a tree [11]), differences between tables at different time periods [69], changes between video frames [13], or "shine-through" representations to highlight differences in overlays [89].

Researchers have also extended Gleicher's three basic categories to include other representations. Different visualizations can be presented sequentially in the same location, either using the idea of Rapid Serial Visual Presentation (RSVP) [5], or using animation to smoothly morph from one dataset to another [22]. This technique is a combination of juxtaposition and superimposition using time (i.e., temporal juxtaposition), and can address the occlusion problem while still making use of the common spatial frame. Tominski showed a variation on this idea in a technique that allowed the user to 'peel back' a top representation to look at the bottom representation [89]. Other researchers have extended the idea of juxtaposition by nesting one visualization inside another, which allows different types of comparisons [49], and have introduced the concept of overloading one representation with details from another - e.g., showing graph elements that are present in one visualization but not in another [48].

In addition to comparison approaches based on spatial layout, researchers have also considered the actions and interactions that are part of visual comparison tasks. For example, von Landesberger specified the workflow involved in a visual comparison task [91]; Wu developed a "view composition algebra" to understand and compose actions in ad-hoc comparison settings [96]; Jardine and colleagues investigated the low-level perceptual processes involved in visual comparison [47]; and Kehrer and colleagues defined a formal model of category comparisons in small-multiple displays [53]. An additional higher-level consideration is the amount of effort required to carry out a visual comparison - low-effort techniques are critically important for supporting effective exploration of large datasets. A few researchers have explicitly focused on effort reduction - for example, Tominski's CompaRing, which reduced the steps required to bring comparators into juxtaposition [88].

Several studies have also been conducted to look at the performance of different techniques for supporting visual comparisons. Early perceptual studies investigated performance on comparisons between elements in bar charts [14] and individual differences in same/different visual comparison tasks [15]. Several studies have followed up on these results to look at comparisons in standard chart types (e.g., [86, 87]), the effect of chart size and space usage on interpretation [39], and the effect of glyph types on reading and comparing time-series visualizations [24]. Several researchers have evaluated the basic processes involved in visual comparison: for example, Lu and colleagues created a model of just-noticeable differences as the basis of visual comparison, and explored this idea with bar charts, bubble charts, and pie charts [61], and Ondov and colleagues studied low-level perceptual tasks to compare performance in several presentation styles (overlays, small multiples, and animated transitions) [73]. Other studies have considered specific representations or analysis scenarios: for example, user performance in visual comparison, slope estimation, and discrimination tasks for multiple time-series visualizations [49]; the performance of square and triangular matrix representations as well as different methods of matrix juxtaposition [59]; the effectiveness of small multiples compared to animated transitions for seeing changes in graphs [2]; and user performance when comparing ranked data in tables [6].
### 2.3 Genomic Visualizations and SNP Haplotypes

There are many types of genomic visualization that are used to show a wide range of information - for example, sequences and sequence alignment, levels of gene expression or ion abundance, conserved regions of the genome (i.e., synteny), or structural variation across different samples (e.g., [1, 3, 18, 63, 64, 72, 80, 100] - see [72] for a broad survey). In particular, recent advances in sequencing capabilities and the increasing availability of genomic data have led to the use of genetic analysis and genomic visualization in the domain of plant breeding, where one of the main goals is to connect a crop plant's genotype to its phenotype - the observable characteristics or traits of the plant. Plant breeders and genomic scientists investigate how genetics affect important crop traits such as oil and protein content, plant height, resistance to disease, or heat tolerance; this knowledge can be used to create hypotheses and choose candidates for breeding in order to try to introduce and retain desirable traits [51].

Although complete sequencing of individual genomes is still time-consuming, it has become feasible to identify large numbers of genetic markers in a genome using the "genotyping-by-sequencing" approach [17, 77] that generates sets of markers called SNPs for a variety. SNP markers are often associated with differences in traits of interest, and so SNP visualizations are an important part of marker-assisted breeding [51].

Several systems have been developed for showing SNP data, including capabilities in general-purpose genomic visualization tools (e.g., JBrowse [18] or Gosling [63]) as well as dedicated applications such as Haploview [4], Flapjack [65], SNP-Vista [84], or GCViT [93]. These systems often show table visualizations with individuals in rows and SNPs in columns, as well as association matrices that show co-occurrence of different alleles within a haplotype group [4], or histograms of SNP counts within a given window size [93]. Many tools provide clustering capabilities (e.g., using a genetic-similarity dendrogram [84]) as well as interactive zoom to let users see details of the alleles (e.g., the actual nucleotides). A few tools are paired with algorithms for conducting genome-wide association studies (GWAS) that look for correlations between SNPs and measured traits of interest (e.g., [30]). However, there are still many limitations in current genomic visualizations in terms of support for the task of interactive visual comparison, although a few examples of research that focuses on comparison do exist: very early work developed diagrammatic methods for comparing DNA sequences [26]; Glueck and colleagues developed the PhenoBlocks visualization with the goal of supporting comparisons across phenotypes [29]; Mitra and colleagues developed methods for comparing metagenomic datasets [67]; and recent research by Ripken and colleagues conducted requirements interviews with biologists for working with genomic data in a VR environment for immersive analytics - the identified requirements included the need to compare data subsets, and the need to flexibly reorder and group the data [81].

A specific limitation of current SNP-haplotype viewers is that most tools have been primarily built for analysis of diploid genomes (e.g., humans or animals), whereas plants are often polyploid, with multiple copies of each gene [58]; breeders and researchers often need to consider the effects of all orthologous locations together during exploration, but simultaneous visual access to orthologues is not well supported in most tools. The drawbacks of current tools and our collaborations with plant breeders and genomic researchers led us to the new requirements and visual features described below.
## 3 APPLICATION DOMAIN

To contextualize the design of a visualization tool for SNPs, we provide an overview of the biological background for the domain, and a characterization of the dataset used in the visualization.

### 3.1 Biological Background

Genomics research involves the study of an organism's DNA in order to understand its structure, function, and evolution [74, 78]. An organism's complete set of DNA is called its genome, consisting of a large set of nucleotides that encode the instructions responsible for the organism's development and function [68]. There are four nucleotide bases - Adenine (A), Guanine (G), Cytosine (C) and Thymine (T). A variation in a single nucleotide in the genome at a specific position is called a Single Nucleotide Polymorphism or SNP. These variations tend to exist in a significant fraction of the population (1% or more), and the different variants of a particular SNP are called alleles. When a set of SNPs that are adjacent to each other in the genome are inherited together, they are referred to as a haplotype. Mapping the location of these haplotypes can help researchers classify different variant populations.

### 3.2 Data Characterization

SNP data can be represented in different types of files such as VCF (Variant Call Format) or Hapmap (Haplotype Map), and is often analyzed in combination with additional data sources such as a GFF (General Feature Format) file for the position of genes, and a phenotypic-trait table. At the most basic level, however, SNP data is ordered based on genomic position and classified according to the population line (variety), such that each SNP has the following features:

- Identifier: Every SNP is given a unique identifier that is common across all the different parental lines of a single organism.

- Possible Alleles: The different nucleotide variants that exist for a SNP; while most common SNPs have two alleles, triallelic SNPs have been identified in human genomes.

- Position: The location of a SNP in the genome, typically encoded relative to a chromosome.

- Value: The nucleotide variant present in the given population line; the value can be empty when the data is missing.
Table visualizations of SNPs use the inherent ordering, and then build a table at the genome, chromosome, or region level. Other datasets can supplement the SNP information to indicate, for example, the gene that the SNP is on, or copy number variations at that genomic location. In addition, other data sources can describe each variety - e.g., phenotypic traits such as flowering time, protein content, or seed size, or dendrogram trees that cluster the lines based on their genetic distance. These additional datasets are primarily used to control the order of the rows.
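To make the record structure described above concrete, the following is a minimal sketch of how such a table might be typed; all names (`SnpRecord`, `valueByLine`, etc.) are illustrative and are not taken from the tool's source.

```typescript
// Minimal sketch of the per-SNP record described above; names are
// illustrative rather than taken from the tool's actual source.
type Nucleotide = "A" | "C" | "G" | "T";

interface SnpRecord {
  id: string;                    // unique identifier shared across lines
  possibleAlleles: Nucleotide[]; // usually two; occasionally three
  chromosome: string;            // e.g., "chr3"
  position: number;              // base-pair offset within the chromosome
  // One allele pair per variety/line; null where data is missing.
  valueByLine: Map<string, [Nucleotide, Nucleotide] | null>;
}

// A SNP table is then the position-ordered list of records,
// plus the row (line) order currently chosen by the user.
interface SnpTable {
  snps: SnpRecord[];   // sorted by (chromosome, position)
  lineOrder: string[]; // current row ordering
}
```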
## 4 REQUIREMENTS FOR SNP-HAPLOTYPE ANALYSIS

We have been working with genomic researchers and plant breeders over the past five years to understand user tasks and requirements for visual exploration in genomic datasets. Our collaborating research groups are interested both in producing new crop variants that have improved agronomic or nutrition traits, and also in exploring genetic evidence for hypotheses about physiological mechanisms and plant evolution (e.g., (removed for anonymity)). Requirements analysis has been carried out in an iterative and collaborative fashion with these research groups, and we have developed and deployed several versions of our haplotype visualization - the prototypes have been used as a foundation for discussions about user tasks and visual-exploration needs. Based on our discussions, we have identified the following requirements that go beyond what is available in current SNP visualizations:

$R_1$. Flexible and fast re-ordering mechanisms. Genomic crop analysis involves looking for associations between SNPs, genes, and traits of interest - and to do this, users need to be able to quickly look at several arrangements of the SNP table. For example, ordering rows by genetic similarity, sorting by a measured trait, clustering by allele group for a particular SNP, or arranging rows manually (based on the user's knowledge of the varieties) are all common manipulation methods for our collaborators. In addition, it is valuable to be able to move between these different arrangements quickly and easily.

![01963e09-aaee-7674-9d5b-0f9ef7d3d570_4_152_149_1497_733_0.jpg](images/01963e09-aaee-7674-9d5b-0f9ef7d3d570_4_152_149_1497_733_0.jpg)

Figure 2: The SNP browser’s three main views: genome-level overview (top left); chromosome-level view with highlighted viewfinder rectangle (top right); region view with match/difference colouring (bottom left); region view with nucleotide colouring (bottom right).

$R_2$. Lightweight row comparisons. Because there are many ways in which varieties can be compared, users need lightweight mechanisms for quickly seeing how one row compares to another without changing the global ordering of the table. In addition to simple selection of a reference variety that changes the global visualization, there is a need for low-effort ways of comparing any two given rows. For example, in a table that is colour-coded based on differences from a single reference, users need a way to do a quick comparison of the differences between two varieties without changing the overall reference.

$R_3$. Comparisons between related columns. A genetic location in a plant genome is often related to other locations: for example, many plant species are polyploid (i.e., they have duplicate copies of genes elsewhere in the genome), and many genes also have dependencies with other parts of the genome (e.g., a gene in one location may be regulated by another). This means that users need to compare the columns of a table visualization as well as the rows - and need easy access to related locations, since a SNP table may be many thousands of columns wide.

$R_4$. Flexible encoding of differences. There are many ways in which genomic researchers think about the "difference" between two varieties: they may be interested simply in the existence of differences between a variety and a reference; they may want to see specific differences at the nucleotide level; they may be interested in exact matches between alleles or partial matches (e.g., heterozygous nucleotide pairs); or they may want to see 'cascading' differences that build up across multiple varieties. Alternate encodings (e.g., using colour maps) can show different kinds of differences, but users need to be able to switch between encodings quickly and easily.

$R_5$. Support for location awareness. The size of SNP table visualizations (e.g., tens of thousands of columns) means that it can be difficult for users to maintain awareness of where they are in the genome - a problem that is exacerbated by the fact that SNPs are simply ordered in the table, rather than positioned relative to their actual genomic location. As a result, it is critical that any visualization provide support for awareness of location, both at a high level (e.g., "what chromosome am I looking at?") and at a low level (e.g., "what gene is this SNP on, and how many neighboring SNPs are on the same gene?").

$R_6$. Managing and revisiting table configurations. With multiple ordering mechanisms, multiple colour encodings, and zoom and pan navigation, there are an enormous number of possible configurations for the table visualization. It can be very difficult for users to remember where they have been in this "configuration space" and how they can get back to a previous configuration (e.g., to show a pattern to a colleague or to revisit a previous candidate). Although provenance tools have been introduced for several visualization systems (e.g., [16, 34]), no current genomic visualization systems (to our knowledge) provide any support for this requirement.

## 5 SYSTEM OVERVIEW
Our haplotype browser is a web-based application for visualizing and exploring SNP groups across multiple varieties (parental lines) of crop species such as Canola (Brassica napus), lentil (Lens culinaris), or wheat (Triticum aestivum). The system provides several table visualizations at different genomic scales, with varieties in the table's rows and SNPs in the columns (see Figure 2). After the user selects or loads a datafile, the system displays a genome-wide overview of all varieties and SNPs, divided into chromosomes. Since there are often many thousands of SNPs for each variety (e.g., 30,000 in the Canola dataset of Figure 2), this table is highly compressed horizontally, and so primarily serves as a consistent frame of reference that helps the user orient themselves to the data and keep track of navigational cues such as the zoom region. The main user interaction at the overview level is to select a chromosome for closer analysis, which is then displayed as a second table below the overview.

The chromosome view uses the same tabular organization as the overview, but at a higher zoom level, where users can start to identify patterns in the data and locations for closer investigation - for example, the central region of the chromosome view in Figure 2 shows that there are a number of varieties that differ in terms of several contiguous SNPs. To zoom in further on this region, the chromosome view provides a viewfinder rectangle that selects a subset for a third view that shows only the region of interest (yellow rectangle in Figure 2).

![01963e09-aaee-7674-9d5b-0f9ef7d3d570_5_147_147_1502_601_0.jpg](images/01963e09-aaee-7674-9d5b-0f9ef7d3d570_5_147_147_1502_601_0.jpg)

Figure 4: Visualization of genes as pointed arrowheads indicating their position and orientation in the genome. The fine gray lines connect SNPs with their physical location in the genome.

The region view is shown at the bottom of Figure 2. When the zoom level is high enough in this view, the names of the SNPs are shown at the top of the table, and the actual base pairs are also drawn in the table cells. In this view, several additional interactions are available. The user can pan (by dragging) and adjust the zoom level (using a slider above the view), and can hover over any cell to show a tooltip with information about the SNP and its corresponding alleles. Button controls are provided above this view for the user to move left or right across the region in small step increments to investigate neighbouring SNP clusters. There is also a pair of input boxes to enter a specific start and end position if the user is targeting a known genetic locus. All three views use the same basic encoding scheme, as described in the following section.
## 6 VISUAL ENCODING DESIGN

SNP data is primarily visualized through a simple coloured tabular grid where the level of detail changes depending on the genomic resolution. In encoding this dataset we followed previous SNP genotype visualizers (e.g., [4, 65, 70]) that plot the parental lines horizontally with colored SNP markers running vertically. We extend this design space in our visualization by providing three panels: a main SNP panel and two supporting panels of associated data, with coordinated interaction support among all three for complex analysis tasks. The main panel, visualizing the SNP markers, is at the center of our visualization. To its left is the line ordering panel, which encodes the ordering of the parental lines either via a dendrogram tree or a heatmap of phenotypic traits. The final panel is positioned underneath the main panel and visualizes the genetic-to-physical location map of the SNPs and the corresponding genes around the loci. The visual encoding of all three panels is flexible and can change based on a variety of interaction and selection parameters.

### 6.1 Main SNP Panel

The main table visualization has several possible colour encodings - some of these are based on comparisons of each line to a reference line (shown at the top of the table), and some are based on underlying genetic information.

The first (and default) color scheme is an explicit encoding of differences to the reference line: if a SNP allele in a particular line matches the SNP allele in the reference line, it is painted blue, and if there is a mismatch, it is painted red. Since each allele is inherited from one parent, the alleles are always shown in pairs and can be homozygous (same) or heterozygous (different alleles in the pair). Since most SNPs have two possible alleles (for example A/C), the three possible genotypes are a homozygous pair of the first allele (AA), a homozygous pair of the second allele (CC), or a heterozygous pair of both (AC or CA). In the default color scheme, a SNP is considered to match if at least one allele in the pair is the same (and it is thus painted blue). The second color scheme is a variation of the first, and ignores partially-matching SNPs - i.e., a marker is painted blue only if the alleles from both parents match the alleles in the reference SNP.
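As an illustration of these first two schemes, the sketch below computes a cell's colour from an allele pair and the reference pair; the function and type names are hypothetical rather than the viewer's actual API.

```typescript
type AllelePair = [string, string] | null;

// Sketch of the first two colour schemes: partial match (the default)
// paints a cell blue if at least one allele matches the reference pair;
// exact match requires the whole (unordered) pair to agree. Missing
// data is painted white in both schemes.
function cellColour(
  cell: AllelePair,
  reference: AllelePair,
  exact: boolean
): "blue" | "red" | "white" {
  if (cell === null || reference === null) return "white";
  const match = exact
    ? sameUnorderedPair(cell, reference)
    : cell.some((allele) => reference.includes(allele));
  return match ? "blue" : "red";
}

// AC and CA count as the same genotype, so compare pairs unordered.
function sameUnorderedPair(a: [string, string], b: [string, string]): boolean {
  return (a[0] === b[0] && a[1] === b[1]) || (a[0] === b[1] && a[1] === b[0]);
}
```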
The third color scheme is used to investigate the homozygosity of SNP clusters - it paints a SNP marker blue if the pair of alleles within the SNP are the same, or red if they are different. This can help researchers isolate parental lines with a higher concentration of heterozygous SNP pairs. The fourth color scheme uses the underlying DNA, with the SNP marker colored based on the nucleotide bases present in the alleles. There are four basic colors, one for each of the four homozygous base pairs (AA, GG, CC and TT), and all heterozygous base pairs are painted purple. This colour scheme is shown in Figure 2 (bottom right), where many SNPs show two main groups with either the AA or the GG allele. A fifth and final color scheme is used to visualize similarity among lines in a cascaded fashion, with each line colored based on its similarity with all the lines above it. It is discussed in detail in the Dynamic Color Scheme subsection below, as it only works in certain scenarios depending on the number of lines being visualized. In all five color schemes, missing data - where a SNP is not present in a line or its allele is unknown - is painted white.

The organisation of the table visualization is based on the genomic resolution. At the whole-genome level, the SNPs are grouped into chromosomes in order to provide an overview of the dataset and also highlight large-scale patterns (e.g., large clusters of missing SNPs, either across the lines vertically or in a single line horizontally, indicating an error during sequencing or the SNP assaying process). It also provides spatial context for the user as they investigate SNP clusters in a specific region. When a chromosome has been selected, it is highlighted using a white background in the genome view. Canvas rendering at this level is optimized through an algorithm that filters out minuscule SNP variations to improve rendering speed. This optimization occurs automatically when the size of the rendered SNP markers goes below a single pixel.
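The exact filtering algorithm is not detailed here, but a simple version of this kind of sub-pixel aggregation might look like the following sketch; the "any mismatch wins" rule is our assumption for illustration, not necessarily the rule the tool uses.

```typescript
type CellState = "match" | "mismatch" | "missing";

// Sketch of a sub-pixel rendering optimisation: once a SNP column maps to
// less than one device pixel, runs of columns are collapsed into pixel-wide
// bins before drawing, bounding canvas work by the viewport width.
function binColumns(cells: CellState[], pixelWidth: number): CellState[] {
  if (cells.length <= pixelWidth) return cells; // at least one pixel per column
  const bins: CellState[] = [];
  const perBin = cells.length / pixelWidth;
  for (let p = 0; p < pixelWidth; p++) {
    const start = Math.floor(p * perBin);
    const end = Math.floor((p + 1) * perBin);
    const run = cells.slice(start, end);
    // Aggregation rule (assumed): a bin shows a mismatch if it contains
    // any mismatch, else a match if it contains any match, else missing.
    if (run.includes("mismatch")) bins.push("mismatch");
    else if (run.includes("match")) bins.push("match");
    else bins.push("missing");
  }
  return bins;
}
```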
In the chromosome view, painting of the SNP markers is the same as at the genome level, but with the addition of a viewfinder window that allows selection of a region for closer analysis. In the region view, SNP markers are painted using the chosen colour scheme, along with a label in each cell indicating the pair of alleles in the SNP. At this resolution, additional markers can also be painted on top of the SNPs, such as copy number variations. These are either insertions or deletions in genes at specific locations across the genome, and are highlighted as circles, with white circles indicating insertions and red circles indicating deletions, as shown in Figure 3.

### 6.2 Line Ordering Panel

The ordering of the different parental lines is important to researchers because several insights can be gained by identifying similar regions in the table's columns - the extent of similarity in the SNP clusters around a locus across the lines is an indication of shared ancestry or origin between the lines. By default, our visualization system orders the lines based on a dendrogram tree provided by the user. This tree structure visualizes every parental line as a leaf node in a tree, and clusters lines based on evolutionary distance. This arrangement can help researchers study the SNPs of a particular subset of the lines that are similar to each other.

The other ordering mechanism consists of heatmaps of different phenotypic traits for each of the parental lines. The trait map contains one column for each trait (e.g., seed size or protein content), with colouring based on a heatmap of the range of values for that trait. The Viridis color palette is used for the heatmaps for easier distinction between the lines [71]. The lines can be ordered by sorting them based on any of the column values, which places lines with similar phenotypes closer to each other. This feature is explored further in the interaction design section below.

### 6.3 Gene Loci Panel

SNPs in the main view are ordered from left to right based on their genetic position in the genome. However, because SNPs may be unevenly distributed across the genome, the position of a SNP's column does not match its physical location in the genome. This makes it difficult to visually indicate additional information regarding the genetic loci of the SNPs. To address this problem, we provide a visual map underneath the SNP view that shows the entire genomic scale of investigation and connects every SNP to its actual physical location in the genome, as shown in Figure 4. Additional datasets like gene density maps or gene markers are then placed underneath this physical map so that they correspond to the locations of the SNPs. For the whole-genome view, this panel is hidden, as the density of lines makes it difficult to discern positional information. In the chromosome view, the panel is used to show a simple scale indicating the actual physical location of the SNPs in terms of number of base pairs, and can be used to highlight additional datasets like gene density tracks. In the region view, the panel shows individual genes located near the loci of the SNPs. The genes are visualized as horizontal arrows, with the direction of the arrow indicating the orientation of the gene (see Figure 4). Clicking on a gene arrow shows the gene ID and additional information (e.g., the function of the gene or the protein it encodes).
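A sketch of how the connector lines in Figure 4 can be computed is shown below: uniform column centres on the table side, linearly scaled base-pair positions on the physical side. The canvas details and parameter names are illustrative assumptions, not the tool's actual code.

```typescript
// Sketch of the genetic-to-physical connector lines: each SNP column's
// centre (uniform spacing) is joined to its scaled base-pair location on
// the physical axis drawn below the table.
function drawConnectors(
  ctx: CanvasRenderingContext2D,
  positions: number[], // base-pair positions of the visible SNPs, ascending
  regionStart: number, // physical extent shown in the panel
  regionEnd: number,
  width: number,       // pixel width shared by the table and the axis
  tableBottom: number, // y coordinate of the bottom of the SNP table
  axisY: number        // y coordinate of the physical axis
): void {
  ctx.strokeStyle = "rgba(128, 128, 128, 0.6)"; // fine gray lines
  const colWidth = width / positions.length;
  positions.forEach((bp, i) => {
    const xTable = (i + 0.5) * colWidth; // uniform column centre
    const xPhys =
      ((bp - regionStart) / (regionEnd - regionStart)) * width; // scaled bp
    ctx.beginPath();
    ctx.moveTo(xTable, tableBottom);
    ctx.lineTo(xPhys, axisY);
    ctx.stroke();
  });
}
```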
## 7 INTERACTION FEATURE DESIGN

Here we outline the different interactive design features in our visualization that address the six major requirements.

### 7.1 Dynamic Ordering of Lines

Users are given several options to order the different parental/variety lines. By default, lines are ordered according to a dendrogram tree based on an input file provided by the user. This mechanism clusters lines that are evolutionarily similar. If a dendrogram file is not available, the lines are sorted automatically based on their SNP similarity with the reference line. This approach ensures that matching SNP clusters get pushed to the top of the main view while the lines that differ the most are pushed towards the bottom. Additionally, users are given the option of manually selecting a subset of lines through a multi-select dropdown list. The order of lines in this case is determined by the order in which the lines are selected. This gives researchers the option to investigate in greater detail specific patterns that they might have observed in the dataset, by comparing only those lines.
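A minimal sketch of this fallback similarity ordering might look like the following; the scoring helper and names are hypothetical, assumed to be provided elsewhere.

```typescript
// Sketch of the fallback ordering used when no dendrogram is supplied:
// lines are scored by the fraction of SNPs whose allele pair matches the
// reference line, then sorted so the most similar lines sit at the top.
function orderBySimilarity(
  lines: string[],
  reference: string,
  countMatches: (line: string, other: string) => number, // matching SNPs
  totalSnps: number
): string[] {
  const score = new Map(
    lines.map((l) => [l, countMatches(l, reference) / totalSnps])
  );
  // Descending similarity: most similar lines first (top of the view).
  return [...lines].sort((a, b) => score.get(b)! - score.get(a)!);
}
```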
If a file containing phenotype trait values is provided by the user, then the lines can also be ordered based on these traits. Users first select the traits they are interested in mapping from all available traits in the file (the order of selection determines the placement of the trait columns from left to right). Users can then order the lines based on a specific trait value. This ordering can also be changed by clicking the column head of any phenotype trait in the trait map.

### 7.2 Navigating Multiple Genomic Resolutions

When investigating large-scale datasets, users need to be able to navigate quickly while still maintaining contextual information regarding their position in the dataset. We provide location context through the three coordinated views described above (genome, chromosome, region). Navigating from genome to chromosome involves clicking on the desired chromosome, and selecting a region then involves positioning the viewfinder window. The viewfinder is translucent by default to ensure that it does not occlude the view of the chromosome, and has a darker border at the bottom indicating the region that has been selected (Figure 3). The user can drag the viewfinder and adjust its left and right extents with the mouse.

In scenarios where SNP density is high in a chromosome, it might be difficult to use the viewfinder to zoom into a small enough region due to the limited size of the window. To address this issue, a navigation panel is available in the region view to aid users in controlling the region of interest. It contains two input boxes to enter genomic start and end positions (base-pair locations) from the start of the chromosome. This allows researchers to look at all SNPs near a specific gene locus (e.g., one that corresponds to a particular protein). The view also includes navigation buttons that let the user move the region in small incremental steps, and a slider provides additional control over the zoom level of the region view. As the user interacts with the navigation panel, the corresponding changes are reflected in the viewfinder in the chromosome view, maintaining location awareness between the views and from the table to the genome.
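Translating the panel's base-pair inputs into a range of SNP columns reduces to a search over the sorted position array; a sketch with hypothetical names is shown below.

```typescript
// Sketch of translating the navigation panel's base-pair inputs into a SNP
// column range: binary search over the sorted position array finds the
// first and last SNPs inside [startBp, endBp].
function snpIndexRange(
  positions: number[], // ascending base-pair positions for one chromosome
  startBp: number,
  endBp: number
): [number, number] {
  // Index of the first element >= target (standard lower bound).
  const lowerBound = (target: number): number => {
    let lo = 0;
    let hi = positions.length;
    while (lo < hi) {
      const mid = (lo + hi) >> 1;
      if (positions[mid] < target) lo = mid + 1;
      else hi = mid;
    }
    return lo;
  };
  const first = lowerBound(startBp);
  const last = lowerBound(endBp + 1) - 1; // last index with position <= endBp
  return [first, last];
}
```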
### 7.3 Dynamic Color Scheme

Apart from the four basic color schemes discussed above, we also offer users a novel way to compare a small subset of lines through a cascading waterfall color pattern. When users manually select fewer than ten varieties for comparison, the visualization changes into a dynamic color scheme for visualizing accumulating differences, instead of the standard blue and red scheme for matches and mismatches. In the cascade color scheme, every line is compared with all the lines above it instead of just the one reference line at the top. To encode the similarity pattern, every line is first assigned a unique color. Then all the SNPs in that line are compared with the lines vertically above it, starting from the top. If a SNP marker matches any of the lines above it, the color of the topmost matching line is assigned to the SNP. If the marker doesn't match any of the lines above it, it is considered novel and is painted in the unique color assigned to the line. This ensures a cascading waterfall style of coloring, such that all the SNPs in the first line have the same color because there are no SNPs above them. In the second line, all the SNPs that match the first line are painted in the color of the first line, and all the SNPs that do not match are painted in the color of the second line. This flow continues until the final line is a mixture of the colors of all the lines above it, depending on the precedence of the SNP markers present in it. This offers researchers insight into the origin of a unique cluster of SNPs. However, this color scheme only works for ten lines or fewer, due to the limits on the number of colours that users can reliably distinguish.
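The cascade logic can be summarized in a few lines; the sketch below assumes allele pairs have been canonicalized to strings (e.g., "AC") so that equality comparison is direct, and all names are illustrative.

```typescript
// Sketch of the cascade scheme: each selected line gets its own colour,
// and every cell takes the colour of the topmost earlier line with a
// matching allele pair, falling back to the line's own colour for novel
// SNPs. rows[line][snp] holds a canonicalized pair string or null.
function cascadeColours(
  rows: (string | null)[][],
  palette: string[] // one distinguishable colour per line (<= 10 lines)
): string[][] {
  return rows.map((row, li) =>
    row.map((pair, si) => {
      if (pair === null) return "white"; // missing data stays white
      for (let above = 0; above < li; above++) {
        if (rows[above][si] === pair) return palette[above]; // topmost match
      }
      return palette[li]; // novel SNP: the line's own colour
    })
  );
}
```

Note that the first line trivially takes its own colour everywhere (there are no lines above it), which reproduces the "waterfall" behaviour described above.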
![01963e09-aaee-7674-9d5b-0f9ef7d3d570_7_154_153_1491_565_0.jpg](images/01963e09-aaee-7674-9d5b-0f9ef7d3d570_7_154_153_1491_565_0.jpg)

Figure 5: Split view demonstrating comparison of SNP clusters across two different genomic regions in the same chromosome. Each view has a SNP column pinned to the left, and the line NAM 25 is selected and highlighted across both views.

### 7.4 Row and Column Highlighting

In large datasets with many rows and columns, it can be hard for users to navigate across the SNP view to identify a specific SNP marker and its allele annotations. To support users with this task, we offer a row highlighting option that lets users highlight a specific row by clicking on the line name (this draws white guidelines across the SNP view, as shown in Figure 5). To further aid users, we also offer a tooltip that shows details such as the corresponding line name and the SNP index of a specific marker when the mouse hovers over it. The guidelines and tooltips greatly improve the user's ability to trace SNPs along a row - and the highlighting can also act as a temporary landmark that allows visual inspection of rows above and below the guidelines.

Another issue that can occur during visual comparison of two SNP columns is the distance between their loci. If the SNPs are far enough apart, they cannot be viewed together in the region window (without zooming out to the point where too much detail is lost). To solve this problem, we introduced a column pinning mechanism that lets users pin a SNP column to the beginning of the region view. The pinned SNP column is also highlighted in the region view by changing the color of the allele annotations on each marker in the column from white to black. Users can then pan across the chromosome to a different location and compare the SNP columns in that region with the pinned SNP column (see Figure 5, in which two SNP columns have been pinned in the region view). An additional advantage of this feature is that it also lets users temporarily mark and highlight a SNP column that might have caught their interest for further investigation.

### 7.5 Multi-Region Analysis

Although SNPs are inherited in clusters around a specific gene locus, several SNP clusters across the genome can be related to each other due to gene duplication or dependency. This is a common issue for polyploid plants, which may have several duplicated copies of the same gene. The regulation and expression of these genes can vary based on the SNP clusters within or around them, which means that researchers often have to jump between multiple regions in the genome to compare these SNP clusters. While the column-pinning feature discussed above helps in this situation to an extent, it only lets users compare a single column at a time, meaning they lose the context of the neighbouring SNPs in the cluster. To address this problem, we implemented a split-screen view that splits the region view into two parts, with each part focusing on a different region. All of the other features discussed above, such as row and column highlighting, carry over into these split views. This gives users the option to pin two different SNP columns and compare their neighbouring clusters in a side-by-side view (as shown in Figure 5).

### 7.6 Lightweight Comparison Preview

Based on the feedback collected from our collaborators, one of the most commonly used selection features is the ability to switch the reference row at the top and compare a different line with the other lines. In certain datasets, like lentils (Lens culinaris), the number of lines being studied is quite high due to the large number of possible cultivars and variants. This is a general problem in most food crops, as they are cultivated across the world in a variety of environmental conditions with different outcomes due to the selective breeding process. This means breeders often need a way to carry out lightweight row comparisons across the lines 'on the fly', without switching the reference line of the entire visualization. To address this issue, we offer a preview mode along with the row highlighting feature. Users can first select a row in the SNP view by clicking its line name, which highlights the entire row with white guidelines. Users can then hover their mouse over any other row to perform a quick comparison between the two rows; this temporarily updates the coloring of the hovered row. The coloring switches back to its default state once the mouse leaves the row. This way, researchers can quickly compare selected rows without having to switch the main reference row at the top and update the whole SNP view. Similarly, we also offer previews for the interactions in the trait map that re-order the table: users can hover their mouse over the column head of a trait column to see, in a small floating window next to the mouse cursor, a quick preview of what the SNP view would look like if the lines were ordered based on that phenotype. The preview goes away as soon as the mouse is moved away from the column head.
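A sketch of how such a preview can be wired up is shown below; the handler structure and names are illustrative assumptions, not the tool's actual implementation.

```typescript
// Sketch of the lightweight comparison preview: hovering a row recolours
// only that row against the currently highlighted row, and leaving the
// row restores the colours derived from the global reference line.
function attachPreview(
  rowEl: HTMLElement,
  repaintRow: (row: string, reference: string) => void,
  row: string,
  highlighted: () => string | null, // currently highlighted line, if any
  globalReference: string
): void {
  rowEl.addEventListener("mouseenter", () => {
    const h = highlighted();
    if (h && h !== row) repaintRow(row, h); // temporary pairwise comparison
  });
  rowEl.addEventListener("mouseleave", () => {
    repaintRow(row, globalReference); // revert to the table-wide encoding
  });
}
```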
![01963e09-aaee-7674-9d5b-0f9ef7d3d570_8_154_149_1494_418_0.jpg](images/01963e09-aaee-7674-9d5b-0f9ef7d3d570_8_154_149_1494_418_0.jpg)

Figure 6: A preview of the SNP view is shown in the top left corner above the phenotype trait map when the user hovers their mouse over a particular phenotype. The preview shows what the SNP view would look like if the lines were sorted using that particular phenotype.

### 7.7 Revisitation Support Through Snapshots

The exploratory nature of our visualization tool can place high demands on users' spatial cognition, since users interact with the dataset at multiple resolutions and under complex filtering scenarios. This problem becomes particularly evident when users have to rely on context switching between different viewpoints for visual comparison when looking at SNP markers in different regions. To address this issue, our system maintains an in-memory store of the sequence of actions that led to the current state of the visualization in the interface. Each of these memory states is stored along with a thumbnail image (snapshot) of the visualization at that instant. A floating snapshot panel, minimized by default, is available for users to pull up and explore prior states of the visualization. Users can then click on any of these snapshots to return to the state of the visualization at that prior point in time. This provides users with a lightweight history tracking mechanism that can help them retrace their steps during data exploration. Snapshots are automatically tagged with a note that indicates the chromosome name and the start and end positions of the region view. This note can be edited by users to record other points of interest if needed. The snapshot feature also provides users with a novel way of interacting with the system, by creating snapshots of multiple regions of interest and going back and forth between them for quick visual lookup and comparison. The system includes mechanisms for automatic creation of snapshots (e.g., if the user stays in a particular configuration for 30 seconds) as well as manual creation (through a keyboard shortcut).
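A minimal sketch of such a snapshot store is shown below; the fields, note format, and class name are illustrative assumptions based on the description above.

```typescript
// Sketch of the snapshot store: each entry captures the view configuration
// plus a thumbnail, is auto-tagged with its genomic extent, and can be
// re-applied later.
interface Snapshot {
  state: { chromosome: string; startBp: number; endBp: number; [k: string]: unknown };
  thumbnail: string; // data-URL of the canvas at capture time
  note: string;      // auto-generated, user-editable
  takenAt: number;
}

class SnapshotStore {
  private snapshots: Snapshot[] = [];

  capture(state: Snapshot["state"], canvas: HTMLCanvasElement): Snapshot {
    const snap: Snapshot = {
      state: { ...state },
      thumbnail: canvas.toDataURL("image/png"),
      note: `${state.chromosome}: ${state.startBp}-${state.endBp}`,
      takenAt: Date.now(),
    };
    this.snapshots.push(snap);
    return snap;
  }

  // Re-apply a stored configuration via a caller-supplied function.
  restore(index: number, apply: (s: Snapshot["state"]) => void): void {
    const snap = this.snapshots[index];
    if (snap) apply(snap.state);
  }
}
```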
## 8 ITERATIVE REFINEMENT, TESTING, AND CURRENT USE

The design of our visualization system was iteratively refined over a period of two years through multiple rounds of feedback from our research collaborators as they used our system to explore their different datasets. During this period, our system was also stress-tested with larger datasets (e.g., 29,000 SNP markers across 1000 lines of barley, and the first 10,000 SNP markers across 328 lentil varieties). An example visualization of the large lentil dataset is shown in Figure 8; this demonstrates the usability of our system even with very large tables - even at this scale, the visualization shows genome-level patterns such as lines with an extensive set of missing markers (Figure 8(b)) or SNP clusters that are completely different across the majority of lines (Figure 8(a)). Our system is also in use by a group of plant breeders to showcase the diversity of agronomically important traits among a population of Canola founder lines, and has been adapted for several other use cases, including exploration of genotypes for Blackleg, a common oilseed pathogen. Our tool is open-source and freely available [repository removed for review], and has been integrated into a major North American pulse crop database to visualize the differences among their various cultivars.

## 9 DISCUSSION AND FUTURE WORK

In the following sections, we consider the relationship of our requirements and techniques to previous work on table visualization and visual comparison, discuss ways that our techniques can be applied to datasets outside the domain of genomics, and outline a set of directions for future work.

### 9.1 Requirements and Techniques in Context of Previous Work

Working in real-world collaboration with genomic researchers and plant breeders means that our SNP-haplotype viewer implements some interaction techniques that are shared with previous systems - for example, two of our requirements match those identified in Ripken's interviews with biologists [81], although Ripken's research took a broader view and our requirements are thus more focused on the comparison tasks themselves; similarly, several systems have provided techniques for clustering, sorting, and manual row rearrangement (e.g., [46, 52, 66, 94, 99]).

However, several techniques and features are novel (or have novel adaptations to fit the scenario of large-scale SNP tables). First, our techniques for row and column comparisons are an advance in terms of user effort: the lightweight row comparisons, column pinning, split-screen views, and visual previews of column sorting substantially reduce the number of steps needed to carry out a visual comparison. Reducing effort in exploratory visual analysis is critical: although there may be ways to achieve the comparison using standard techniques, it is important to provide low-effort mechanisms so that users can follow exploratory paths without needing to think about multiple steps in the UI. Our goals here are similar to those of Tominski's CompaRing system [88], although his approach used juxtaposition whereas ours uses an explicit encoding of difference. Second, techniques such as providing three persistent zoom views that follow the structure of the genome, and visual tracks that indicate genomic location as well as gene commonality, assist the user in maintaining location awareness (since table locations are not well matched to actual genomic locations). Third, providing snapshots to track, compare, share, and revisit table configurations is an extension to previous work on visualization provenance (e.g., [34, 40]) that broadens the focus from communication and storytelling to supporting the basic mechanics and processes of navigating through the complex parameter space of table configuration.
228
+
229
+ ![01963e09-aaee-7674-9d5b-0f9ef7d3d570_9_154_257_1491_406_0.jpg](images/01963e09-aaee-7674-9d5b-0f9ef7d3d570_9_154_257_1491_406_0.jpg)
230
+
231
+ Figure 7: A preview of the lightweight row comparison feature. When users hover their mouse over a row after highlighting a different row, the hovered row is marked with a single guide line and its colouring is updated to reflect similarity with the other highlighted row instead of the reference line at the top.
232
+
233
+ ![01963e09-aaee-7674-9d5b-0f9ef7d3d570_9_154_998_1494_906_0.jpg](images/01963e09-aaee-7674-9d5b-0f9ef7d3d570_9_154_998_1494_906_0.jpg)
234
+
235
+ Figure 8: SNP View visualizing the similarities between 10K SNPs in a reference line across 328 lentil varieties. a) A cluster of SNPs that don't match the reference line across most of the varieties. b) One lentil variety appears as an almost white line across the entire region, indicating missing data across its entirety, possibly due to sequencing error.
236
+
237
+ ![01963e09-aaee-7674-9d5b-0f9ef7d3d570_10_151_147_1497_563_0.jpg](images/01963e09-aaee-7674-9d5b-0f9ef7d3d570_10_151_147_1497_563_0.jpg)
238
+
239
+ Figure 9: An example of the snapshot feature, which lets users store their interaction history as a series of snapshots with thumbnails showing the state of the visualization when each snapshot was taken. The panel is minimized by default but can be opened up as seen here; it contains three snapshots.
240
+
241
+ ### 9.2 Generalizing to Other Types of Wide Datasets
242
+
243
+ Although we have focused on the domain of genomic research and the specific needs of our collaborators, we believe that several of our requirements and interaction techniques will be applicable to other domains as well. Column-based comparison tools will be useful whenever the data has columnar dependencies or links between columns. For example, if columns are used for temporal data, there may be cyclic relationships that need to be brought closer together for investigation (e.g., natural cycles such as seasons, or links created by external phenomena such as temperature data during sunspot years). Flexible row comparison mechanisms will also be important in any dataset where there are many entities, and where comparisons need to be made between arbitrary rows as well as to an obvious reference row. For example, a dataset of baseball players (e.g., as was used in the Table Lens [79]) does not have a single clear reference, and it is likely that many different pairs of players could be compared for a given task. The idea of multiple flexible encodings can also be useful in other datasets - these allow users to cycle quickly through different perspectives on the comparison, gaining a broader view of differences. In particular, our 'cascading differences' encoding (see the sketch below) could be useful in showing the accumulation of changes when rows represent successive versions of a complex entity (e.g., a software code base). Finally, a configuration-snapshot mechanism should be widely applicable in any visualization where users reorganize the view frequently and need to revisit configurations that they have previously explored.
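To make the generalized 'cascading differences' idea concrete, the sketch below shows one plausible reading of the encoding, in which each cell is marked by whether it changed relative to the row directly above it, so that changes accumulate down successive versions of an entity. This is an illustrative interpretation for the software-versioning example, not our tool's exact implementation.

```ts
// Illustrative sketch of a 'cascading differences' encoding: each row is
// compared against its immediate predecessor rather than a fixed reference,
// so runs of change are visible down successive versions.

type Cell = string | null;

// Returns a boolean grid the same shape as the input: true marks a cell
// that differs from the cell in the row above it (the first row is all false).
function cascadingDiff(rows: Cell[][]): boolean[][] {
  return rows.map((row, r) =>
    row.map((value, c) => r > 0 && value !== rows[r - 1][c]),
  );
}

// e.g. cascadingDiff([["A","A"],["A","G"],["A","G"]])
//   -> [[false,false],[false,true],[false,false]]
```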
244
+
245
+ ### 9.3 Future Research Directions
246
+
247
+ Our future research work will involve activities to improve the SNP viewer as part of our ongoing collaboration, and new projects to explore broader visualization issues raised by our experience. In the SNP visualization system, we will add algorithms for pattern mining in the table data (e.g., [9, 21, 55, 75]) and tools for comparing these patterns to external evidence such as GWAS results. We will also add support for additional context tracks (e.g., to provide GWAS results or other gene-centric measurements such as expression level) - aligning GWAS results such as a Manhattan plot with the table visualization can provide a bridge between algorithmic approaches and visual analysis [72], and gives users a set of starting points for their exploration. We also plan to extend the interactions available with the configuration-snapshot tool (e.g., to provide explicit encoding of differences between two configurations [11, 69, 90]).
248
+
249
+ In the broader visualization context, our initial goal is to test our new interaction techniques in other types of wide tabular datasets, and broaden our interaction requirements to encompass new tasks and comparison activities: for example, we will work with datasets that use continuous rather than discrete values (requiring new encodings for the table), we will test our tools with large-scale time series that contain cyclic column dependencies, and we will add additional techniques to work with table subsets [33, 81]. We also plan to follow up on work that has looked at the details of visual comparisons (e.g., [47, 53, 96]) and assess the components of visual comparison in table visualizations (and support for these components) at a more fine-grained level.
250
+
251
+ ## 10 CONCLUSION
252
+
253
+ Analytics tasks in large-scale table visualizations involve comparisons and identification of patterns across rows and columns, but these tasks become more difficult when tables are large - as is the case for SNP analyses in genomic research. Current SNP visualizations are limited in their support for complex analytic tasks in wide-scale tables - both because they do not focus on interaction, and because they do not address issues raised by tables with thousands or tens of thousands of columns. In collaboration with genomic researchers and plant breeders, we have identified six new interaction requirements that will help to support visual analytics tasks with wide-scale SNP datasets. The requirements cover needs for flexible arrangements of the table, lightweight comparisons of both rows and columns, flexible visual encodings, and the ability to save table configurations. We developed a new SNP-haplotype viewer that implements interaction techniques for each of our proposed requirements; the tool has been in continuous and successful use by our collaborators over several years. Our work contributes both a better understanding of the needs for large-scale visual analysis in table visualizations, and specific interaction techniques that can address those needs.
254
+
255
+ ## REFERENCES
256
+
257
+ [1] D. Albers, C. Dewey, and M. Gleicher. Sequence surveyor: Leveraging overview for scalable genomic alignment visualization. IEEE Transactions on Visualization and Computer Graphics, 17(12):2392-2401, 2011.
260
+
261
+ [2] D. Archambault, H. Purchase, and B. Pinaud. Animation, small multiples, and the effect of mental map preservation in dynamic graphs. IEEE transactions on visualization and computer graphics, 17(4):539-552, 2011.
264
+
265
+ [3] V. Bandi and C. Gutwin. Interactive exploration of genomic conservation. In Graphics Interface 2020, 2020.
266
+
267
+ [4] J. C. Barrett. Haploview: Visualization and analysis of snp genotype data. Cold Spring Harbor Protocols, 2009(10):pdb-ip71, 2009.
268
+
269
+ [5] F. Beck, M. Burch, C. Vehlow, S. Diehl, and D. Weiskopf. Rapid serial visual presentation in dynamic graph visualization. In 2012 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), pp. 185-192. IEEE, 2012.
270
+
271
+ [6] M. Behrisch, J. Davey, S. Simon, T. Schreck, D. Keim, and J. Kohlhammer. Visual comparison of orderings and rankings. In EuroVis, 2013.
272
+
273
+ [7] P. Berger, H. Schumann, and C. Tominski. Visually exploring relations between structure and attributes in multivariate graphs. In 2019 23rd International Conference Information Visualisation (IV), pp. 261-268. IEEE, 2019.
274
+
275
+ [8] J. Bertin. Graphics and graphic information processing. de Gruyter, 1981.
276
+
277
+ [9] M. Blumenschein, M. Behrisch, S. Schmid, S. Butscher, D. R. Wahl, K. Villinger, B. Renner, H. Reiterer, and D. A. Keim. Smartexplore: Simplifying high-dimensional data analysis through a table-based visual analytics approach. In 2018 IEEE Conference on Visual Analytics Science and Technology (VAST), pp. 36-47. IEEE, 2018.
278
+
279
+ [10] R. Brath and M. Peters. Excel visualizer: one click wysiwyg spreadsheet visualization. In Tenth International Conference on Information Visualisation (IV'06), pp. 68-73. IEEE, 2006.
280
+
281
+ [11] S. Bremm, T. von Landesberger, M. Heß, T. Schreck, P. Weil, and K. Hamacher. Interactive visual comparison of multiple trees. In 2011 IEEE Conference on Visual Analytics Science and Technology (VAST), pp. 31-40. IEEE, 2011.
282
+
283
+ [12] P. Carvalho, P. Hitzelberger, B. Otjacques, F. Bouali, and G. Venturini. Information visualization for csv open data files structure analysis. In IVAPP 2015-6th International Conference on Information Visualization Theory and Applications; VISIGRAPP, Proceedings, March, pp. 101-108, 2015.
284
+
285
+ [13] M. Chen, R. Botchen, R. Hashim, D. Weiskopf, T. Ertl, and I. Thornton. Visual signatures in video visualization. IEEE Transactions on Visualization and Computer Graphics, 12(5):1093-1100, 2006.
286
+
287
+ [14] W. S. Cleveland and R. McGill. Graphical perception: Theory, experimentation, and application to the development of graphical methods. Journal of the American statistical association, 79(387):531-554, 1984.
288
+
289
+ [15] L. A. Cooper. Individual differences in visual comparison processes. Perception & Psychophysics, 19(5):433-444, 1976.
290
+
291
+ [16] Z. Cutler, K. Gadhave, and A. Lex. Trrack: A library for provenance-tracking in web-based visualizations. In 2020 IEEE Visualization Conference (VIS), pp. 116-120. IEEE, 2020.
292
+
293
+ [17] S. Deschamps, V. Llaca, and G. D. May. Genotyping-by-sequencing in plants. Biology, 1(3):460-483, 2012.
294
+
295
+ [18] C. Diesh, G. J. Stevens, P. Xie, T. D. J. Martinez, E. A. Hershberg, A. Leung, E. Guo, S. Dider, J. Zhang, C. Bridge, et al. Jbrowse 2: A modular genome browser with views of synteny and structural variation. BioRxiv, pp. 2022-07, 2022.
296
+
297
+ [19] K. Dobashi, C. P. Fulford, M.-F. G. Lin, et al. A heat map generation to visualize engagement in classes using moodle learning logs. In 2019 4th international conference on information technology (InCIT), pp. 138-143. IEEE, 2019.
298
+
299
+ [20] N. C. Durand, J. T. Robinson, M. S. Shamim, I. Machol, J. P. Mesirov, E. S. Lander, and E. L. Aiden. Juicebox provides a visualization system for hi-c contact maps with unlimited zoom. Cell systems, 3(1):99-101, 2016.
300
+
301
+ [21] C. Eichner, H. Schumann, and C. Tominski. Making parameter dependencies of time-series segmentation visually understandable. In Computer Graphics Forum, vol. 39, pp. 607-622. Wiley Online Library, 2020.
302
+
303
+ [22] G. Ellis and A. Dix. A taxonomy of clutter reduction for information visualisation. IEEE transactions on visualization and computer graphics, 13(6):1216-1223, 2007.
306
+
307
+ [23] P. Fournier-Viger, W. Gan, Y. Wu, M. Nouioua, W. Song, T. Truong, and H. Duong. Pattern mining: Current challenges and opportunities. In Database Systems for Advanced Applications. DASFAA 2022 International Workshops: BDMS, BDQM, GDMA, IWBT, MAQTDS, and PMBD, Virtual Event, April 11-14, 2022, Proceedings, pp. 34-49. Springer, 2022.
310
+
311
+ [24] J. Fuchs, F. Fischer, F. Mansmann, E. Bertini, and P. Isenberg. Evaluation of alternative glyph designs for time series data in a small multiple setting. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 3237-3246, 2013.
312
+
313
+ [25] K. Furmanova, S. Gratzl, H. Stitz, T. Zichner, M. Jaresova, M. Ennemoser, A. Lex, and M. Streit. Taggle: Scalable visualization of tabular data through aggregation. arXiv preprint arXiv:1712.05944, 6, 2017.
314
+
315
+ [26] A. J. Gibbs and G. A. McIntyre. The diagram, a method for comparing sequences: Its use with amino acid and nucleotide sequences. European journal of biochemistry, 16(1):1-11, 1970.
316
+
317
+ [27] M. Gleicher. Considerations for visualizing comparison. IEEE transactions on visualization and computer graphics, 24(1):413-423, 2017.
318
+
319
+ [28] M. Gleicher, D. Albers, R. Walker, I. Jusufi, C. D. Hansen, and J. C. Roberts. Visual comparison for information visualization. Information Visualization, 10(4):289-309, 2011.
320
+
321
+ [29] M. Glueck, P. Hamilton, F. Chevalier, S. Breslav, A. Khan, D. Wigdor, and M. Brudno. Phenoblocks: Phenotype comparison visualizations. IEEE Transactions on Visualization and Computer Graphics, 22(1):101-110, 2015.
322
+
323
+ [30] J. R. González, L. Armengol, X. Solé, E. Guinó, J. M. Mercader, X. Estivill, and V. Moreno. Snpassoc: an R package to perform whole genome association studies. Bioinformatics, 23(5):654-655, 2007.
324
+
325
+ [31] J. Görtler, F. Hohman, D. Moritz, K. Wongsuphasawat, D. Ren, R. Nair, M. Kirchner, and K. Patel. Neo: Generalizing confusion matrix visualization to hierarchical and multi-output labels. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1-13, 2022.
326
+
327
+ [32] R. Gove, N. Gramsky, R. Kirby, E. Sefer, A. Sopan, C. Dunne, B. Shneiderman, and M. Taieb-Maimon. Netvisia: Heat map & matrix visualization of dynamic social network statistics & content. In 2011 IEEE third international conference on privacy, security, risk and trust and 2011 IEEE third international conference on social computing, pp. 19-26. IEEE, 2011.
328
+
329
+ [33] S. Gratzl, N. Gehlenborg, A. Lex, H. Pfister, and M. Streit. Domino: Extracting, comparing, and manipulating subsets across multiple tabular datasets. IEEE transactions on visualization and computer graphics, 20(12):2023-2032, 2014.
330
+
331
+ [34] S. Gratzl, A. Lex, N. Gehlenborg, N. Cosgrove, and M. Streit. From visual exploration to storytelling and back again. In Computer Graphics Forum, vol. 35, pp. 491-500. Wiley Online Library, 2016.
332
+
333
+ [35] S. Gratzl, A. Lex, N. Gehlenborg, H. Pfister, and M. Streit. Lineup: Visual analysis of multi-attribute rankings. IEEE transactions on visualization and computer graphics, 19(12):2277-2286, 2013.
334
+
335
+ [36] Y. Guo, S. Guo, Z. Jin, S. Kaul, D. Gotz, and N. Cao. Survey on visual analysis of event sequence data. IEEE Transactions on Visualization and Computer Graphics, 28(12):5091-5112, 2021.
336
+
337
+ [37] H. L. Han and M. A. Nacenta. The effect of visual and interactive representations on human performance and preference with scalar data fields. Proceedings of Graphics Interface 2020, 2020.
338
+
339
+ [38] A. Haug-Baltzell, S. A. Stephens, S. Davey, C. E. Scheidegger, and E. Lyons. Synmap2 and synmap3d: web-based whole-genome synteny browsers. Bioinformatics, 33(14):2197-2198, 2017.
340
+
341
+ [39] J. Heer, N. Kong, and M. Agrawala. Sizing the horizon: the effects of chart size and layering on the graphical perception of time series visualizations. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 1303-1312, 2009.
342
+
343
+ [40] J. Heer, J. Mackinlay, C. Stolte, and M. Agrawala. Graphical histories for visualization: Supporting analysis, communication, and evaluation. IEEE transactions on visualization and computer graphics, 14(6):1189-1196, 2008.
344
+
345
+ [41] N. Henry and J.-D. Fekete. Evaluating visual table data understanding. In Proceedings of the 2006 AVI workshop on Beyond time and errors: novel evaluation methods for information visualization, pp. 1-5, 2006.
348
+
349
+ [42] N. Henry and J.-D. Fekete. Matrixexplorer: a dual-representation system to explore social networks. IEEE transactions on visualization and computer graphics, 12(5):677-684, 2006.
352
+
353
+ [43] N. Henry and J.-D. Fekete. Matlink: Enhanced matrix visualization for analyzing social networks. In Human-Computer Interaction-INTERACT 2007, vol. 4663, pp. 288-302. Springer, 2007.
354
+
355
+ [44] N. Henry, J.-D. Fekete, and M. J. McGuffin. Nodetrix: a hybrid visualization of social networks. IEEE transactions on visualization and computer graphics, 13(6):1302-1309, 2007.
356
+
357
+ [45] H. Hinterberger. The visulab: An instrument for interactive, comparative visualization. Technical Report/ETH Zurich, Department of Computer Science, 682, 2010.
358
+
359
+ [46] J. Ivanisevic, H. P. Benton, D. Rinehart, A. Epstein, M. E. Kurczy, M. D. Boska, H. E. Gendelman, and G. Siuzdak. An interactive cluster heat map to visualize and explore multidimensional metabolomic data. Metabolomics, 11:1029-1034, 2015.
360
+
361
+ [47] N. Jardine, B. D. Ondov, N. Elmqvist, and S. Franconeri. The perceptual proxies of visual comparison. IEEE transactions on visualization and computer graphics, 26(1):1012-1021, 2019.
362
+
363
+ [48] W. Javed and N. Elmqvist. Exploring the design space of composite visualization. In 2012 ieee pacific visualization symposium, pp. 1-8. IEEE, 2012.
364
+
365
+ [49] W. Javed, B. McDonnel, and N. Elmqvist. Graphical perception of multiple time series. IEEE transactions on visualization and computer graphics, 16(6):927-934, 2010.
366
+
367
+ [50] W. Jentner, G. Lindholz, H. Hauptmann, M. El-Assady, K.-L. Ma, and D. Keim. Visual analytics of co-occurrences to discover subspaces in structured data. ACM Transactions on Interactive Intelligent Systems, 2023.
368
+
369
+ [51] G.-L. Jiang. Molecular markers and marker-assisted breeding in plants. Plant breeding from laboratories to fields, 3:45-83, 2013.
370
+
371
+ [52] M. John, C. Tominski, and H. Schumann. Visual and analytical extensions for the table lens. In Visualization and Data Analysis 2008, vol. 6809, pp. 62-73. SPIE, 2008.
372
+
373
+ [53] J. Kehrer, H. Piringer, W. Berger, and M. E. Gröller. A model for structure-based comparison of many categories in small-multiple displays. IEEE transactions on visualization and computer graphics, 19(12):2287-2296, 2013.
374
+
375
+ [54] D. A. Keim and H.-P. Kriegel. Visdb: Database exploration using multidimensional visualization. IEEE Computer Graphics and Applications, 14(5):40-49, 1994.
376
+
377
+ [55] C. K. Leung, E. W. Madill, and A. Pazdor. Visualization and visual knowledge discovery from big uncertain data. In 2022 26th International Conference Information Visualisation (IV), pp. 330-335. IEEE, 2022.
378
+
379
+ [56] G. Li, R. Li, Z. Wang, C. H. Liu, M. Lu, and G. Wang. Hitailor: Interactive transformation and visualization for hierarchical tabular data. IEEE Transactions on Visualization and Computer Graphics, 29(1):139-148, 2022.
380
+
381
+ [57] J. Liu, A. Prouzeau, B. Ens, and T. Dwyer. Design and evaluation of interactive small multiples data visualisation in immersive spaces. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 588-597. IEEE, 2020.
382
+
383
+ [58] S. Liu, Y. Liu, X. Yang, C. Tong, D. Edwards, I. A. Parkin, M. Zhao, J. Ma, J. Yu, S. Huang, et al. The brassica oleracea genome reveals the asymmetrical evolution of polyploid genomes. Nature communications, 5(1):3930, 2014.
384
+
385
+ [59] X. Liu and H.-W. Shen. The effects of representation and juxtaposition on graphical perception of matrix visualization. In Proceedings of the 33rd annual ACM conference on Human factors in computing systems, pp. 269-278, 2015.
386
+
387
+ [60] T. Loua. Atlas statistique de la population de Paris. J. Dejey & cie, 1873.
388
+
389
+ [61] M. Lu, J. Lanir, C. Wang, Y. Yao, W. Zhang, O. Deussen, and H. Huang. Modeling just noticeable differences in charts. IEEE transactions on visualization and computer graphics, 28(1):718-726, 2021.
390
+
391
+ [62] S. L'Yi, J. Jo, and J. Seo. Comparative layouts revisited: Design space, guidelines, and future directions. IEEE Transactions on Visualization and Computer Graphics, 27(2):1525-1535, 2020.
394
+
395
+ [63] S. L'Yi, Q. Wang, F. Lekschas, and N. Gehlenborg. Gosling: A grammar-based toolkit for scalable and interactive genomics data visualization. IEEE Transactions on Visualization and Computer Graphics, 28(1):140-150, 2021.
398
+
399
+ [64] M. Meyer, T. Munzner, and H. Pfister. Mizbee: a multiscale synteny browser. IEEE transactions on visualization and computer graphics, 15(6):897-904, 2009.
400
+
401
+ [65] I. Milne, P. Shaw, G. Stephen, M. Bayer, L. Cardle, W. T. Thomas, A. J. Flavell, and D. Marshall. Flapjack-graphical genotype visualization. Bioinformatics, 26(24):3133-3134, 2010.
402
+
403
+ [66] B. B. Misra. New software tools, databases, and resources in metabolomics: Updates from 2020. Metabolomics, 17(5):49, 2021.
404
+
405
+ [67] S. Mitra, B. Klar, and D. H. Huson. Visual and statistical comparison of metagenomes. Bioinformatics, 25(15):1849-1855, 2009.
406
+
407
+ [68] NHGRI. https://www.genome.gov/about-genomics/fact-sheets/A-Brief-Guide-to-Genomics. Accessed: Jan. 2020.
408
+
409
+ [69] C. Niederer, H. Stitz, R. Hourieh, F. Grassinger, W. Aigner, and M. Streit. Taco: visualizing changes in tables over time. IEEE transactions on visualization and computer graphics, 24(1):677-686, 2017.
410
+
411
+ [70] H. Nijveen, M. van Kaauwen, D. G. Esselink, B. Hoegen, and B. Vosman. Qualitysnpng: a user-friendly snp detection and visualization tool. Nucleic acids research, 41(W1):W587-W590, 2013.
412
+
413
+ [71] J. R. Nuñez, C. R. Anderton, and R. S. Renslow. Optimizing colormaps with consideration for color vision deficiency to enable accurate interpretation of scientific data. PloS one, 13(7):e0199239, 2018.
414
+
415
+ [72] S. Nusrat, T. Harbig, and N. Gehlenborg. Tasks, techniques, and tools for genomic data visualization. In Computer Graphics Forum, vol. 38, pp. 781-805. Wiley Online Library, 2019.
416
+
417
+ [73] B. Ondov, N. Jardine, N. Elmqvist, and S. Franconeri. Face to face: Evaluating visual comparison. IEEE transactions on visualization and computer graphics, 25(1):861-871, 2018.
418
+
419
+ [74] World Health Organization. Genomics and world health: Report of the advisory committee on health research, 2002.
420
+
421
+ [75] C. Perin, P. Dragicevic, and J.-D. Fekete. Revisiting bertin matrices: New interactions for crafting tabular visualizations. IEEE transactions on visualization and computer graphics, 20(12):2082-2091, 2014.
422
+
423
+ [76] C. Perin, R. Vuillemot, and J.-D. Fekete. À table! improving temporal navigation in soccer ranking tables. In Proceedings of the SIGCHI conference on human factors in computing systems, pp. 887-896, 2014.
424
+
425
+ [77] J. A. Poland and T. W. Rife. Genotyping-by-sequencing for plant breeding and genetics. The plant genome, 5(3), 2012.
426
+
427
+ [78] L. A. Pray. Semi-conservative dna replication: Meselson and stahl. Nature Education, 1(1):98, 2008.
428
+
429
+ [79] R. Rao and S. K. Card. The table lens: merging graphical and symbolic representations in an interactive focus+ context visualization for tabular information. In Proceedings of the SIGCHI conference on Human factors in computing systems, pp. 318-322, 1994.
430
+
431
+ [80] S. B. Reiff, A. J. Schroeder, K. Kırlı, A. Cosolo, C. Bakker, L. Mercado, S. Lee, A. D. Veit, A. K. Balashov, C. Vitzthum, et al. The 4d nucleome data portal as a resource for searching and visualizing curated nucleomics data. Nature communications, 13(1):2365, 2022.
432
+
433
+ [81] C. Ripken, S. Tusk, and C. Tominski. Immersive analytics of heterogeneous biological data informed through need-finding interviews. Proc. EuroVA, 2021.
434
+
435
+ [82] M. M. N. Rocha and C. G. da Silva. Heatmap matrix: a multidimensional data visualization technique. In Proceedings of the 31st Conference on Graphics, Patterns and Images (SIBGRAPI), 2018.
436
+
437
+ [83] S. Rufiange and G. Melançon. Animatrix: A matrix-based visualization of software evolution. In 2014 second IEEE working conference on software visualization, pp. 137-146. IEEE, 2014.
438
+
439
+ [84] N. Shah, M. V. Teplitsky, S. Minovitsky, L. A. Pennacchio, P. Hugenholtz, B. Hamann, and I. L. Dubchak. Snp-vista: an interactive snp visualization tool. BMC bioinformatics, 6:1-7, 2005.
440
+
441
+ [85] H. Song, B. Lee, B. H. Kim, and J. Seo. Diffmatrix: Matrix-based interactive visualization for comparing temporal trends. In EuroVis (Short Papers), 2012.
444
+
445
+ [86] A. Srinivasan, M. Brehmer, B. Lee, and S. M. Drucker. What's the difference? evaluating variations of multi-series bar charts for visual comparison tasks. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1-12, 2018.
446
+
447
+ [87] J. Talbot, V. Setlur, and A. Anand. Four experiments on the perception of bar charts. IEEE transactions on visualization and computer graphics, 20(12):2152-2160, 2014.
448
+
449
+ [88] C. Tominski. Comparing: reducing costs of visual comparison. In Proceedings of the Eurographics/IEEE VGTC Conference on Visualization: Short Papers, pp. 137-141, 2016.
450
+
451
+ [89] C. Tominski, C. Forsell, and J. Johansson. Interaction support for visual comparison inspired by natural behavior. IEEE Transactions on visualization and computer graphics, 18(12):2719-2728, 2012.
452
+
453
+ [90] C. Tominski, S. Gladisch, U. Kister, R. Dachselt, and H. Schumann. Interactive lenses for visualization: An extended survey. In Computer Graphics Forum, vol. 36, pp. 173-200. Wiley Online Library, 2017.
454
+
455
+ [91] T. von Landesberger. Insights by visual comparison: The state and challenges. IEEE computer graphics and applications, 38(3):140-148, 2018.
456
+
457
+ [92] E. Wall, S. Das, R. Chawla, B. Kalidindi, E. T. Brown, and A. Endert. Podium: Ranking data using mixed-initiative visual analytics. IEEE transactions on visualization and computer graphics, 24(1):288-297, 2017.
458
+
459
+ [93] A. P. Wilkey, A. V. Brown, S. B. Cannon, and E. K. Cannon. Gcvit: a method for interactive, genome-wide visualization of resequencing and snp array data. BMC genomics, 21(1):1-9, 2020.
460
+
461
+ [94] L. Wilkinson and M. Friendly. The history of the cluster heat map. The American Statistician, 63(2):179-184, 2009.
462
+
463
+ [95] J. Wolff, L. Rabbani, R. Gilsbach, G. Richard, T. Manke, R. Backofen, and B. A. Grüning. Galaxy hicexplorer 3: a web server for reproducible hi-c, capture hi-c and single-cell hi-c data analysis, quality control and visualization. Nucleic acids research, 48(W1):W177-W184, 2020.
464
+
465
+ [96] E. Wu. View composition algebra for ad hoc comparison. IEEE Transactions on Visualization and Computer Graphics, 28(6):2470-2485, 2022.
466
+
467
+ [97] Y. Yang, W. Xia, F. Lekschas, C. Nobre, R. Krüger, and H. Pfister. The pattern is in the details: An evaluation of interaction techniques for locating, searching, and contextualizing details in multivariate matrix visualizations. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pp. 1-15, 2022.
468
+
469
+ [98] I. Yarbrough, Q. Sun, D. Reeves, K. Hackman, R. Bennett, and D. Henshel. Visualizing building energy demand for building peak energy analysis. Energy and Buildings, 91:10-15, 2015.
470
+
471
+ [99] J. Zhao, M. Karimzadeh, L. S. Snyder, C. Surakitbanharn, Z. C. Qian, and D. S. Ebert. Metricsvis: A visual analytics system for evaluating employee performance in public safety agencies. IEEE transactions on visualization and computer graphics, 26(1):1193-1203, 2019.
472
+
473
+ [100] X. Zhu, Y. Zhang, Y. Wang, D. Tian, A. S. Belmont, J. R. Swedlow, and J. Ma. Nucleome browser: an integrative and multimodal data navigation platform for 4d nucleome. Nature Methods, 19(8):911-913, 2022.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/ivIPr2ukrwk/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,253 @@
1
+ § SUPPORTING VISUAL COMPARISON AND PATTERN IDENTIFICATION IN WIDESCALE GENOMIC DATASETS
2
+
3
+ Category: Research
4
+
5
+ <graphics>
6
+
7
+ Figure 1: Visualization comparing SNPs at a specific genomic region in 52 varieties of Canola. The varieties are ordered by their increasing levels of aliphatic glucosinolates. Every SNP is coloured blue (match) or red (mismatch) based on its similarity to the reference variety at the top; missing SNPs are encoded in white. A reference map of phenotypic trait values is shown at left, and connections to genomic location are shown at bottom.
8
+
9
+ § ABSTRACT
10
+
11
+ Large-scale linear datasets are often visualized using a tabular structure (rows and columns). Visual analysis tasks in such systems involve comparisons and identification of patterns across rows and columns, but these tasks can be hard to perform as the table increases in size because rows and columns of interest can be far apart in the table. This problem is particularly evident in table visualizations of genomic datasets like SNPs, which are genetic markers used in comparing different variants of an organism. Visual analysis of SNP datasets has a wide range of applications in plant breeding, genome-wide association studies, and pharmacogenetics. However, current SNP visualizations are limited in their support for complex analytic tasks in wide-scale tables. Through ongoing collaborations with genomic researchers and plant breeders, we have identified a set of new interaction requirements for visual analysis of SNP datasets, and we have developed a new visualization tool with new interaction techniques that satisfy the requirements. Our requirements and techniques provide new understanding of how to support complex visual analysis in large-scale table visualizations.
12
+
13
+ Index Terms: Human-centered computing-Visualization-Visualization systems and tools-Visualization toolkits; Human-centered computing-Interaction design-Interaction design process and methods-User Interface design.
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ In many visual-analytics domains, analysts use wide linear datasets that have many features or observations about each set of entities - e.g., genomic data, time-series data, sequential documents, or population data. These datasets are often displayed using table visualizations in which each cell's value is encoded using a visual variable such as colour (e.g., [65, 75, 94]). A main goal in working with table visualizations is to find insights that are based on seeing patterns in the visualized data: e.g., determining that a particular row is different from a reference row in an important way, that a particular column shows a pattern across the different rows, or that two columns show a similar (or contrasting) pattern to each other. These tasks involve two main activities in the visual workspace: finding patterns in the rows and columns that indicate potential correlations, and comparing rows or columns (either to a reference or to other parts of the data).
18
+
19
+ In the genomics domain, a common example of wide datasets is Single Nucleotide Polymorphism data: SNPs are genetic differences between genomes at a single base pair, and can be important in understanding the relationship of an organism's genotype to its phenotype (i.e., its observable traits). For example, SNP analysis is extremely common in plant breeding research, since SNPs have proven to be important markers for desirable crop traits such as flowering time, disease resistance, or protein content.
20
+
21
+ Plant breeders and genomic researchers are now able to quickly and easily produce datasets that collect large sets of SNPs (numbering from hundreds to tens of thousands) for many different varieties of a crop - e.g., Figure 1 shows a visualization of SNPs in 52 varieties of Canola. When a collection of SNPs is inherited together near a common locus, it is referred to as a haplotype because it indicates a potential genetic linkage. Studying these clusters of SNPs and the DNA around their locations can help researchers identify specific mutations that affect the plant's characteristics, and can help breeders identify candidate genes for future crossings (although SNPs occur in both genes and in non-coding regions).
22
+
23
+ Many tools have been introduced that visualize SNP haplotypes, but few systems have focused on the interactions that breeders and genomic researchers need to carry out during exploratory investigations. Current tools are limited in their support for visual exploration - particularly in terms of lightweight visual comparisons in the wide datasets that are now common in breeding (e.g., tens of thousands of columns). For example, mechanisms for navigating and comparing different columns in wide tables are of particular importance because most genetic locations in a plant's genome have dependencies and related locations that may be far away (e.g., due to the polyploid nature of many plant genomes, which leads to multiple copies of genes).
24
+
25
+ To better support visual exploration in wide datasets, we have been working with genomic researchers and plant breeders for the past five years to identify specific analysis tasks in large SNP tables, and interaction requirements that will support those tasks. We identified the following six specific requirements:
26
+
27
+ * Flexible and fast re-ordering mechanisms so that users can quickly look at several arrangements of the SNP table (e.g., different domain-specific clustering and sorting methods as well as manual re-ordering);
28
+
29
+ * Lightweight row comparisons that allow temporary changes to encodings so that a quick comparison can be made without altering the overall organization of the table (e.g., being able to check the difference between two rows without re-setting the reference row);
30
+
31
+ * Comparisons between related columns that allow multiple genetic locations to be compared even if they are far away in the table (e.g., comparing SNPs at two locations that have orthologous genes);
32
+
33
+ * Flexible encoding of differences that allows users to rapidly switch between the variety of ways in which the "difference" between two plant varieties can be shown (e.g., alternate colour schemes to show the existence of difference from a reference, 'cascading' differences, the type of difference, or the specific details for both varieties);
34
+
35
+ * Support for location awareness because the scale and organization of SNP table visualizations can lead to difficulty in tracking where a SNP is in the plant's genome (e.g., whether a SNP is in an important region that is known to control other traits);
36
+
37
+ * Managing and revisiting table configurations to simplify navigation through the huge "configuration space" of ways that the user's current view of the table can be ordered, encoded, and positioned (e.g., keeping track of what other clustering approaches have been tried, or how to get back to a previously-viewed configuration of the table).
38
+
39
+ We have developed a new SNP-haplotype viewer that provides novel interaction techniques to meet these requirements. The viewer provides lightweight mechanisms for arranging the table, comparing rows and columns, and looking at different encodings; it also shows explicit information about genomic and table location, and includes a 'configuration snapshot' tool that provides automatic and manual saving of configurations as well as visualization of the saved states so that they can be compared, revisited, and annotated.
40
+
41
+ Our work makes two main contributions: first, we identify several new interaction requirements for visual analysis of wide linear datasets - these arise from our collaborations in the plant-breeding domain, but there are several applications of the requirements to other types of wide tabular data; and second, we demonstrate new interaction techniques that can satisfy those requirements in a working genomics visualization tool. Our SNP-haplotype visualization is open-source and is freely available at [address removed for review].
42
+
43
+ § 2 BACKGROUND AND RELATED WORK
44
+
45
+ Three areas of prior work underlie our research: systems and techniques for table visualizations, techniques for and studies of visual comparison, and genomic visualizations of SNP data.
46
+
47
+ § 2.1 VISUALIZATIONS OF TABLES
48
+
49
+ Tables have long been a standard way of communicating structured information using spatial layout. Table visualizations - which encode each cell's data value with a visual variable (e.g., colour, size, or position within the cell) - have also been in use for more than a century, and have been well known since Bertin's work (e.g., [8] and others as reviewed by Perin et al [75]). Table visualizations (sometimes called heat maps or colour-shaded matrices) allow large tables to be inspected and explored in a relatively small space, and tools for making visual tables are now a standard part of many visualization systems such as Tableau (tableau.com), PowerBI (powerbi.microsoft.com), and ggplot2 (ggplot2.tidyverse.org).
50
+
51
+ Table visualizations have been used in many different ways and in many different domains: for example, to summarize the characteristics of a set of locations [8, 41, 60]; to show the magnitude of a variable of interest (e.g., expression level or abundance of ions) for different samples [46, 66, 94]; to explore student engagement in online classes [19]; to explore database contents [54]; to show interactions in social networks [32]; to analyse energy demand over time for different buildings [98]; or to track employee performance through a set of criteria [99].
52
+
53
+ Some of the primary goals when visualizing tables are to help users understand relationships between the entities represented in the table's rows, the features or characteristics represented in its columns, and associations between rows and columns. Analytics work in many domains where table visualizations are used is often open-ended and under-specified: for example, in the domain of genomics, Nusrat states "data visualization is essential for interpretation and hypothesis generation as well as a valuable aid in communicating discoveries. Visual tools bridge the gap between algorithmic approaches and the cognitive skills of investigators. [...] A key challenge in data-driven research is to discover unexpected patterns and to formulate hypotheses in an unbiased manner in vast amounts of genomic and other associated data" ([72], p. 781).
54
+
55
+ Within this context, researchers have investigated many different aspects of designing, interpreting, and interacting with table visualizations. First, several projects have considered the problem of generating table visualizations: for example, Perin and colleagues revisited Bertin's early methodology for producing visual encodings inside table cells, and developed a tool for interactively creating table visualizations with a range of visual variables [75]; others have developed tools for quickly creating table visualizations from spreadsheets [10] and arbitrary CSV files [12]. Researchers have also considered how to provide access to the table's values within the visualization: for example, Rao and Card's Table Lens provided a bar-chart encoding of cell values and mechanisms for quickly sorting by column, and used a focus+context mechanism to allow detailed inspection of certain rows within the graphic presentation [79]; the Table Lens has also been extended by other researchers to allow multiple colour maps and clustering support [52]. A different approach was explored by Han and Nacenta, who created "Fat Fonts" that show both a scalar value and provide a visual representation of the value through amount of ink [37]. Table representations have also been adapted to show hierarchical data (e.g., [25, 56]).
56
+
57
+ Second, many researchers have investigated ways of ordering and arranging a table to best reveal patterns in the data. Careful manual arrangement of rows and columns was an important part of Bertin's original methodology [8], and many tools allow manual reordering of rows and columns. However, with larger datasets, manual ordering is not feasible, so automated algorithms for clustering or "pattern mining" [23, 50] are often employed - these can use similarity (e.g., genetic similarity) to create a tree from the table's rows [94], or can look for visual patterns in the table data (e.g., [9, 21, 52, 55, 75]).
58
+
59
+ Third, many systems provide explicit support for specific tasks, such as ranking candidates (e.g., [35,92]), interactively looking for patterns (e.g., [9]), navigating through versions of tables that change over time (e.g., [76]), extracting and comparing data subsets from different tables (e.g., [33]), interaction techniques for working with event sequences [36], or dimensionality reduction (e.g., [9,25]).
60
+
61
+ Finally, matrix visualizations are a subtype of table visualizations in which the two dimensions of the table represent the same features for two entities, and each cell represents a degree of association between the entities for that feature. Matrix visualizations have also been used in many domains: for example, to show graphs and networks (e.g., [7, 43, 44]), term co-occurrence (e.g., [32]), genomic similarity (e.g., [38]), physical connections in folded structures (e.g., [20, 95]), software evolution (e.g., [83]), or classification errors (i.e., confusion matrices [31]). Researchers have also investigated several novel representations for matrices, including dual views that pair a matrix with its corresponding node-link diagram [42], integration of matrices into existing node-link structures [44], extensions that allow display of multivariate data [97], and 'matrices of heatmaps' to increase the number of dimensions that can be shown [82].
62
+
63
+ § 2.2 SUPPORTING VISUAL COMPARISONS IN VISUALIZATIONS
64
+
65
+ Comparisons are a common and frequent task in visual analytics, and techniques for supporting comparison have been widely studied. Many techniques can be classified using the three approaches proposed by Gleicher: juxtaposition, superimposition, and explicit encoding [27, 28, 62]. Juxtaposition involves placing visualizations in close proximity, in order to allow users to see similarities and differences in parallel parts of the visualizations - e.g., if two line charts are presented side by side, viewers can compare values and trends in the charts (as long as all representations use the same layout and scale so that visual differences accurately reflect differences in the underlying data). A common technique that juxtaposes several visualizations is the small-multiples method [8]: each of the multiples has a similar layout but different data, allowing comparisons by looking across the images. This idea has been used in many ways, including well-known techniques such as scatterplot matrices [45], as well as extensions to immersive environments (e.g., [57]). Juxtaposition can also be achieved interactively: for example, Tominski's CompaRing approach brings comparison candidates close to the cursor when the user selects an object [88].
66
+
67
+ Superimposition involves putting two datasets in the same visualization so that differences are visible in the same reference frame - e.g., instead of showing two line charts side by side, the two lines can be shown in the same chart. Because the datasets share a reference frame, similarities and differences can be seen more clearly. However, this method has the problem of clutter: the density of some representations means that they do not work well as overlays (e.g., space-filling methods or dense data spaces), and the approach works best with sparse data (although the visual presentations can be adjusted to reduce occlusion).
68
+
69
+ Explicit encoding of a comparison involves creating a new dataset that directly represents a specific comparison, and then visualizing that dataset - e.g., the data from two line charts can be used to create a new dataset showing the difference between the lines, and then this new dataset can be shown explicitly as a new line (either in addition to or instead of the existing lines). There are many types of explicit encoding that are possible: for example, showing the existence of differences, the magnitude of differences, or the type of differences (limited only by the ways in which two datasets can be compared) [69]. Researchers have demonstrated several explicit-encoding methods in visualization research, including colour-based differences (e.g., showing same/different colouring, or amount of difference), "diff matrices" that show differences between pairs of lines, displayed in a matrix [85], annotations that indicate differences in one of the representations being compared (e.g., coloured lines showing missing or added elements in a tree [11]), differences between tables at different time periods [69], changes between video frames [13], or "shine-through" representations to highlight differences in overlays [89].
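The line-chart example above can be written down directly: the sketch below derives an explicit difference series from two aligned series so that it can be plotted as its own line. The types and function name are illustrative assumptions, and the sketch assumes both series share the same x positions.

```ts
// A minimal sketch of explicit encoding: compute a new dataset (the
// pointwise difference of two series) and visualize it as its own line.

interface Point { x: number; y: number; }

function differenceSeries(a: Point[], b: Point[]): Point[] {
  // Assumes a and b are aligned: a[i].x === b[i].x for every i.
  return a.map((p, i) => ({ x: p.x, y: p.y - b[i].y }));
}

// e.g. differenceSeries([{x:0,y:3},{x:1,y:5}], [{x:0,y:1},{x:1,y:6}])
//   -> [{x:0,y:2},{x:1,y:-1}]
```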
70
+
71
+ Researchers have also extended Gleicher's three basic categories to include other representations. Different visualizations can be presented sequentially in the same location, either using the idea of Rapid Serial Visual Presentation (RSVP) [5], or using animation to smoothly morph from one dataset to another [22]. This technique is a combination of juxtaposition and superimposition using time (i.e., temporal juxtaposition), and can address the occlusion problem while still making use of the common spatial frame. Tominski showed a variation on this idea in a technique that allowed the user to 'peel back' a top representation to look at the bottom representation [89]. Other researchers have extended the idea of juxtaposition by nesting one visualization inside another, which allows different types of comparisons [49], and have introduced the concept of overloading one representation with details from another - e.g., showing graph elements that are present in one visualization but not in another [48].
72
+
73
+ In addition to comparison approaches based on spatial layout, researchers have also considered the actions and interactions that are part of visual comparison tasks. For example, von Landesberger specified the workflow involved in a visual comparison task [91]; Wu developed a "view composition algebra" to understand and compose actions in ad-hoc comparison settings [96]; Jardine and colleagues investigated the low-level perceptual processes involved in visual comparison [47]; and Kehrer and colleagues defined a formal model of category comparisons in small-multiple displays [53]. An additional higher-level consideration is the amount of effort required to carry out a visual comparison - low-effort techniques are critically important for supporting effective exploration of large datasets. A few researchers have explicitly focused on effort reduction - for example, Tominski's CompaRing, which reduced the steps required to bring comparators into juxtaposition [88].
74
+
75
+ Several studies have also been conducted to look at the performance of different techniques for supporting visual comparisons. Early perceptual studies investigated performance on comparisons between elements in bar charts [14] and individual differences in same/different visual comparison tasks [15]. Several studies have followed up on these results to look at comparisons in standard chart types (e.g., [86, 87]), the effect of chart size and space usage on interpretation [39], and the effect of glyph types on reading and comparing time-series visualizations [24]. Several researchers have evaluated the basic processes involved in visual comparison: for example, Lu and colleagues created a model of just-noticeable differences as the basis of visual comparison, and explored this idea with bar charts, bubble charts, and pie charts [61], and Ondov and colleagues studied low-level perceptual tasks to compare performance in several presentation styles (overlays, small multiples, and animated transitions) [73]. Other studies have considered specific representations or analysis scenarios: for example, user performance in visual comparison, slope estimation, and discrimination tasks for multiple time-series visualizations [49]; the performance of square and triangular matrix representations as well as different methods of matrix juxtaposition [59]; the effectiveness of small multiples compared to animated transitions for seeing changes in graphs [2]; and user performance when comparing ranked data in tables [6].
76
+
77
+ § 2.3 GENOMIC VISUALIZATIONS AND SNP HAPLOTYPES
78
+
79
+ There are many types of genomic visualization that are used to show a wide range of information - for example, sequences and sequence alignment, levels of gene expression or ion abundance, conserved regions of the genome (i.e., synteny), or structural variation across different samples (e.g., [1, 3, 18, 63, 64, 72, 80, 100] - see [72] for a broad survey). In particular, recent advances in sequencing capabilities and the increasing availability of genomic data have led to the use of genetic analysis and genomic visualization in the domain of plant breeding, where one of the main goals is to connect a crop plant's genotype to its phenotype - the observable characteristics or traits of the plant. Plant breeders and genomic scientists investigate how genetics affect important crop traits such as oil and protein content, plant height, resistance to disease, or heat tolerance; this knowledge can be used to create hypotheses and choose candidates for breeding in order to try and introduce and retain desirable traits [51].
80
+
81
+ Although complete sequencing of individual genomes is still time-consuming, it has become feasible to identify large numbers of genetic markers in a genome using the "genotyping-by-sequencing" approach [17, 77] that generates sets of markers called SNPs for a variety. SNP markers are often associated with differences in traits of interest, and so SNP visualizations are an important part of marker-assisted breeding [51].
82
+
83
+ Several systems have been developed for showing SNP data, including capabilities in general-purpose genomic visualization tools (e.g., JBrowse [18] or Gosling [63]) as well as dedicated applications such as Haploview [4], Flapjack [65], SNP-Vista [84], or GCViT [93]. These systems often show table visualizations with individuals in rows and SNPs in columns, as well as association matrices that show co-occurrence of different alleles within a haplotype group [4], or histograms of SNP counts within a given window size [93]. Many tools provide clustering capabilities (e.g., using a genetic-similarity dendrogram [84]) as well as interactive zoom to let users see details of the alleles (e.g., the actual nucleotides). A few tools are paired with algorithms for conducting genome-wide association studies (GWAS) that look for correlations between SNPs and measured traits of interest (e.g., [30]). However, current genomic visualizations still have many limitations in their support for the task of interactive visual comparison, although a few examples of research that focuses on comparison do exist: very early work developed diagrammatic methods for comparing DNA sequences [26]; Glueck and colleagues developed the PhenoBlocks visualization with the goal of supporting comparisons across phenotypes [29]; Mitra and colleagues developed methods for comparing metagenomic datasets [67]; and Ripken and colleagues conducted requirements interviews with biologists about working with genomic data in a VR environment for immersive analytics - the identified requirements included the need to compare data subsets, and the need to flexibly reorder and group the data [81].
84
+
85
+ A specific limitation of current SNP-haplotype viewers is that most tools have been primarily built for analysis of diploid genomes (e.g., humans or animals) whereas plants are often polyploid, with multiple copies of each gene [58]; breeders and researchers often need to consider the effects of all orthologous locations together during exploration, but simultaneous visual access to orthologues is not well supported in most tools. The drawbacks of current tools and our collaborations with plant breeders and genomic researchers led us to the new requirements and visual features described below.
86
+
87
+ § 3 APPLICATION DOMAIN
88
+
89
+ To contextualize the design of a visualization tool for SNPs, we provide an overview of the biological background for the domain, and a characterization of the dataset used in the visualization.
90
+
91
+ § 3.1 BIOLOGICAL BACKGROUND
92
+
93
+ Genomics research involves the study of an organism's DNA in order to understand its structure, function, and evolution [74,78]. An organism's complete set of DNA is called its genome, consisting of a large set of nucleotides that encode the instructions responsible for the organism's development and function [68]. There are four nucleotide bases - Adenine (A), Guanine (G), Cytosine (C) and Thymine (T). A variation in a single nucleotide in the genome at a specific position is called a Single Nucleotide Polymorphism or SNP. These variations tend to exist in a significant fraction of the population (1% or more) and the different variants of a particular SNP are called alleles. When a set of SNPs that are adjacent to each other in the genome are inherited together they are referred to as a haplotype. Mapping the location of these haplotypes can help researchers in classifying different variant populations.
94
+
95
+ § 3.2 DATA CHARACTERIZATION
96
+
97
+ SNP data can be represented in different types of files such as VCF (Variant Call Format) or Hapmap (Haplotype Map) and is often analyzed in combination with additional data sources such as a GFF (General Feature Format) file for position of genes, and a phenotypic-trait table. At the most basic level, however, SNP data is ordered based on genomic position and classified according to the population line (variety) such that each SNP has the following features:
98
+
99
+ * Identifier: Every SNP is given a unique identifier that is common across all the different parental lines of a single organism.
100
+
101
+ * Possible Alleles: The different nucleotide variants that exist for a SNP; while most common SNPs have two alleles, triallelic SNPs have been identified in human genomes.
102
+
103
+ * Position: The location of a SNP in the genome, typically encoded relative to a chromosome.
104
+
105
+ * Value: The nucleotide variant present in the given population line; the value can be empty when the data is missing.
106
+
107
+ Table visualizations of SNPs use the inherent ordering, and then build a table at the genome, chromosome, or region level. Other datasets can supplement the SNP information to indicate, for example, the gene that the SNP is on, or copy number variations at that genomic location. In addition, other data sources can describe each variety - e.g., phenotypic traits such as flowering time, protein content, or seed size, or dendrogram trees that cluster the lines based on their genetic distance. These additional datasets are primarily used to control the order of the rows; a minimal sketch of the resulting record structure is given below.
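The sketch below captures the per-SNP features listed above as a record type, plus a genotype table keyed by variety and SNP identifier. All field and type names are our own illustrative choices, not a prescribed schema from VCF or Hapmap.

```ts
// A sketch of the per-SNP record described above (names are illustrative).

interface SnpRecord {
  id: string;          // unique identifier shared across all parental lines
  alleles: string[];   // possible nucleotide variants, e.g. ["A", "G"]
  chromosome: string;  // position is encoded relative to a chromosome
  position: number;    // base-pair offset within that chromosome
}

// A genotype table keyed by variety (row) and SNP id (column);
// a missing call is represented as null.
type GenotypeTable = Map<string /* variety */, Map<string /* SNP id */, string | null>>;
```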
108
+
109
+ § 4 REQUIREMENTS FOR SNP-HAPLOTYPE ANALYSIS
110
+
111
+ We have been working with genomic researchers and plant breeders over the past five years to understand user tasks and requirements for visual exploration in genomic datasets. Our collaborating research groups are interested both in producing new crop variants that have improved agronomic or nutrition traits, and also in exploring genetic evidence for hypotheses about physiological mechanisms and plant evolution (e.g., (removed for anonymity)). Requirements analysis has been carried out in an iterative and collaborative fashion with these research groups, and we have developed and deployed several versions of our haplotype visualization - the prototypes have been used as a foundation for discussions about user tasks and visual-exploration needs. Based on our discussions, we have identified the following requirements that go beyond what is available in current SNP visualizations:
112
+
113
+ ${R}_{1}$ . Flexible and fast re-ordering mechanisms. Genomic crop analysis involves looking for associations between SNPs, genes, and traits of interest - and to do this, users need to be able to quickly look at several arrangements of the SNP table. For example, ordering rows by genetic similarity, sorting by a measured trait, clustering by allele group for a particular SNP, or arranging rows manually (based on the user's knowledge of the varieties) are all common manipulation methods for our collaborators. In addition, it is valuable to be able to move between these different arrangements quickly and easily.
114
+
115
+
116
+
117
+ Figure 2: The SNP browser’s three main views: genome-level overview (top left); chromosome-level view with highlighted viewfinder rectangle (top right); region view with match/difference colouring (bottom left); region view with nucleotide colouring (bottom right).
118
+
119
+ ${R}_{2}$ . Lightweight row comparisons. Because there are many ways in which varieties can be compared, users need lightweight mechanisms for quickly seeing how one row compares to another without changing the global ordering of the table. In addition to simple selection of a reference variety that changes the global visualization, there is a need for low-effort ways of comparing any two given rows. For example, in a table that is colour-coded based on differences from a single reference, users need a way to do a quick comparison of the differences between two varieties without changing the overall reference.
120
+
121
+ ${R}_{3}$ . Comparisons between related columns. A genetic location in a plant genome is often related to other locations: for example, many plant species are polyploid (i.e., they have duplicate copies of genes elsewhere in the genome), and many genes also have dependencies with other parts of the genome (e.g., a gene in one location may be regulated by another). This means that users need to compare the columns of a table visualization as well as the rows - and need easy access to related locations, since a SNP table may be many thousands of columns wide.
122
+
123
+ ${R}_{4}$ . Flexible encoding of differences. There are many ways in which genomic researchers think about the "difference" between two varieties: they may be interested simply in the existence of differences between a variety and a reference; they may want to see specific differences at the nucleotide level; they may be interested in exact matches between alleles or partial matches (e.g., heterozygous nucleotide pairs); or they may want to see 'cascading' differences that build up across multiple varieties. Alternate encodings (e.g., using colour maps) can show different kinds of differences, but users need to be able to switch between encodings quickly and easily.
124
+
125
+ ${R}_{5}$ . Support for location awareness. The size of SNP table visualizations (e.g., tens of thousands of columns) means that it can be difficult for users to maintain awareness of where they are in the genome - a problem that is exacerbated by the fact that SNPs are simply ordered in the table, rather than positioned relative to their actual genomic location. As a result, it is critical that any visualization provide support for awareness of location, both at a high level (e.g., "what chromosome am I looking at?") and at a low level (e.g., "what gene is this SNP on, and how many neighboring SNPs are on the same gene?").
126
+
127
+ ${R}_{6}$ . Managing and revisiting table configurations. With multiple ordering mechanisms, multiple colour encodings, and zoom and pan navigation, there are an enormous number of possible configurations for the table visualization. It can be very difficult for users to remember where they have been in this "configuration space" and how they can get back to a previous configuration (e.g., to show a pattern to a colleague or to revisit a previous candidate). Although provenance tools have been introduced for several visualization systems (e.g., [16, 34]), no current genomic visualization systems (to our knowledge) provide any support for this requirement.
128
+
129
+ § 5 SYSTEM OVERVIEW
130
+
131
+ Our haplotype browser is a web-based application for visualizing and exploring SNP groups across multiple varieties (parental lines) of crop species such as Canola (Brassica napus), lentil (Lens culinaris), or wheat (Triticum aestivum). The system provides several table visualizations at different genomic scales, with varieties in the table's rows and SNPs in the columns (see Figure 2). After the user selects or loads a data file, the system displays a genome-wide overview of all varieties and SNPs, divided into chromosomes. Since there are often many thousands of SNPs for each variety (e.g., 30,000 in the Canola dataset of Figure 2), this table is highly compressed horizontally, and so primarily serves as a consistent frame of reference that helps the user orient themselves to the data and keep track of navigational cues such as the zoom region. The main user interaction at the overview level is to select a chromosome for closer analysis, which is then displayed as a second table below the overview.
132
+
133
+ The chromosome view uses the same tabular organization as the overview, but at a higher zoom level, where users can start to identify patterns in the data and locations for closer investigation — for example, the central region of the chromosome view in Figure 2 shows that there are a number of varieties that differ in terms of several contiguous SNPs. To zoom in further on this region, the chromosome view provides a viewfinder rectangle that selects a subset for a third view that shows only the region of interest (yellow rectangle in Figure 2).
134
+
135
+
136
+
137
+ Figure 4: Genes are visualized as pointed arrowheads that indicate their position and orientation in the genome. The fine gray lines connect SNPs with their physical locations in the genome.
138
+
139
+ The region view is shown at the bottom of Figure 2. When the zoom level is high enough in this view, the names of the SNPs are shown at the top of the table, and the actual base pairs are also drawn in the table cells. In this view, several additional interactions are available. The user can pan (by dragging) and adjust the zoom level (using a slider above the view), and can hover over any cell to show a tooltip with information about the SNP and its corresponding alleles. Buttons above this view let the user move left or right across the region in small step increments to investigate neighbouring SNP clusters, and a pair of input boxes allows entry of a specific start and end position if the user is targeting a known genetic locus. All three views use the same basic encoding scheme, as described in the following section.
140
+
141
+ § 6 VISUAL ENCODING DESIGN
142
+
143
+ SNP data is primarily visualized through a simple coloured tabular grid where the level of detail changes depending on the genomic resolution. In encoding this dataset we followed previous SNP genotype visualizers (e.g., [4, 65, 70]) that plot the parental lines horizontally with colored SNP markers running vertically. We extend this design space in our visualization by providing three panels: a main SNP panel and two supporting panels of associated data, with coordinated interaction support among all three for complex analysis tasks. The main panel, visualizing the SNP markers, is at the center of our visualization. To its left is the line ordering panel that encodes the ordering of the parental lines either via a dendrogram tree or a heatmap of phenotypic traits. The final panel is positioned underneath the main panel and visualizes the genetic-to-physical location map of the SNPs and the corresponding genes around the loci. The visual encoding of all three panels is flexible and can change based on a variety of interaction and selection parameters.
144
+
145
+ § 6.1 MAIN SNP PANEL
146
+
147
+ The main table visualization has several possible colour encodings - some of these are based on comparisons of each line to a reference line (shown at the top of the table), and some based on underlying genetic information.
148
+
149
+ The first (and default) color scheme is an explicit encoding of differences to the reference line: if a SNP allele in a particular line matches the SNP allele in the reference line, it is painted blue, and if there is a mismatch, it is painted red. Since each allele is inherited from one parent, the alleles are always shown in pairs and can be homozygous (same allele in the pair) or heterozygous (different alleles in the pair). Since most SNPs have two possible alleles (for example A/C), the three possible genotypes are a homozygous pair of the first allele (AA), a homozygous pair of the second allele (CC), or a heterozygous pair of both (AC or CA). In the default color scheme, a SNP is considered to match if at least one allele in the pair is the same (and is thus painted blue). The second color scheme is a variation of the first, and ignores partially-matching SNPs - i.e., a marker is painted blue only if the alleles from both parents match the alleles in the reference SNP.
150
+
151
+ The third color scheme is used to investigate the homozygosity of SNP clusters - it paints a SNP marker blue if the pair of alleles within the SNP are the same, or red if they are different. This can help researchers isolate parental lines with a higher concentration of heterozygous SNP pairs. The fourth color scheme uses the underlying DNA, with the SNP marker colored based on the nucleotide bases present in the alleles. There are four basic colors, one for each of the four homozygous base pairs (AA, GG, CC and TT), and all heterozygous base pairs are painted purple. This colour scheme is shown in Figure 2 (bottom right), where many SNPs show two main groups with either the AA or the GG allele. A fifth and final color scheme visualizes similarity among lines in a cascaded fashion, with each line colored based on its similarity to all the lines above it. It is discussed in detail in the dynamic color scheme subsection of the interaction feature design section below, as it only works in certain scenarios depending on the number of lines being visualized. In all five color schemes, missing data - where a SNP is not present in a line or its allele is unknown - is painted white.
152
+
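+ As a concrete illustration of the first four schemes, the sketch below maps one cell's allele pair to a colour; the scheme names, colour values, and the rule used for partial matches are our assumptions based on the description above.
+
+ ```typescript
+ type AllelePair = [string, string] | null; // null = missing data
+ type Scheme = "match" | "exactMatch" | "zygosity" | "nucleotide";
+
+ // Placeholder colours for the four homozygous base pairs.
+ const BASE_COLOURS: Record<string, string> = {
+   A: "#4daf4a", G: "#ff7f00", C: "#377eb8", T: "#e41a1c",
+ };
+
+ // Sketch of the per-cell colour rules described above.
+ function cellColour(scheme: Scheme, snp: AllelePair, ref: AllelePair): string {
+   if (snp === null) return "white"; // missing data in every scheme
+   const [a, b] = snp;
+   switch (scheme) {
+     case "match": // blue if at least one allele matches the reference pair
+       return ref !== null && snp.some(x => ref.includes(x)) ? "blue" : "red";
+     case "exactMatch": // blue only if both alleles match the reference pair
+       return ref !== null && [a, b].sort().join("") === [...ref].sort().join("")
+         ? "blue" : "red";
+     case "zygosity": // homozygous pairs blue, heterozygous pairs red
+       return a === b ? "blue" : "red";
+     default: // "nucleotide": one colour per homozygous base, purple otherwise
+       return a === b ? BASE_COLOURS[a] : "purple";
+   }
+ }
+ ```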
153
+ The organisation of the table visualization is based on the genomic resolution. At the whole genome level, the SNPs are grouped into chromosomes in order to provide an overview of the dataset and also highlight large-scale patterns (e.g., large clusters of missing SNPs, either across the lines vertically or in a single line horizontally, indicating an error during sequencing or the SNP assaying process). It also provides spatial context for the user as they investigate SNP clusters in a specific region. When a chromosome has been selected, it is highlighted using a white background in the genome view. Canvas rendering at this level is optimized through an algorithm that filters out minuscule SNP variations to improve rendering speed. This optimization occurs automatically when the size of the rendered SNP markers goes below a single pixel.
154
+
155
+ In the chromosome view, painting of the SNP markers is the same as at the genome level, but with the addition of a viewfinder window that allows selection of a region for closer analysis. In the region view, SNP markers are painted using the chosen colour scheme, along with a label in each cell indicating the pair of alleles in the SNP. At this resolution, additional markers can also be painted on top of the SNPs, such as copy number variations. These are insertions or deletions in genes at specific locations across the genome, and are highlighted as circles, with white circles indicating insertions and red circles indicating deletions, as shown in Figure 3.
156
+
157
+ § 6.2 LINE ORDERING PANEL
158
+
159
+ The ordering of the different parental lines is important to researchers because several insights can be gained by identifying similar regions in the table's columns - this is because the extent of similarity in the SNP clusters around a locus across the lines is an indication of shared ancestry or origin between the lines. By default our visualization system orders the lines based on a dendrogram tree provided by the user. This tree structure visualizes every parental line as a leaf node in a tree, and it clusters lines based on evolutionary distance. This arrangement can help researchers in studying the SNPs of a particular subset of the lines that are similar to each other.
160
+
161
+ The other ordering mechanism consists of heatmaps of different phenotypic traits for each of the parental lines. The trait map contains one column for each trait (e.g., seed size or protein content), with colouring based on a heatmap of the range of values for that trait. The Viridis color palette is used for the heatmaps for easier distinction between the lines [71]. The lines can be ordered by sorting them based on any of the column values, which places lines with similar phenotypes closer to each other. This feature is explored further in the interaction design section below.
162
+
163
+ § 6.3 GENE LOCI PANEL
164
+
165
+ SNPs in the main view are ordered from left to right based on their genetic position in the genome. However, because SNPs may be unevenly distributed across the genome, the position of a SNP's column does not match its physical location in the genome. This makes it difficult to visually indicate additional information regarding the genetic loci of the SNPs. To address this problem we provide a visual map that spans the entire genomic scale of investigation underneath the SNP view and connects every SNP to its actual physical location in the genome, as shown in Figure 4. Additional datasets like gene density maps or gene markers are then placed underneath this physical map so that they correspond to the locations of the SNPs. For the whole-genome view, this panel is hidden as the density of lines makes it difficult to discern positional information. In the chromosome view, the panel is used to show a simple scale indicating the actual physical location of the SNPs in terms of number of base pairs, and can be used to highlight additional datasets like gene density tracks. In the region view, the panel shows individual genes located near the loci of the SNPs. The genes are visualized as horizontal arrows, with the direction of the arrow indicating the orientation of the gene (see Figure 4). Clicking on a gene arrow shows the gene ID and additional information (e.g., the function of the gene or the protein it encodes).
166
+
167
+ § 7 INTERACTION FEATURE DESIGN
168
+
169
+ Here we outline the different interactive design features in our visualization that address the six major requirements.
170
+
171
+ § 7.1 DYNAMIC ORDERING OF LINES
172
+
173
+ Users are given several options to order the different parental/variety lines. By default, lines are ordered according to a dendrogram tree based on an input file provided by the user. This mechanism clusters lines that are evolutionarily similar. If a dendrogram file is not available, the lines are sorted automatically based on their SNP similarity with the reference line. This approach ensures that matching SNP clusters are pushed to the top of the main view while the lines that differ the most are pushed towards the bottom. Additionally, users are given the option of manually selecting a subset of lines through a multi-select dropdown list; the order of lines in this case is determined by the order in which the lines are selected. This gives researchers the option to investigate specific patterns that they might have observed in the dataset in greater detail by comparing only those lines.
174
+
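+ The sketch below shows one way the fallback ordering could be computed when no dendrogram file is supplied, scoring each line by its number of (at least partially) matching markers against the reference; all names and the exact scoring rule are assumptions.
+
+ ```typescript
+ type AllelePair = [string, string] | null;
+
+ // Hypothetical sketch: order lines by how many markers match the reference,
+ // so similar lines rise to the top of the main view.
+ function orderBySimilarity(
+   lineNames: string[],
+   genotypes: Map<string, AllelePair[]>,
+   reference: AllelePair[],
+ ): string[] {
+   const score = (line: string): number =>
+     (genotypes.get(line) ?? []).reduce((n, pair, i) => {
+       const ref = reference[i];
+       const match =
+         pair !== null && ref !== null && pair.some(a => ref.includes(a));
+       return match ? n + 1 : n;
+     }, 0);
+   return [...lineNames].sort((a, b) => score(b) - score(a));
+ }
+ ```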
175
+ If a file containing phenotype trait values is provided by the user, then the lines can also be ordered based on these traits. Users first select the traits they are interested in mapping from all available traits in the file (the order of selection determines the placement of the trait columns from left to right). Users can then order the lines based on a specific trait value; this ordering can also be changed by clicking the column head of any phenotype trait in the trait map.
176
+
177
+ § 7.2 NAVIGATING MULTIPLE GENOMIC RESOLUTIONS
178
+
179
+ When investigating large-scale datasets, users need to be able to navigate quickly while still maintaining contextual information regarding their position in the dataset. We provide location context through the three coordinated views described above (genome, chromosome, region). Navigating from genome to chromosome involves clicking on the desired chromosome, and then selecting a region involves positioning the viewfinder window. The viewfinder is translucent by default to ensure that it does not occlude the view of the chromosome, and has a darker border at the bottom indicating the region that has been selected (Figure 3). The user can drag the viewfinder and adjust its left and right extents with the mouse.
180
+
181
+ In scenarios where SNP density is high in a chromosome, it might be difficult to use the viewfinder to zoom into a small enough region due to the limited size of the window. To address this issue, a navigation panel is available in the region view to aid users in controlling the region of interest. It contains two input boxes to enter genomic start and end positions (base-pair locations) from the start of the chromosome. This allows researchers to look at all SNPs near a specific gene locus (e.g., one that corresponds to a particular protein). The panel also includes navigation buttons that let the user move the region in small incremental steps, and a slider provides additional control over the zoom level of the region view. As the user interacts with the navigation panel, the corresponding changes are reflected in the viewfinder in the chromosome view, maintaining location awareness between the views and from the table to the genome.
182
+
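+ A minimal sketch of how the navigation panel and viewfinder might share one region state, assuming a simple listener pattern; all names here are illustrative rather than the system's actual API.
+
+ ```typescript
+ // Illustrative shared navigation state for the region view and viewfinder.
+ interface RegionState { chromosome: string; startBp: number; endBp: number; }
+
+ class RegionNavigator {
+   private listeners: Array<(r: RegionState) => void> = [];
+   constructor(private state: RegionState) {}
+
+   onChange(fn: (r: RegionState) => void) { this.listeners.push(fn); }
+
+   // The input boxes set the region to explicit base-pair positions.
+   setRange(startBp: number, endBp: number) {
+     this.state = { ...this.state, startBp, endBp };
+     this.listeners.forEach(fn => fn(this.state)); // viewfinder redraws too
+   }
+
+   // The step buttons shift the region by a fraction of its current width.
+   step(direction: -1 | 1, fraction = 0.25) {
+     const delta = direction * (this.state.endBp - this.state.startBp) * fraction;
+     this.setRange(this.state.startBp + delta, this.state.endBp + delta);
+   }
+ }
+ ```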
183
+ § 7.3 DYNAMIC COLOR SCHEME
184
+
185
+ Apart from the four basic color schemes discussed above, we also offer users a novel way to compare a small subset of lines through a cascading waterfall color pattern. When users manually select fewer than ten varieties for comparison, the visualization changes into a dynamic color scheme for visualizing accumulating differences, instead of the standard blue and red scheme for matches and mismatches. In the cascade color scheme, every line is compared with all the lines above it instead of just the one reference line at the top. To encode the similarity pattern, every line is first assigned a unique color. Then all the SNPs in that line are compared with
186
+
187
+ the lines vertically above it, starting from the top. If a SNP marker matches with any of the lines above it, the color of the topmost matching line is assigned to the SNP. If the marker doesn't match any of the lines above it, it is considered novel and is painted in the unique color assigned to the line. This ensures a cascading waterfall style of coloring, such that all the SNPs in the first line have the same color because there are no SNPs above them. In the second line, all the SNPs that match the first line are painted in the color of the first line, and all the SNPs that do not match are painted in the color of the second line. This flow continues until the final line is a mixture of the colors of all the lines above it, depending on the precedence of the SNP markers present in it. This offers researchers insight into the origin of a unique cluster of SNPs. However, this color scheme only works for ten lines or fewer, due to the limitations on the number of colours that users can reliably distinguish.
188
+
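+ A minimal sketch of the cascade colouring as described above, assuming each selected line has already been assigned a unique colour in `palette`; the names are illustrative.
+
+ ```typescript
+ type AllelePair = [string, string] | null;
+
+ // Sketch of the cascade colouring for a small set of selected lines (<10).
+ // `rows` is indexed [line][snp]; returns one colour per cell in the same layout.
+ function cascadeColours(rows: AllelePair[][], palette: string[]): string[][] {
+   return rows.map((row, lineIdx) =>
+     row.map((pair, snpIdx) => {
+       if (pair === null) return "white"; // missing data
+       // Find the topmost earlier line with the same allele pair at this SNP.
+       for (let above = 0; above < lineIdx; above++) {
+         const other = rows[above][snpIdx];
+         if (other !== null && other[0] === pair[0] && other[1] === pair[1]) {
+           return palette[above];
+         }
+       }
+       return palette[lineIdx]; // novel marker: this line's own colour
+     }),
+   );
+ }
+ ```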
189
+
190
+
191
+ Figure 5: Split View demonstrating comparison of SNP clusters across two different genomic regions in the same chromosome. Each view has a SNP column pinned to the left and the line NAM 25 is selected and highlighted across both views.
192
+
193
+ § 7.4 ROW AND COLUMN HIGHLIGHTING
194
+
195
+ In large datasets with many rows and columns, it can be hard for users to navigate across the SNP view to identify a specific SNP marker and its allele annotations. To support users with this task, we offer a row highlighting option that lets users highlight a specific row by clicking on the line name (this draws white guidelines across the SNP view as shown in Figure 5). To further aid users with this problem, we also offer a tooltip feature that shows all the details such as the corresponding line name and the SNP index of a specific marker when the mouse is hovered over it. The guidelines and tooltips greatly improve the user's ability to trace SNPs along a row - and the highlighting can also act as a temporary landmark that allows visual inspection of rows above and below the guidelines.
196
+
197
+ Another issue that can occur during visual comparison of two SNP columns is the distance between their loci. If the SNPs are far enough apart they cannot be viewed in the region window (without zooming out to the point where too much detail is lost). To solve this problem we introduced a column pinning mechanism that lets users pin a SNP column to the beginning of the region view. The selected SNP column is also highlighted in the region view by changing the color of the allele annotations on each marker in the column from white to black. Users can then pan across the chromosome to a different location and compare the SNP columns in that region with the pinned SNP column (see Figure 5, in which two SNP columns have been pinned in the region view). An additional advantage of this feature is that it also lets users temporarily mark and highlight a SNP column that might have caught their interest for further investigation.
198
+
199
+ § 7.5 MULTI-REGION ANALYSIS
200
+
201
+ Although SNPs are inherited in clusters around a specific gene locus, several SNP clusters across the genome can be related to each other due to gene duplication or dependency. This is a common issue for polyploid plants, which may have several duplicated copies of the same gene. The regulation and expression of these genes can vary based on the SNP clusters within or around them, which means that researchers often have to jump between multiple regions in the genome to compare these SNP clusters. While the column-pinning feature discussed above helps in this situation to an extent, it only lets users compare a single column at a time, meaning they lose context of the neighbouring SNPs in the cluster. To address this problem, we implemented a split-screen view that splits the region view into two parts, with each part focusing on a different region. All of the other features discussed above, such as row and column highlighting, carry over into these split views. This gives users the option to pin two different SNP columns and compare their neighbouring clusters in a side-by-side view (as shown in Figure 5).
202
+
203
+ § 7.6 LIGHTWEIGHT COMPARISON PREVIEW
204
+
205
+ Based on the feedback collected from our collaborators, one of the most commonly used selection features is the ability to switch the reference row at the top and compare a different line with the other lines. In certain datasets like lentils (Lens culinaris), the number of lines being studied is quite high due to the large number of possible cultivars and variants. This is a general problem in most food crops, as they are cultivated across the world in a variety of environmental conditions with different outcomes due to the selective breeding process. This means breeders often need a way to carry out lightweight row comparisons across the lines 'on the fly' without switching the reference line of the entire visualization. To address this issue we offer a preview mode along with the row highlighting feature. Users can first select a row in the SNP view by clicking its line name, which highlights the entire row with white guide lines. Users can then hover their mouse over any other row to perform a quick comparison between the two rows, which updates the coloring of the hovered row. The coloring switches back to its default state once the mouse leaves the row. This way researchers can quickly compare selected rows without having to switch the main reference row at the top and update the whole SNP view. Similarly, we also offer previews for the interactions in the trait map that re-order the table: users can hover their mouse over the column head of a trait column to see a quick preview, in a small floating window next to the mouse cursor, of what the SNP view would look like if the lines were ordered based on that phenotype. The preview goes away as soon as the mouse is moved away from the column head.
206
+
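+ A sketch of the hover-preview logic under our assumptions: only the hovered row is recoloured against the selected row, the global reference is untouched, and `paintRow` and the genotype lookup are hypothetical application helpers.
+
+ ```typescript
+ type AllelePair = [string, string] | null;
+
+ // Illustrative hover handler for the lightweight comparison preview.
+ function onRowHover(
+   hovered: string,
+   selected: string,
+   genotypes: Map<string, AllelePair[]>,
+   paintRow: (line: string, colours: string[]) => void,
+ ): void {
+   const sel = genotypes.get(selected) ?? [];
+   const colours = (genotypes.get(hovered) ?? []).map((pair, i) => {
+     const ref = sel[i];
+     if (pair === null || ref == null) return "white"; // missing data
+     return pair.some(a => ref.includes(a)) ? "blue" : "red";
+   });
+   paintRow(hovered, colours); // default colours are restored on mouse-out
+ }
+ ```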
207
+
208
+
209
+ Figure 6: A preview of the SNP view is shown in the top left corner above the phenotype trait map when the user hovers their mouse over a particular phenotype. The preview shows what the SNP view would look like if the lines were sorted using that particular phenotype.
210
+
211
+ § 7.7 REVISITATION SUPPORT THROUGH SNAPSHOTS
212
+
213
+ The exploratory nature of our visualization tool can increase the spatial cognition demands on users, who interact with the dataset at multiple resolutions and under complex filtering scenarios. This problem becomes even more evident when users have to rely on context switching between different viewpoints for visual comparison when looking at SNP markers in different regions. To address this issue, our system maintains an in-memory store of the sequence of actions that led to the current state of the visualization in the interface. Each of these memory states is stored along with a thumbnail image (snapshot) of the visualization at that instant. A floating snapshot panel, minimized by default, is available for users to pull up and explore prior states of the visualization. Users can then click on any of these snapshots to go back to the state of the visualization at that prior point in time. This provides users with a lightweight history tracking mechanism that can help them retrace their steps during data exploration. The snapshots are automatically tagged with a note that indicates the chromosome name and the start and end position of the region view. This note can be edited by users to also include other points of interest if needed. The snapshot feature also provides users a novel way of interacting with the system: creating snapshots of multiple regions of interest and going back and forth between them for quick visual lookup and comparison. The system includes mechanisms for automatic creation of snapshots (e.g., if the user stays in a particular configuration for 30 seconds) as well as manual creation (through a keyboard shortcut).
214
+
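+ A minimal sketch of the in-memory snapshot store described above; the state type, thumbnail format, and note tagging are our assumptions.
+
+ ```typescript
+ // Illustrative in-memory snapshot store: each entry pairs a serialized
+ // visualization state with a thumbnail and an editable note.
+ interface Snapshot<State> {
+   state: State;      // everything needed to restore the view
+   thumbnail: string; // data-URL image of the view at capture time
+   note: string;      // auto-tagged "chromosome / start-end", user-editable
+   takenAt: Date;
+ }
+
+ class SnapshotStore<State> {
+   private snapshots: Snapshot<State>[] = [];
+
+   capture(state: State, thumbnail: string, note: string): void {
+     this.snapshots.push({ state, thumbnail, note, takenAt: new Date() });
+   }
+
+   list(): readonly Snapshot<State>[] { return this.snapshots; }
+
+   // Clicking a thumbnail in the panel restores that prior state.
+   restore(index: number): State { return this.snapshots[index].state; }
+ }
+ ```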
215
+ § 8 ITERATIVE REFINEMENT, TESTING, AND CURRENT USE
216
+
217
+ The design of our visualization system was iteratively refined over a period of two years through multiple rounds of feedback from our research collaborators as they used our system to explore their different datasets. During this period our system was also stress tested with larger datasets (e.g., 29,000 SNP markers across 1000 lines of barley, and the first 10,000 SNP markers across 328 lentil varieties). An example visualization of the large lentil dataset is shown in Figure 8; this demonstrates the usability of our system even with very large tables - even at this scale, the visualization shows genome-level patterns such as lines with an extensive set of missing markers (Figure 8(b)) or SNP clusters that are completely different across the majority of lines (Figure 8(a)). Our system is also in use by a group of plant breeders to showcase the diversity of agronomically important traits among a population of Canola founder lines, and has been adapted for use with several other use cases including exploration of genotypes for Blackleg, a common oilseed pathogen. Our tool is open-source and freely available [repository removed for review], and has been integrated into a major North American pulse crop database to visualize the differences among their various cultivars.
218
+
219
+ § 9 DISCUSSION AND FUTURE WORK
220
+
221
+ In the following sections we consider the relationship of our requirements and techniques to previous work on table visualization and visual comparison, discuss ways that our techniques can be applied to datasets outside the domain of genomics, and outline a set of directions for future work.
222
+
223
+ § 9.1 REQUIREMENTS AND TECHNIQUES IN CONTEXT OF PREVIOUS WORK
224
+
225
+ Working in real-world collaboration with genomic researchers and plant breeders means that our SNP-haplotype viewer implements some interaction techniques that are shared with what has been seen in previous systems - for example, two of our requirements match those identified in Ripken's interviews with biologists [81], although Ripken's research took a broader view and our requirements are thus more focused on the comparison tasks themselves; similarly, several systems have provided techniques for clustering, sorting, and manual row rearrangement (e.g., [46, 52, 66, 94, 99]).
226
+
227
+ However, several techniques and features are novel (or have novel adaptations to fit the scenario of large-scale SNP tables). First, our techniques for row and column comparisons are an advance in terms of user effort: the lightweight row comparisons, column pinning, split-screen views, and visual previews of column sorting substantially reduce the number of steps needed to carry out a visual comparison. Reducing effort in exploratory visual analysis is critical: although there may be ways to achieve the comparison using standard techniques, it is important to provide low-effort mechanisms so that users can follow exploratory paths without needing to think about multiple steps in the UI. Our goals here are similar to those of Tominski's CompaRing system [88], although his approach used juxtaposition whereas ours uses an explicit encoding of difference. Second, techniques such as providing three persistent zoom views that follow the structure of the genome, and visual tracks to indicate genomic location as well as gene commonality, assist the user in maintaining location awareness (since table locations are not well matched to actual locations). Third, providing snapshots to track, compare, share, and revisit table configurations is an extension of the work done previously on visualization provenance (e.g., [34, 40]) that broadens the focus from communication and storytelling to supporting the basic mechanics and processes of navigating through the complex parameter space of table configuration.
228
+
229
+
230
+
231
+ Figure 7: A preview of the lightweight row comparison feature: when users hover their mouse over a row after highlighting a different row, the hovered row is highlighted with a single guide line and its colouring is updated to reflect similarity with the other highlighted row instead of the reference line at the top.
232
+
233
+
234
+
235
+ Figure 8: SNP View visualizing the similarities between 10K SNPs in a reference line across 328 lentil varieties. a) A cluster of SNPs that don't match the reference line across most of the varieties. b) One lentil variety appears as an almost-white line across the entire region, indicating missing data across its entirety due to a possible sequencing error.
236
+
237
+
238
+
239
+ Figure 9: An example of the snapshot feature that lets users store their interaction history as a series of snapshots, with thumbnails showing the state of the visualization when each snapshot was taken. The panel is minimized by default but can be opened up as seen here; it contains three snapshots.
240
+
241
+ § 9.2 GENERALIZING TO OTHER TYPES OF WIDE DATASETS
242
+
243
+ Although we have focused on the domain of genomic research and the specific needs of our collaborators, we believe that several of our requirements and interaction techniques will be applicable to other domains as well. Column-based comparison tools will be useful whenever the data has columnar dependencies or links between columns. For example, if columns are used for temporal data, there may be cyclic relationships that need to be brought closer together for investigation (e.g., natural cycles such as seasons, or links created by external phenomena such as temperature data during sunspot years). Flexible row comparison mechanisms will also be important in any dataset where there are many entities, and where comparisons need to be made between rows as well as to an obvious reference row. For example, a dataset of baseball players (e.g., as was used in the Table Lens [79]) does not have a single clear reference, and it is likely that many different pairs of players could be compared for a given task. The idea of multiple flexible encodings can also be useful in other datasets - these allow users to cycle quickly through different perspectives on the comparison, gaining a broader view of differences. In particular, our 'cascading differences' encoding could be useful in showing the accumulation of changes when rows represent successive versions of a complex entity (e.g., a software code base). Finally, a configuration-snapshot mechanism should also be widely applicable in any visualization where users change organization frequently, and where users need to revisit recent configurations that they have previously explored.
244
+
245
+ § 9.3 FUTURE RESEARCH DIRECTIONS
246
+
247
+ Our future research work will involve activities to improve the SNP viewer as part of our ongoing collaboration, and new projects to explore broader visualization issues raised by our experience. In the SNP visualization system, we will add algorithms for pattern mining in the table data (e.g., [9, 21, 55, 75]) and tools for comparing these patterns to external evidence such as GWAS results. We will also add support for additional context tracks (e.g., to provide GWAS results or other gene-centric measurements such as expression level) - aligning GWAS results such as a Manhattan plot with the table visualization can provide a bridge between algorithmic approaches and visual analysis [72], and gives users a set of starting points for their exploration. We also plan to extend the interactions available with the configuration-snapshot tool (e.g., to provide explicit encoding of differences between two configurations, as in [11, 69, 90]).
248
+
249
+ In the broader visualization context, our initial goal is to test our new interaction techniques in other types of wide tabular datasets, and broaden our interaction requirements to encompass new tasks and comparison activities: for example, we will work with datasets that use continuous rather than discrete values (requiring new encodings for the table), we will test our tools with large-scale time series that contain cyclic column dependencies, and we will add additional techniques to work with table subsets [33, 81]. We also plan to follow up on work that has looked at the details of visual comparisons (e.g., [47, 53, 96]) and assess the components of visual comparison in table visualizations (and support for these components) at a more fine-grained level.
250
+
251
+ § 10 CONCLUSION
252
+
253
+ Analytics tasks in large-scale table visualizations involve comparisons and identification of patterns across rows and columns, but these tasks become more difficult when tables are large - as is the case for SNP analyses in genomic research. Current SNP visualizations are limited in their support for complex analytic tasks in wide-scale tables - both because they do not focus on interaction, and because they do not address issues raised by tables with thousands or tens of thousands of columns. In collaboration with genomic researchers and plant breeders, we have identified six new interaction requirements that will help to support visual analytics tasks with wide-scale SNP datasets. The requirements cover needs for flexible arrangements of the table, lightweight comparisons of both rows and columns, flexible visual encodings, and the ability to save table configurations. We developed a new SNP-haplotype viewer that implements interaction techniques for each of our proposed requirements; the tool has been in continuous and successful use by our collaborators over several years. Our work contributes both a better understanding of the needs for large-scale visual analysis in table visualizations, and specific interaction techniques that can address those needs.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/ot-dY9S1U-/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,415 @@
1
+ # Automatically Grading Rey-Osterrieth Complex Figure Tests using Sketch Recognition
2
+
3
+ Category: Research
4
+
5
+ ## Abstract
6
+
7
+ The Rey-Osterrieth Complex Figure Test (ROCF) is among the most widely used neuropsychological examinations for analyzing visuospatial constructional ability and memory skills, but grading the patient's sketched complex figure is subjective in nature and can be time consuming. With increasing demand for tools to help detect cognitive decline, there is a need to leverage sketch recognition research to assist in detecting fine details within an ROCF's inherently abstract figure. We present a series of recognition algorithms to detect all 18 official ROCF details using a top-down sub-shape recognition approach. This automated grader transforms a sketch into an undirected graph, identifies and isolates detail sub-shapes, and validates sub-shape neatness via a point-density matrix template matcher. Experimental results from hand-drawn ROCFs confirm that our approach can automatically grade ROCF tests on the same 18-item sketch detail checklist used by neuropsychologists, with a small margin of error.
8
+
9
+ Index Terms: Applied computing—Health care information systems; Human-centered computing—Gestural input; Human-centered computing—Mobile devices
10
+
11
+ ## 1 INTRODUCTION
12
+
13
+ ### 1.1 Rey-Osterrieth Complex Figures
14
+
15
+ The Rey-Osterrieth Complex Figure Test (ROCF), developed by Rey [41] in 1941 and refined by Osterrieth [37] in 1944, is a neuropsychological test that evaluates several cognitive functions including visuospatial abilities, memory, attention, planning, working memory and executive functions [28, 46]. The ROCF is characterized as a complex cognitive task [45], and is known in the field of neuropsychology as a useful metric for frontal lobe function [44]. A participant is asked to copy the figure onto a piece of paper, then draw it again two more times from memory. The shape is specifically designed to be abstract so that participants cannot associate it with any common object or concept. A clinician then grades all three sketches on whether 18 separate sub-shapes (henceforth called "details") exist and, if they do, how neatly they were drawn. A clinician grants up to 2 points for each detail, for a total of 36 points, with partial credit given to distorted or misplaced shapes. Scoring the overall neatness of individual details is subjective and generally dependent on an expert's intuition, especially for shapes that exist but might be drawn poorly. This means that two different ROCF graders can produce two different scores. The proliferation of digital sketch recognition techniques and a push to digitize clinical neuropsychological examinations motivated our creation of an automated ROCF that can grade itself using the existing grading scheme.
16
+
17
+ From a digital sketch recognition standpoint, automatically grading an ROCF is non-trivial due to the complexity of the figure and test conditions resulting in inherently fuzzy sketch data. No two completed sketches are drawn in the same order, and very frequently shapes are drawn using portions from other shapes [13]. Bottom-up approaches tend to classify shapes as soon as their constraints are met, but shapes in an ROCF may in fact be only part of a detail or may end up as a portion of an entirely different one. A top-down approach not only more closely resembles a human grading an ROCF, but it also simplifies the recognition process by not needing to re-classify a shape at every step of the hierarchical recognition process.
18
+
19
+ ![01963dfc-98bd-74dc-a111-d8c308e04f59_0_926_408_717_462_0.jpg](images/01963dfc-98bd-74dc-a111-d8c308e04f59_0_926_408_717_462_0.jpg)
20
+
21
+ Figure 1: Our automated grader highlighting Details 2, 3, and 6 in red, green, and blue respectively.
22
+
23
+ ### 1.2 Contribution
24
+
25
+ Significant research has been produced in analyzing the reliability of current rubrics [17, 32, 33, 48]. Automating the process started over two decades ago [13], but even recent surveys have cited a lack of contributions towards grading all 18 details at once. Moetsum et al., in research published in 2020 [34], specify that "due to the unconstrained nature, these drawings, localization and segmentation of individual scoring sections become a highly challenging task" and that existing work localizes only a "small subset of ROCF scoring sections".
26
+
27
+ Whereas previous efforts in automatically grading the ROCF can identify only a subset of the complex figure's details, we present the first fully automated ROCF grader that does not require user input to point to baseline shapes from which to begin recognition. Our contribution widely expands on Field's truss recognition technique [18] by introducing several graph traversal algorithms in order to isolate specific sub-shapes or regions from a given sketch. In addition to triangles, we also recognize squares, parallel lines, crosses, straight horizontal and vertical lines, and diamonds, as well as shapes specific to the ROCF such as detail 6 (Cross with Square), detail 14 (Circle and 3 Dots), and detail 18 (Square with Line). Our system uses a multi-step recognition process that can identify shapes by crawling the resulting graph, by using template-matching shape recognition, or by a combination of both, resulting in a more accurate and robust sub-shape recognition system for ROCF grading. Many of our recognition algorithms utilize well-known graph traversal and optimization algorithms (such as Dijkstra's Shortest Path [15] and Depth-First Search [47]). Our system represents the first fully-automated ROCF grader that recognizes the existence or absence of each of the 18 details and checks individual shapes for distortion.
28
+
29
+ To test our recognizer's performance, the system graded 141 digitized Rey-Osterrieth tests from participants, and we compare how closely our system's grades correlate with those of two expert graders. The experimental results demonstrate that the proposed approach is successful in identifying the existence of sub-shapes within a large abstract shape.
30
+
31
+ ![01963dfc-98bd-74dc-a111-d8c308e04f59_1_149_154_721_427_0.jpg](images/01963dfc-98bd-74dc-a111-d8c308e04f59_1_149_154_721_427_0.jpg)
32
+
33
+ Figure 2: A Rey-Osterrieth Complex Figure Test, with all 18 Details listed.
34
+
35
+ ## 2 RELATED WORK
36
+
37
+ ### 2.1 Sketch Recognition Systems
38
+
39
+ Digital sketch recognition techniques favor bottom-up approaches that employ computational geometry to classify shapes [9, 11, 22, 27, 42]. Hierarchical sketch recognition systems such as LADDER [21], SketchREAD [3], ChemInk [38] and Mechanix [18] generate composite figures by re-classifying shapes into more complex shapes at every step of the sketching process. In early bottom-up sketching approaches, "steps" were typically separated by a UI button that explicitly instructed the system to create a recognition step. More modern systems, however, automatically separate "steps" by single-stroke actions, usually triggered when the user lifts their pen. This allows the system to continuously check the sketch to see whether the user is drawing a composite sketch made up of shape primitives.
40
+
41
+ Bottom-up recognition of composite shapes using geometric primitives is especially popular in digital recognition of hand-drawn diagrams [1, 2, 6, 8, 10, 26, 53]. In these projects researchers seek to digitize hand-drawn flowchart and system design diagrams, interpreting diagram structure, flow of information, and preservation of variable and state checks through digital sketch recognition techniques [2, 26]. Circles, rectangles, diamonds, rhombi, and directional arrows [9] are used in diagrams to denote specific system or algorithm states or commands [1, 6]. Indeed, these projects originally served as the basis of Auto Rey-O's recognition due to their emphasis on recognizing primitives as part of a larger composite system of shapes. However, a chief difference between these projects and an ROCF sketch is that the ROCF by design has a large number of overlapping shapes, and specific details can be as granular as a single line within a specific area of other shapes. Diagrams and flowcharts, by contrast, are required to have clear spacing between their components, and recognizing missing or distorted shapes is not a focus of these automated systems. While some form of composite figure recognition is necessary for automatically grading the ROCF, a top-down approach as explored in other systems [23] proved ultimately the most viable for Auto Rey-O.
42
+
43
+ Corner detection also helps characterize digital shapes, with lightweight systems such as ShortStraw [55] and iStraw [56] being among the most efficient. Auto Rey-O uses the open-source ShortStraw library in its recognition of corners and endpoints to generate the vertices during the graph creation stage. This is used in tandem with line-intersection algorithms to segment the sketch lines such that individual shapes can be recognized. A frequent use case of this is recognizing details 4 and 6 of the ROCF (see Fig. 2). A user typically draws a single long line at once across the ROCF shape, so we are unable to use individual stroke order to recognize details; instead, we need the segmentation that a line-intersection algorithm combined with ShortStraw is able to provide.
44
+
45
+ ![01963dfc-98bd-74dc-a111-d8c308e04f59_1_922_148_726_419_0.jpg](images/01963dfc-98bd-74dc-a111-d8c308e04f59_1_922_148_726_419_0.jpg)
46
+
47
+ Figure 3: Auto Rey-O's recognition hierarchy, designed to have as few dependencies as possible.
48
+
49
+ ### 2.2 Template Matching Shape Classification Systems
50
+
51
+ The "Dollar" family of recognition systems [4, 5, 50, 51, 54] remains among the most well known single- and multi-stroke gesture classification algorithms, and serves as the basis for our own template-matching recognition algorithm presented as part of our system. While most techniques rely on stroke order, geometric properties, and physical characteristics such as speed, acceleration, etc., the "\$P+" recognizer calculates similarity via "point cloud" approximation [49]. A point cloud is generated by resampling both a template shape and an input shape with the same resampling parameters, overlaying the input shape on top of the template sketch, matching its shape, centering, and orientation as closely as possible, and iterating through every point to find the closest match between template points and input points. The distances between the points that are closest together are added cumulatively and presented as the overall "distance" metric between the template shape and the input shape. The "\$P+" recognizer returns the closest template match, identifying what kind of shape the user has drawn. This is especially flexible when the application in question necessitates recognition that is agnostic to stroke order. Our technique for shape recognition as described in Section 3.4 is based on the "\$P+" recognizer, particularly its technique of calculating a "distance".
52
+
53
+ Our technique differs, however, in that rather than calculating distance via point-for-point comparison, we generate a fixed-resolution matrix of point density for both the template and the input shapes and calculate distance between cells of both matrices. This allows us to build a more accurate grader for shape neatness. Indeed, "\$P+" only focuses on finding the closest match to a template, since it is a shape classifier; its internal distance metric does not perform well as a gauge of whether an input shape is drawn poorly relative to its provided "ideal" template shape.
54
+
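+ The sketch below illustrates this point-density comparison under stated assumptions: a 16x16 grid over the shape's bounding box, densities normalized to sum to 1, and cell-wise absolute difference as the distance. The real system's grid size and normalization may differ.
+
+ ```typescript
+ interface Point { x: number; y: number; }
+
+ // Rasterize a shape's sampled points into an n-by-n density matrix over its
+ // bounding box, normalized so all cell densities sum to 1.
+ function densityMatrix(points: Point[], n = 16): number[][] {
+   const xs = points.map(p => p.x), ys = points.map(p => p.y);
+   const minX = Math.min(...xs), maxX = Math.max(...xs);
+   const minY = Math.min(...ys), maxY = Math.max(...ys);
+   const m = Array.from({ length: n }, () => new Array<number>(n).fill(0));
+   for (const p of points) {
+     const col = Math.min(n - 1, Math.floor(((p.x - minX) / (maxX - minX || 1)) * n));
+     const row = Math.min(n - 1, Math.floor(((p.y - minY) / (maxY - minY || 1)) * n));
+     m[row][col] += 1 / points.length;
+   }
+   return m;
+ }
+
+ // Cell-wise distance between template and input densities; lower is neater.
+ function densityDistance(a: number[][], b: number[][]): number {
+   let d = 0;
+   for (let i = 0; i < a.length; i++)
+     for (let j = 0; j < a.length; j++) d += Math.abs(a[i][j] - b[i][j]);
+   return d;
+ }
+ ```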
55
+ ### 2.3 Hierarchical Sketch Recognition
56
+
57
+ Hierarchical sketch recognition approaches generally check drawn lines to see if they meet requirements for a composite shape [29, 31]. Layered hierarchical systems for graph creation have been applied to both bottom-up and top-down systems [23], and involve the decomposition of a drawn sketch into specific broad categories by analyzing
58
+
59
+ ![01963dfc-98bd-74dc-a111-d8c308e04f59_2_142_98_1508_861_0.jpg](images/01963dfc-98bd-74dc-a111-d8c308e04f59_2_142_98_1508_861_0.jpg)
60
+
61
+ Figure 4: Description of ROCF sub-shape recognition system. Stages 2 and 3 shown in the figure are repeated for each of the 18 details of a Rey-Osterrieth complex figure.
62
+
63
+ sub-graphs [14, 30, 57]. This is typically used in the field of computer vision to help decompose a system into primitive parts and represent them as a tiered graph. We envisioned a similar hierarchical tiered approach to the recognition of an ROCF due to the nature of the drawn details. To draw detail 10 in an ROCF, for example, the user needs to have drawn both details 2 and 3 to be able to connect the line properly (see Figure 2). Similarly, detail 14 requires the existence of detail 13 to receive full marks for both correct placement and shape neatness. Rather than represent the entirety of a sub-shape as a single vertex in a graph, however, we envisioned the vertices of a graph being represented by intersecting lines and endpoints, and applied the concepts behind sub-graph composite object recognition to identify the ROCF details themselves. The cited foundational work on graph implementations to supplement computer vision and object recognition informed our own approach to automatically grade ROCFs using a graph itself as the vehicle for tiered object recognition.
64
+
65
+ ### 2.4 Efforts to Automate Neuropsychological Examination Analysis
66
+
67
+ Efforts to automate other neuropsychological tests have renewed interest in sketch sub-object detection [7, 16, 35, 36]. Object recognition ranges across various neuropsychological examinations, including clocks [24, 25] and general handwriting tasks [20, 39]. However, whereas recognized objects for these tests tend to have heavily distinct characteristics, ROCF details are mostly composed of simple primitives that appear frequently. For example, detail 5 shown in Figure 2 is defined not as just any vertical line, but rather as a specific vertical line within the sketch. Work presented by Prange et al. [40] cites Rey-Osterrieth figures as a motivating factor in the need to identify geometric shapes inside complex abstract figures. Existing attempts to automatically grade ROCFs are semi-automated or do not implement detection of all 18 details [12, 13]. The most recent attempt automates grading using a deep-learning neural network but
68
+
69
+ ![01963dfc-98bd-74dc-a111-d8c308e04f59_2_1011_1111_550_199_0.jpg](images/01963dfc-98bd-74dc-a111-d8c308e04f59_2_1011_1111_550_199_0.jpg)
70
+
71
+ Figure 5: Finding path $p$ for the top horizontal side of detail 2's rectangle. The dotted area on the right indicates the dist radius. In this example $v_{m2} = c_1$, dir = Right, and next dir = Down (see Algorithms 1 and 2).
72
+
73
+ leaves ample room for improvement in individual segment detection, most notably for single-line details [52]. Additionally, our system is able to produce a recognizer from only five training sketches serving as templates, whereas neural networks require far larger amounts of training data to function properly.
74
+
75
+ ## 3 Automated Rey-Osterrieth Complex Figure Test GRADER (AUTO REY-O)
76
+
77
+ Auto Rey-O is an application written on the Universal Windows Platform (UWP) that connects to a Neo SmartPen device via Bluetooth for data collection. The same app is used to perform the fully automated grading process. Auto Rey-O's top-down sub-shape recognizer divides the ROCF grading process into three distinct stages, as shown in Figure 4.
78
+
79
+ ### 3.1 Recognizer Generalizability
80
+
81
+ An important consideration of novel recognition and automation techniques in sketch recognition lies in articulating the generalizability and defining the constraints under which a presented technique aims to perform well.
82
+
83
+ Automating the ROCF motivates a brief discussion of generalizability due to the inherently "hard-coded" nature of its automation. Indeed, the complexity of the ROCF shape coupled with the requirement of detecting very specific lines necessitates a certain specificity in location and shape composition requirements. Some details, for example, are a single horizontal or vertical line, but what matters most is the location of the line relative to other details and its starting and stopping points. It is, in fact, this specificity in requirements that allows our method to recognize all 18 details, in contrast to previous work that only detects a subset of them.
84
+
85
+ At the same time, however, generalizability was taken into account when designing the recognizers that will be described in the following section. Generalizability was considered for two primary reasons. Firstly, our algorithm must be generalizable to recognize details despite a varying list of imperfections including but not limited to: crooked lines, shapes not entirely closed, various lines intersecting at different points, sharp angles accidentally being curved, the same line being drawn over several times, etc. The algorithm must also be able to, within reason, identify as many shapes as possible even in the absence of other shapes. Unless the shapes are directly dependent on each other for recognition, the absence or heavy distortion of one unrelated detail should not prevent the recognition of the other.
86
+
87
+ Secondly, as many recognition techniques as possible should be easily adaptable to other complex figure tests. As per the Compendium of Neuropsychological Examinations [46], seven complex figures are recognized as valid and tested figures for this purpose, and the Rey-Osterrieth Complex Figure is the most popular; new variants with small changes are uncommon. The remaining six figures are: the Taylor Alternate Version, the Modified Taylor Complex Figure, and four Medical College of Georgia Complex Figures. All have similar size and complexity, and all are combinations of straight lines, triangles, and simple geometric shapes. All contain a "detail 2": a large rectangle that serves as an anchor for the rest of the shapes. Our system was designed to be adaptable to recognize the 18 details of the remaining six complex figure tests by applying variations on the pathfinding algorithms in Table 1. Our three-stage method detailed in Fig. 4 can be adapted for all six remaining complex figure tests, so to that extent we consider this approach generalizable to other complex figure tests of this type. Location heuristics need to be tailored for each detail, since the rules themselves are inherently specific and unique to the ROCF. We believe our three-step approach can be usable for any hierarchical sketch recognition problem involving complex figures where multiple sub-shapes must be discretely recognized but may share any number of lines.
88
+
89
+ ### 3.2 Stage 1: Graph Creation
90
+
91
+ The graph creation stage is divided into four distinct steps. First, we prepare the sketch for corner detection by resampling to a uniform interspace length $S$ as follows:
92
+
93
+ $$
94
+ S = \frac{\sqrt{{\left( {x}_{m} - {x}_{n}\right) }^{2} + {\left( {y}_{m} - {y}_{n}\right) }^{2}}}{c} \tag{1}
95
+ $$
96
+
97
+ where $\left( {{x}_{m},{y}_{m}}\right)$ is the lower-right corner of the sketch, $\left( {{x}_{n},{y}_{n}}\right)$ is the upper-left corner of the sketch, and $c = 40$ is a constant.
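+ 
+ To make this step concrete, the following is a minimal Python sketch of Equation 1 and the resampling walk it parameterizes. The point representation, function names, and interpolation loop are our own illustrative assumptions, not Auto Rey-O's actual implementation:
+ 
+ ```python
+ import math
+ 
+ def interspace_length(points, c=40):
+     # Equation 1: the sketch's bounding-box diagonal divided by c = 40.
+     xs = [p[0] for p in points]
+     ys = [p[1] for p in points]
+     return math.hypot(max(xs) - min(xs), max(ys) - min(ys)) / c
+ 
+ def resample(points, s):
+     # Walk the polyline, emitting a point every s units of arc length.
+     out, dist_left = [points[0]], s
+     for (x0, y0), (x1, y1) in zip(points, points[1:]):
+         seg = math.hypot(x1 - x0, y1 - y0)
+         while seg >= dist_left:
+             t = dist_left / seg
+             x0, y0 = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
+             out.append((x0, y0))
+             seg -= dist_left
+             dist_left = s
+         dist_left -= seg
+     return out
+ ```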
98
+
99
+ The second step utilizes the corner-finding algorithm from Wolin [55] to identify every "corner" in the drawn strokes. To detect line intersections, pairs of straight-line segments are compared: the target segment ${y}_{a} = {a}_{2}x + {a}_{1}$ is checked for intersection against a comparison segment ${y}_{b} = {b}_{2}x + {b}_{1}$ using Equation 2.
100
+
101
+ $$
102
+ \frac{{b}_{1} - {a}_{1}}{{a}_{2} - {b}_{2}} \in \left( {{x}_{1} - \frac{{\left( {0.15}l\right) }^{2}}{1 + {a}_{2}^{2}},{x}_{n} + \frac{{\left( {0.15}l\right) }^{2}}{1 + {a}_{2}^{2}}}\right) \tag{2}
103
+ $$
104
+
105
+ where ${x}_{1}$ and ${x}_{n}$ represent the x values of the lesser and greater vertices of the target segment, respectively, with $l$ being its segment length.
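+ 
+ A small sketch of this intersection test, under our reading of Equation 2 (the function name and parameter passing are hypothetical):
+ 
+ ```python
+ def segments_intersect(a1, a2, b1, b2, x1, xn, l):
+     # Solve a2*x + a1 = b2*x + b1 for the intersection's x-coordinate, then
+     # test it against the target segment's x-extent, widened by a tolerance
+     # that shrinks as the target segment steepens (Equation 2).
+     if a2 == b2:
+         return False  # parallel slopes: no single intersection point
+     x_star = (b1 - a1) / (a2 - b2)
+     tol = (0.15 * l) ** 2 / (1 + a2 ** 2)
+     return x1 - tol < x_star < xn + tol
+ ```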
106
+
107
+ The third step creates the undirected graph $G$, where every vertex $v$ is a line endpoint, corner, or intersection, and every edge $e$ is a drawn line connecting two vertices. Each $v$ contains a point from the sketch, and each $e$ contains the sampled points that connect its two vertices.
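+ 
+ A minimal sketch of this graph representation (the class names and fields are illustrative assumptions, not Auto Rey-O's data structures):
+ 
+ ```python
+ from dataclasses import dataclass, field
+ 
+ @dataclass(eq=False)  # identity-based hashing so vertices can key the adjacency map
+ class Vertex:
+     point: tuple                                 # endpoint, corner, or intersection
+     sampled: list = field(default_factory=list)  # sampled points merged into v
+ 
+ @dataclass(eq=False)
+ class Edge:
+     u: Vertex
+     v: Vertex
+     sampled: list = field(default_factory=list)  # points along the drawn line
+ 
+ class SketchGraph:
+     def __init__(self):
+         self.adj = {}                            # Vertex -> set of neighbouring Vertices
+ 
+     def add_edge(self, e: Edge):
+         self.adj.setdefault(e.u, set()).add(e.v)
+         self.adj.setdefault(e.v, set()).add(e.u)
+ ```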
108
+
109
+ The fourth and final step performs vertex contraction on the created graph. Each vertex is iterated over and checked for nearby vertices that fall below a predetermined distance threshold. If two vertices are joined, their respective sampled points ${s}_{i}$, including points from any edge that falls between them, are combined into a single vertex $v$ containing all the sampled points. The distance measure is the complete-linkage distance used in hierarchical clustering: the maximum of the set in Equation 3.
110
+
111
+ $$
112
+ \left\{ {\left\| {{s}_{i} - {s}_{j}}\right\| }_{2} \mid {s}_{i} \in {v}_{1},{s}_{j} \in {v}_{2}\right\} \tag{3}
113
+ $$
114
+
115
+ This serves both to connect segmented or nearby vertices and to reduce the overall complexity of the graph by eliminating edges. Finally, the vertices are iterated over a second time, checking whether nearby vertices fall below a distance threshold, where the distance is the minimum of the set in Equation 3, the single-linkage distance used in hierarchical clustering. If the distance falls below a predetermined threshold, the vertices are linked through an edge.
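+ 
+ The two passes could look like the following sketch, which operates on plain dictionaries; the threshold values and the restart-after-merge loop are our own simplifications of the process described above:
+ 
+ ```python
+ import itertools
+ import math
+ 
+ def linkage(v1, v2, agg):
+     # Equation 3 over the two vertices' sampled points: agg=max gives the
+     # complete-linkage distance, agg=min the single-linkage distance.
+     return agg(math.dist(p, q) for p in v1["pts"] for q in v2["pts"])
+ 
+ def contract(vertices, adj, merge_t=15.0, link_t=15.0):
+     """vertices: list of {"pts": [(x, y), ...]}; adj: {id(v): set of id(u)}."""
+     # Pass 1: merge any pair whose complete-linkage distance is below merge_t.
+     changed = True
+     while changed:
+         changed = False
+         for v1, v2 in itertools.combinations(vertices, 2):
+             if linkage(v1, v2, max) < merge_t:
+                 v1["pts"] += v2["pts"]            # absorb v2's points into v1
+                 for n in adj.pop(id(v2)):         # rewire v2's edges onto v1
+                     adj[n].discard(id(v2))
+                     if n != id(v1):
+                         adj[n].add(id(v1))
+                         adj[id(v1)].add(n)
+                 vertices.remove(v2)
+                 changed = True
+                 break                             # restart: the pair list is stale
+     # Pass 2: link still-separate vertices whose single-linkage distance is small.
+     for v1, v2 in itertools.combinations(vertices, 2):
+         if id(v2) not in adj[id(v1)] and linkage(v1, v2, min) < link_t:
+             adj[id(v1)].add(id(v2))
+             adj[id(v2)].add(id(v1))
+ ```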
116
+
117
+ Algorithm 1 Detail 2: Largest Rectangle
+ 
+ ---
+ 
+ Input: Sketch graph's vertex adjacency list
+ 
+ Output: Largest rectangle vertices
+ 
+ for all nodes $n$ in Graph do
+ 
+ corner ${c}_{1} =$ SideAndCorner($n$, Right, Down)
+ 
+ corner ${c}_{2} =$ SideAndCorner$\left( {{c}_{1},\text{Down, Left}}\right)$
+ 
+ corner ${c}_{3} =$ SideAndCorner$\left( {{c}_{2},\text{Left, Up}}\right)$
+ 
+ corner ${c}_{4} =$ SideAndCorner$\left( {{c}_{3},\text{Up, Right}}\right)$
+ 
+ if $n = {c}_{4}$ then
+ 
+ add all SideAndCorner $b$ sides to rectangle $q$
+ 
+ end if
+ 
+ end for
+ 
+ return largest $q$
+ 
+ ---
146
+
147
+ ### 3.3 Stage 2: Detail Recognition
148
+
149
+ All 18 ROCF details are recognized by applying a graph traversal algorithm to identify a "shape" within the graph. Each detail has an associated algorithm that is called in the hierarchical order defined by Figure 3. Every algorithm is designed to accommodate inherent graph imperfections from both the graph creation stage and the participant's hand sketch. For example, if an algorithm checks for a horizontal edge, we allow a slope between -0.3 and 0.3, since participants are not expected to produce a perfectly horizontal line.
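+ 
+ For instance, the horizontal-edge check reduces to a slope test like this small sketch (the tuple-based points are an assumption):
+ 
+ ```python
+ def is_roughly_horizontal(p1, p2, tol=0.3):
+     # Treat an edge as horizontal if |slope| <= tol (the 0.3 bound used above).
+     dx, dy = p2[0] - p1[0], p2[1] - p1[1]
+     return dx != 0 and abs(dy / dx) <= tol
+ ```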
150
+
151
+ Every algorithm was designed to strike a balance between leniency, to accommodate the imperfect nature of a hand-drawn shape, and precision, to find the expected shape if it exists. We are unable to thoroughly explain every detail's graph traversal algorithm, but we explain detail 2's in Algorithms 1 and 2, since it is the most sophisticated of our recognizers and best illustrates our graph traversal approach.
152
+
153
+ #### 3.3.1 Recognizing Detail 2
154
+
155
+ Detail 2's recognition algorithm, defined in Algorithms 1 and 2, is a greedy graph traversal algorithm tasked with finding the large rectangle that serves as the anchor for all other shapes in the graph. Our pathfinder finds one rectangle side $b$, and the corner $c$ at its end, at a time. For each side, dir is the intended path direction, and dirnext is the next path direction once we find our corner.
156
+
157
+ We define $N$ as single agents, where every agent $n \in N$ has a start location ${s}_{n} \in G$ and a goal location ${g}_{n} \in G$. The path $p$ of $n$ consists of one side $b$ of our rectangle, where ${g}_{n} = c$. Path $p$ of length $k$ is a sequence of vertices $p = \left\{ {{v}_{0},{v}_{1},{v}_{2},\ldots ,{v}_{k}}\right\}$ such that each consecutive vertex is either in a defined direction dir (up, down, left, right, or a diagonal) or within Euclidean distance $r < 25$. At the end of our sequence, ${v}_{k}$ satisfies one of two conditions:
158
+
159
+ 1. ${v}_{k}$ is connected to a vertex ${v}_{x}$ such that direction of $\left( {{v}_{k},{v}_{x}}\right) =$ dirnext
160
+
161
+ 2. ${v}_{k}$ is connected to other vertices ${v}_{m}$ such that $r$ of $\left( {{v}_{k},{v}_{m}}\right) < 25$.
162
+
163
+ These conditions describe that we have either (1) found a corner, characterized by the start of the next side of the rectangle, or (2) reached an end of our sequence consisting of various vertices very close together. This creates a set of "dead-end" nodes $Q$. For condition 1, ${v}_{k} \in Q$ and $c = {v}_{k}$. For any ${v}_{m}$ that satisfies condition 2, $\left\{ {{v}_{m1},{v}_{m2},\ldots ,{v}_{mn}}\right\} \in Q$. We check all vertices in $Q$ and return the vertex ${v}_{e}$ that satisfies condition 1. The final sequence is $p = \left\{ {{v}_{0},{v}_{1},{v}_{2},\ldots ,{v}_{e}}\right\}$, and $c = {v}_{e}$. This is repeated four times to find the four sides of our rectangle, and we return the largest such rectangle as detail 2.
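+ 
+ A condensed Python sketch of this side-and-corner search is given below. It deliberately simplifies Algorithms 1 and 2: the direction predicate, the path bookkeeping, and the bounding-box tie-breaker are crude stand-ins, and all names are ours rather than Auto Rey-O's:
+ 
+ ```python
+ import math
+ 
+ DIRS = {"Right": (1, 0), "Down": (0, 1), "Left": (-1, 0), "Up": (0, -1)}
+ 
+ def in_direction(p, q, d, slack=0.5):
+     # True if the step p -> q points roughly along the unit direction d.
+     dx, dy = q[0] - p[0], q[1] - p[1]
+     n = math.hypot(dx, dy) or 1.0
+     return (dx * DIRS[d][0] + dy * DIRS[d][1]) / n > 1 - slack
+ 
+ def side_and_corner(adj, start, d, d_next, r=25):
+     # Greedy DFS along direction d; a vertex from which a d_next step exists
+     # is treated as the corner c ending the current side.
+     stack, path, seen = [start], [start], {start}
+     while stack:
+         p = stack.pop()
+         for q in adj[p]:
+             if q in seen:
+                 continue
+             if any(in_direction(q, x, d_next) for x in adj[q]):
+                 return path + [q], q            # side found; q is corner c
+             if in_direction(p, q, d) or math.dist(p, q) < r:
+                 seen.add(q)
+                 stack.append(q)
+                 path.append(q)                  # visit order, not the exact path
+     return None, None
+ 
+ def bbox_area(sides):
+     pts = [p for s in sides for p in s]
+     xs = [x for x, _ in pts]
+     ys = [y for _, y in pts]
+     return (max(xs) - min(xs)) * (max(ys) - min(ys))
+ 
+ def find_detail2(adj):
+     best = None
+     for n in adj:                               # Algorithm 1's outer loop
+         c, sides = n, []
+         for d, d_next in [("Right", "Down"), ("Down", "Left"),
+                           ("Left", "Up"), ("Up", "Right")]:
+             side, c = side_and_corner(adj, c, d, d_next)
+             if c is None:
+                 break
+             sides.append(side)
+         if c == n and (best is None or bbox_area(sides) > bbox_area(best)):
+             best = sides                        # keep the largest rectangle
+     return best
+ ```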
164
+
165
+ #### 3.3.2 Examples of Other Details
166
+
167
+ The rest of our graph traversal algorithms can be divided into two distinct categories: graph-crawling algorithms that identify shapes from the graph itself, and algorithms that use vertices as boundaries and then isolate all pen strokes within a specified region.
168
+
169
+ Algorithm 2 SideAndCorner
+ 
+ ---
+ 
+ Input: Start node $n$, directions dir, dirnext
+ 
+ Output: Side $d$ of rectangle, corner $c$
+ 
+ push $n$ to stack $s$
+ 
+ while $s$ not empty do
+ 
+ pop $s$ to $p$, mark as visited
+ 
+ for all adjacent pairs $\left( {{p}_{1},{p}_{2}}\right)$ of $p$ do
+ 
+ if direction of $\left( {{p}_{1},{p}_{2}}\right) =$ dir or distance $e\left( {{p}_{1},{p}_{2}}\right) < {25}$ then
+ 
+ push ${p}_{2}$ to $s$
+ 
+ $p = {p}_{2}$, repeat from line 5
+ 
+ end if
+ 
+ if dead end ${p}_{n}$ reached then
+ 
+ add shortest path as a stack from $p$ to ${p}_{n}$ to set $c$
+ 
+ end if
+ 
+ end for
+ 
+ end while
+ 
+ for all current longest paths $a$ in $c$ do
+ 
+ for all adjacency pairs $\left( {{p}_{a1},{p}_{a2}}\right)$ of leaf ${p}_{an}$ in $a$ do
+ 
+ if ${p}_{a2}$ is dirnext of ${p}_{a1}$ then
+ 
+ return $c = {p}_{a1}, d = a$
+ 
+ else
+ 
+ pop ${p}_{an}$ from $a$, repeat from line 15
+ 
+ end if
+ 
+ end for
+ 
+ end for
+ 
+ ---
226
+
227
+ The former category is best for detail 2, as described previously, as well as for simple shapes and lines such as details 3, 4, 5, 7, 9, 10, 15, and 16. The latter category is appropriate in cases where our graph generator may create highly variable graphs from imperfectly drawn shapes, making it difficult to predict what the graph will look like. This is the case for details 1, 6, 11, 12, 14, and 17. In these instances we identify specific regions where we expect the detail to exist and save all edges that are found: we isolate a specific region and run a bounded depth-first search that returns all edges and vertices within the given region.
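+ 
+ A sketch of the bounded DFS used by the latter category follows; the region predicate and exclusion set are hypothetical stand-ins for the detail-specific region rules described here:
+ 
+ ```python
+ def isolate_region(adj, seed, contains, exclude=frozenset()):
+     # Bounded DFS: collect every vertex and edge reachable from seed whose
+     # coordinates satisfy contains(v), skipping explicitly excluded vertices.
+     stack, verts, edges = [seed], set(), set()
+     while stack:
+         v = stack.pop()
+         if v in verts or v in exclude or not contains(v):
+             continue
+         verts.add(v)
+         for u in adj[v]:
+             if u not in exclude and contains(u):
+                 edges.add(frozenset((u, v)))
+                 stack.append(u)
+     return verts, edges
+ ```
+ 
+ Here `contains` would test membership in the isolating region (for example, inside the detail 2 rectangle), and `exclude` would hold vertices already claimed by other details.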
228
+
229
+ ![01963dfc-98bd-74dc-a111-d8c308e04f59_4_926_148_722_247_0.jpg](images/01963dfc-98bd-74dc-a111-d8c308e04f59_4_926_148_722_247_0.jpg)
230
+
231
+ Figure 6: Comparing two point-density matrices to assess detail distortion: our sample, and a saved template. The process is repeated for all templates.
232
+
233
+ <table><tr><td>$\mathbf{{Det}.}$</td><td>Method of Recognition</td></tr><tr><td>1</td><td>Isolate region, then DFS to fill</td></tr><tr><td>2</td><td>Greedy pathfinding, repeated per side</td></tr><tr><td>3</td><td>Dijkstra's between diagonal corners of #2</td></tr><tr><td>4</td><td>Greedy single-direction pathfinding</td></tr><tr><td>5</td><td>Greedy single-direction pathfinding</td></tr><tr><td>6</td><td>Direction path for top/bottom, Dijkstra's</td></tr><tr><td>7</td><td>Greedy single-direction pathfinding</td></tr><tr><td>8</td><td>Connect horizontal lines bet. #3 and #5</td></tr><tr><td>9</td><td>Find vertical, diagonal line above #2</td></tr><tr><td>10</td><td>Greedy single-direction pathfinding</td></tr><tr><td>11</td><td>Isolate triangle, then DFS to fill</td></tr><tr><td>12</td><td>Isolate lower-right region, DFS</td></tr><tr><td>13</td><td>Find upward, downward diagonals</td></tr><tr><td>14</td><td>DFS to find all edges on tip of 13</td></tr><tr><td>15</td><td>Greedy single-direction pathfinding</td></tr><tr><td>16</td><td>Greedy single-direction pathfinding</td></tr><tr><td>17</td><td>Isolate region, then DFS to fill</td></tr><tr><td>18</td><td>#2's technique, then single diagonal</td></tr></table>
234
+
235
+ Table 1: General recognition method types for all 18 details. DFS is the depth-first search pathfinding algorithm; Dijkstra's is Dijkstra's shortest-path algorithm.
236
+
237
+ For detail 6, for example, our "region" is defined by the area inside the detail 2 rectangle and the detail 3 cross, and we exclude the detail 4 horizontal line and the detail 7 small segment from our DFS search. This returns the remaining subset of vertices and edges, as seen in the bottom portion of Figure 4. Table 1 describes the general method of recognition we applied to detect all 18 details in an ROCF; the order of recognition is tiered as shown in Fig. 3.
238
+
239
+ The "region finding" category allows us to isolate some details, but if the shape is poorly drawn or missing entirely then isolating regions alone could not confirm shape neatness. This motivated the implementation of our third processing stage, which grades the isolated shape for correctness.
240
+
241
+ ### 3.4 Stage 3: Detail Validation
242
+
243
+ Stage 3 compares only the isolated sample of the recognized detail to a set of template details to score the sample's quality. The system begins by centering, scaling, and resampling the isolated sample points so that they lie in the $\left\lbrack {-1,1}\right\rbrack$ range on the $x$ and $y$ axes. We then create a map of some provided resolution $n$ such that a detail is represented as an $n \times n$ matrix, where space in the figure is mapped to cells of the matrix, each cell covering some range $\left\lbrack {{x}_{i},{x}_{j}}\right\rbrack ,\left\lbrack {{y}_{i},{y}_{j}}\right\rbrack$. Each cell is then given a point-density value based on the number of points that lie within its range. A visualization of this is shown in Figure 6. The same process is applied to each of the templates, and the template matrices are then averaged cell-wise to form a template mapping. The template matrix $Q$ is compared to the sampled detail matrix $P$ with Equation 4 to determine how closely the two match. The best match is then found by shifting the sample matrix by rows and columns to find the best possible position relative to the template, given that some details will match in stroke and dimension but have somewhat different centers relative to the template.
244
+
245
+ $$
246
+ \mathop{\sum }\limits_{{i = 1}}^{n}\mathop{\sum }\limits_{{j = 1}}^{n}\max \left( {0,{Q}_{i, j} - {P}_{i, j}}\right) \tag{4}
247
+ $$
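+ 
+ A compact NumPy sketch of this comparison; the grid resolution, the cell normalization, and the shift range are our assumptions, since they are not fixed above beyond the distance lying in $[0, 1]$:
+ 
+ ```python
+ import numpy as np
+ 
+ def density_matrix(points, n=16):
+     # Center points, scale to [-1, 1], then bin into an n x n density grid.
+     pts = np.asarray(points, dtype=float)
+     pts -= pts.mean(axis=0)
+     pts /= max(np.abs(pts).max(), 1e-9)
+     idx = np.clip(((pts + 1) / 2 * n).astype(int), 0, n - 1)
+     m = np.zeros((n, n))
+     for i, j in idx:
+         m[j, i] += 1
+     return m / max(m.max(), 1e-9)        # cell values scaled into [0, 1]
+ 
+ def template_distance(Q, P, max_shift=3):
+     # Equation 4, minimized over small row/column shifts of the sample P.
+     best = np.inf
+     for dr in range(-max_shift, max_shift + 1):
+         for dc in range(-max_shift, max_shift + 1):
+             shifted = np.roll(np.roll(P, dr, axis=0), dc, axis=1)
+             best = min(best, np.maximum(0.0, Q - shifted).sum())
+     return best
+ ```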
248
+
249
+ <table><tr><td>Det. #</td><td>${\Delta }_{a,{g1}}$</td><td>${\Delta }_{a,{g2}}$</td><td>${\Delta }_{{g1},{g2}}$</td><td>F1-Score</td><td>Det. #</td><td>${\Delta }_{a,{g1}}$</td><td>${\Delta }_{a,{g2}}$</td><td>${\Delta }_{{g1},{g2}}$</td><td>F1-Score</td></tr><tr><td>1</td><td>0.69</td><td>0.68</td><td>0.37</td><td>0.787</td><td>10</td><td>0.21</td><td>0.20</td><td>0.10</td><td>0.962</td></tr><tr><td>2</td><td>0.18</td><td>0.44</td><td>0.33</td><td>0.857</td><td>11</td><td>0.47</td><td>0.45</td><td>0.21</td><td>0.878</td></tr><tr><td>3</td><td>0.07</td><td>0.11</td><td>0.16</td><td>0.978</td><td>12</td><td>0.46</td><td>0.44</td><td>0.11</td><td>0.872</td></tr><tr><td>4</td><td>0.27</td><td>0.30</td><td>0.12</td><td>0.927</td><td>13</td><td>0.13</td><td>0.11</td><td>0.11</td><td>0.966</td></tr><tr><td>5</td><td>0.08</td><td>0.49</td><td>0.49</td><td>0.978</td><td>14</td><td>0.42</td><td>0.41</td><td>0.15</td><td>0.881</td></tr><tr><td>6</td><td>0.31</td><td>0.56</td><td>0.26</td><td>0.958</td><td>15</td><td>0.50</td><td>0.49</td><td>0.09</td><td>0.770</td></tr><tr><td>7</td><td>0.47</td><td>0.26</td><td>0.13</td><td>0.855</td><td>16</td><td>0.21</td><td>0.19</td><td>0.08</td><td>0.919</td></tr><tr><td>8</td><td>0.28</td><td>0.33</td><td>0.13</td><td>0.966</td><td>17</td><td>0.57</td><td>0.66</td><td>0.34</td><td>0.788</td></tr><tr><td>9</td><td>0.35</td><td>0.20</td><td>0.07</td><td>0.904</td><td>18</td><td>0.45</td><td>0.49</td><td>0.21</td><td>0.925</td></tr></table>
250
+
251
+ Table 2: Classification results and average scoring differences for each detail across all graded tests. n = 141 for all details except detail 2, where n = 185. ${\Delta }_{a,{g1}}$ denotes the average point-score difference between Auto Rey-O and Grader 1, ${\Delta }_{a,{g2}}$ the difference between Auto Rey-O and Grader 2, and ${\Delta }_{{g1},{g2}}$ that between Grader 1 and Grader 2.
252
+
253
+ The value of "distance" between template and sample is between 0 and 1 , with 0 being the best. A shape that receives full credit for neatness is characterized as how close the sample is to the templates. Any value below 0.5 assigned to a sample is given full credit of 2 points. A value between 0.5 and 0.9 is given partial credit of 1 point. A value above 0.9 is given 0 points.
254
+
255
+ ## 4 DATA COLLECTION AND RESULTS
256
+
257
+ We conducted a study in which 68 cognitively healthy participants between the ages of 19 and 32 completed a Rey-Osterrieth Complex Figure Test. Although this test is meant to assess constructional ability and memory loss, healthy participants do not always score full marks on an ROCF [19], and indeed our testing corpus reflects a wide range of scores that conform to established normative data for our participants. All participants took the test in a simulated neuropsychologist's test environment and completed all three conditions (Copy, Recall, Delayed Recall). Participants were given a Neo SmartPen N2 and completed tests on pre-printed "blank" canvas pages that tracked the pen's location and instantaneously digitized all stroke data, allowing a more authentic testing experience since the ROCF is typically administered via pen and paper.
258
+
259
+ A total of 204 sketches from the 68 participants were collected. Of these, 5 perfect-score tests were set aside to be used as templates for Stage 3 validation, and 14 were not gradable or had corrupted sketch data, bringing the total graded to 185. All tests were also graded by two field experts whose grades we consider "ground truth" in this context. The first grader is a practicing clinical neuropsychologist, and the second is a professor specializing in cognitive and visual perceptual rehabilitation in older adults. We measure our system's success in two ways: the F1-score of our recognition algorithm for each detail, and the comparison between our system's total grade and the expert graders' total grades. For the latter, both our system's grades and the experts' grades are on the 36-point scale as defined in Section 1.1.
260
+
261
+ A key factor considered when calculating F1-scores was the subjectivity of distortion thresholds. While we implemented our own thresholds for distortion in Stage 3, instructions for the ROCF in the literature leave the definition of "distortion" at the discretion of the grader [46]. For recognition purposes we are interested in gauging whether our system can successfully either find a detail or confirm its absence; the F1-score reports our system's ability to recognize the existence of a detail. Since we are also interested in comparing 36-point grades that integrate distortion as partial credit, we additionally calculate Spearman's rank coefficient $\left( {\rho = {0.767}}\right)$ between our automatically graded tests and those of our expert grader.
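+ 
+ For reference, both measures can be computed with standard libraries; the array names below are illustrative (per-detail binary detection flags and 36-point totals), not our dataset's actual variables:
+ 
+ ```python
+ from scipy.stats import spearmanr
+ from sklearn.metrics import f1_score
+ 
+ def evaluate(present, detected, expert_totals, auto_totals):
+     # present/detected: per-sketch booleans for one detail (ground truth vs. ours).
+     f1 = f1_score(present, detected)
+     # Rank agreement between the expert's and the system's 36-point grades.
+     rho, _ = spearmanr(expert_totals, auto_totals)
+     return f1, rho
+ ```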
262
+
263
+ ## 5 DISCUSSION AND LIMITATIONS
264
+
265
+ ### 5.1 Results Discussion
266
+
267
+ In clinical neuropsychology, grading Rey-Osterrieth Complex Figure tests has been the subject of constant iteration and is an active research topic, with numerous methods of interpretation being proposed and refined. As such, analyzing the process of grading ROCFs automatically is not a trivial subject. Our analysis centered on simulating the perception of a detail, since granular differences in distortion are frequently attributed to grader subjectivity. Our aim, then, was to provide evidence that the system perceived the details correctly even if they were slightly distorted, and that in cases of severe distortion the Stage 3 validation would separate out those clear cases. In terms of overall score comparisons, we sought to analyze how far individual test grades from our system were from those of the expert graders. Although the expert graders' grades were closer to each other than to Auto Rey-O's, our system compares favorably due to the high F1-score of the vast majority of details, and the average difference between Auto Rey-O and the expert graders being around 3 points out of the 36 possible points for an ROCF test. We believe these results are significant in light of the fact that a fully automated ROCF grader that grades all 18 details had yet to be proposed.
268
+
269
+ Also of note is the performance of detail 2, the large rectangle that serves as the anchor for the rest of the sketch. The organizational strategy score of the Rey-Osterrieth Complex Figure test places the highest priority on the existence of detail 2 in a sketch due to its importance to the overall figure structure [43]. For the purposes of our system, this made the F1-score calculation for the other details conditional on whether detail 2 could be successfully recognized within a sketch. Exceptionally poor figures that lack a discernible detail 2 almost always receive very low or no scores when hand-graded. Similarly, in very rare cases a poorly drawn ROCF could be graded by Auto Rey-O if detail 2 could be recognized, while another ROCF that would score higher might not be graded due to a detail 2 that could not be recognized. For this reason, we designed our recognition hierarchy such that a test is not graded if the system cannot automatically recognize detail 2. This provides the most uniform application of grading requirements while remaining consistent with the grading rubric as presented in the Compendium of Neuropsychological Examinations [46].
270
+
271
+ A total score of 0, however, is not necessarily due to a true negative. For 44 sketches, our algorithm was unable to find detail 2 due to a sloppy or unconnected drawing, even though other details existed. If we flatly calculated the F1-score of all details for every sketch, including the ungraded ones, this would assign incorrect false negatives to the rest of the details. For that reason we tier the F1-score calculations: for detail 2 we calculate it for all sketches $\left( {\mathrm{n} = {185}}\right)$, and for all other details we calculate it over the sketches where detail 2 was correctly detected $\left( {\mathrm{n} = {141}}\right)$.
272
+
273
+ ![01963dfc-98bd-74dc-a111-d8c308e04f59_6_334_149_1133_433_0.jpg](images/01963dfc-98bd-74dc-a111-d8c308e04f59_6_334_149_1133_433_0.jpg)
274
+
275
+ Figure 7: Grade plots for all 36-point scores, compared between Auto Rey-O and expert graders (n=141). (a) p=0.799, (b) p=0.829, (c) p=0.948
276
+
277
+ The F1-score results in Table 2 and the plots in Figure 7 demonstrate the effectiveness of Auto Rey-O. Our top-down system correctly identifies and validates the details with an F1-score high enough to show the system works for typical test-takers. Table 2 also shows the average differences in scores (in points, out of a maximum of 2) assigned by our Auto Rey-O system and by our expert graders. All of the average per-detail score differences are well below 1 point, and the vast majority are below half a point, indicating a marginal difference in scoring between the expert graders and Auto Rey-O. The system also works successfully for ROCFs with higher amounts of distortion; such instances display the flexibility of the system in identifying present details even when the participant has heavy lapses in memory.
278
+
279
+ The lowest-performing details are 1, 17, and 15. These had the highest number of false negatives, although our manual review of these false negatives showed that our system did recognize the details but granted a score of 0 due to our threshold for distortion. Further refinement of the distortion threshold values for these details would improve their recognition quality.
280
+
281
+ Figure 7 compares the scores assigned by our automated grader and the two expert graders. Between the two expert graders, the correlation was $p = {0.948}$, Spearman's rank coefficient was $\rho = {0.942}$, and the average difference in scores was ${\Delta }_{{g}_{1},{g}_{2}} = {1.68}$. Between our system and grader 1, $p = {0.799}$, $\rho = {0.765}$, and the average ${\Delta }_{\text{auto},{g}_{1}} = {3.21}$. Between our system and grader 2, $p = {0.829}$, $\rho = {0.802}$, and the average ${\Delta }_{\text{auto},{g}_{2}} = {2.78}$. Our automated system produced grades with a generally high correlation to those of the graders, although the experts' grades were more similar to each other. In all three cases, low-scoring tests deviate somewhat across all graders, even between the two experts; this is likely due to the aforementioned ambiguity in interpreting detail distortion. Our automated grader is also consistently strict, producing somewhat lower scores, which is partially attributable to the fact that it does not recognize details placed in the wrong location. In addition, at the suggestion of the expert graders, who also served as domain experts, we chose to prioritize consistency over leniency when deciding on partial-credit thresholds, since consistency is one of the key advantages of an automated recognition system.
282
+
283
+ ### 5.2 Limitations
284
+
285
+ The main limitation of this graph-based approach to top-down sketch recognition is its reliance on line connections. Our vertex-contraction step in Stage 1 of the system's process does connect lines with corners within a certain radius, and we found this technique worked very well when sketches were drawn with reasonable neatness. If lines are disconnected by more than half an inch, however, they remain disconnected. This was a conscious design choice, since vertex contraction cannot be too aggressive; otherwise, regions where any correct sketch would have high numbers of vertices would be incorrectly contracted into one. This is the case in the area where details 6, 7, 3, and 8 all converge; even neatly drawn sketches have a high concentration of vertices there. We intend to refine the recognition system to "jump" gaps and close disconnected lines only where appropriate.
286
+
287
+ ## 6 FUTURE WORK AND CONCLUSION
288
+
289
+ Refinements can be made to help recognize specific kinds of poorly drawn details. As previously mentioned, most grading inaccuracies in our system came from poorly connected graphs due to sketch sloppiness. For healthy participants taking this test, our expert graders attributed sloppiness to a lack of effort rather than genuine memory loss when the patient has no hand-motor issues. Still, there is interest in supplementing our graph traversal by connecting otherwise unconnected vertices to improve recognition performance.
290
+
291
+ Additionally, improvements to our Stage 3 validation approach could be made to recognize finer details. Our validation method sometimes may not properly distinguish small changes, such as an extra stray mark or a single missing line. Identifying missing lines is important for details 8 and 12, where the number of parallel lines drawn is relevant to the grade. Our validation method finds these discrepancies somewhat frequently, but potential for improvement exists.
292
+
293
+ Lastly, we aim to work with clinical neuropsychologists to administer the test to willing clients and evaluate system usability in a clinical setting. This would produce additional sketch data from actual patients and would allow us to perform UI/UX usability studies with clinicians. The ultimate aim of the system is to aid the diagnosis process by automating the grading of an ROCF, so evaluating the user experience of clinicians as they collect the digital data and use the Auto Rey-O application themselves is the next step for this project.
294
+
295
+ Our Auto Rey-O automatic Rey-Osterrieth Complex Figure test grader demonstrates the validity of a top-down sketch recognition approach using graph traversal algorithms. This significantly simplifies the recognition process, where a bottom-up approach would need to consider a prohibitively wide array of possible shape interpretations and re-interpretations. By employing graph crawling, classical vertex search, and optimization algorithms, we are able to identify the key sub-shapes of a complex geometric figure.
296
+
297
+ ## REFERENCES
298
+
299
+ [1] M. Agrawal, A. Zotov, M. Ye, and S. Raghupathy. Context aware online diagramming recognition. In 2010 12th International Conference on Frontiers in Handwriting Recognition, pp. 682-687. IEEE, 2010.
300
+
301
+ [2] O. Altun and O. Nooruldeen. Sketrack: stroke-based recognition of online hand-drawn sketches of arrow-connected diagrams and digital logic circuit diagrams. Scientific Programming, 2019, 2019.
302
+
303
+ [3] C. Alvarado and R. Davis. Sketchread: a multi-domain sketch recognition engine. In ACM SIGGRAPH 2007 courses, p. 34. ACM, 2007.
304
+
305
+ [4] L. Anthony and J. O. Wobbrock. A lightweight multistroke recognizer for user interface prototypes. In Proceedings of Graphics Interface 2010, pp. 245-252. 2010.
306
+
307
+ [5] L. Anthony and J. O. Wobbrock. $N-protractor: A fast and accurate multistroke recognizer. In Proceedings of Graphics Interface 2012, pp. 117-120. 2012.
308
+
309
+ [6] A.-M. Awal, G. Feng, H. Mouchere, and C. Viard-Gaudin. First experiments on a new online handwritten flowchart database. In Document Recognition and Retrieval XVIII, vol. 7874, p. 78740A. International Society for Optics and Photonics, 2011.
310
+
311
+ [7] M. Bennasar, R. Setchi, Y. Hicks, and A. Bayer. Cascade classification for diagnosing dementia. In 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 2535-2540. IEEE, 2014.
312
+
313
+ [8] M. Bresler, D. Prša, and V. Hlaváč. Online recognition of sketched arrow-connected diagrams. International Journal on Document Analysis and Recognition (IJDAR), 19(3):253-267, 2016.
314
+
315
+ [9] M. Bresler, D. Prusa, and V. Hlavác. Detection of arrows in on-line sketched diagrams using relative stroke positioning. In 2015 IEEE Winter Conference on Applications of Computer Vision, pp. 610-617. IEEE, 2015.
316
+
317
+ [10] M. Bresler, T. Van Phan, D. Prusa, M. Nakagawa, and V. Hlavác. Recognition system for on-line sketched diagrams. In 2014 14th International Conference on Frontiers in Handwriting Recognition, pp. 563-568. IEEE, 2014.
318
+
319
+ [11] C. Calhoun, T. F. Stahovich, T. Kurtoglu, and L. B. Kara. Recognizing multi-stroke symbols. In AAAI Spring Symposium on Sketch Understanding, pp. 15-23. Stanford University, AAAI Technical Report SS-02-08, AAAI Press, 2002.
320
+
321
+ [12] R. Canham, S. Smith, and A. Tyrrell. Location of structural sections from within a highly distorted complex line drawing. IEE Proceedings-Vision, Image and Signal Processing, 152(6):741-749, 2005.
322
+
323
+ [13] R. Canham, S. L. Smith, and A. M. Tyrrell. Automated scoring of a neuropsychological test: the rey osterrieth complex figure. In Proceedings of the 26th Euromicro Conference. EUROMICRO 2000. Informatics: Inventing the Future, vol. 2, pp. 406-413. IEEE, 2000.
324
+
325
+ [14] H. Chen, Z. J. Xu, Z. Q. Liu, and S. C. Zhu. Composite templates for cloth modeling and sketching. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), vol. 1, pp. 943-950. IEEE, 2006.
326
+
327
+ [15] E. W. Dijkstra et al. A note on two problems in connexion with graphs. Numerische mathematik, 1(1):269-271, 1959.
328
+
329
+ [16] M. C. Fairhurst, T. Linnell, S. Glenat, R. Guest, L. Heutte, and T. Paquet. Developing a generic approach to online automated analysis of writing and drawing tests in clinical patient profiling. Behavior Research Methods, 40(1):290-303, 2008.
330
+
331
+ [17] P. S. Fastenau, J. M. Bennett, and N. L. Denburg. Application of psychometric standards to scoring system evaluation: is "new" necessarily "improved"? Journal of Clinical and Experimental Neuropsychology, 18(3):462-472, 1996.
332
+
333
+ [18] M. Field, S. Valentine, J. Linsey, and T. Hammond. Sketch recognition algorithms for comparing complex and unpredictable shapes. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI), vol. 3, pp. 2436-2441. AAAI Press, Barcelona, Spain, July 16-22, 2011.
334
+
335
+ [19] C. Gallagher and T. Burke. Age, gender and iq effects on the rey-osterrieth complex figure test. British Journal of Clinical Psychology, 46(1):35-45, 2007.
336
+
337
+ [20] S. Glenat, L. Heutte, T. Paquet, R. Guest, M. Fairhurst, and T. Linnell. The development of a computer-assisted tool for the assessment of neuropsychological drawing tasks. International Journal of Information Technology & Decision Making, 7(04):751-767, 2008.
340
+
341
+ [21] T. Hammond and R. Davis. Ladder: A language to describe drawing, display, and editing in sketch recognition. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pp. 461-467. AAAI, Acapulco, Mexico, 2003.
344
+
345
+ [22] T. Hammond and B. Paulson. Recognizing sketched multistroke primitives. ACM Transactions on Interactive Intelligent Systems (TiiS), 1(1):1-34, 2011.
346
+
347
+ [23] F. Han and S.-C. Zhu. Bottom-up/top-down image parsing by attribute graph grammar. In Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1, vol. 2, pp. 1778-1785. IEEE, 2005.
348
+
349
+ [24] Z. Harbi, Y. Hicks, and R. Setchi. Clock drawing test digit recognition using static and dynamic features. Procedia Computer Science, 96:1221-1230, 2016.
350
+
351
+ [25] Z. Harbi, Y. Hicks, and R. Setchi. Clock drawing test interpretation system. Procedia computer science, 112:1641-1650, 2017.
352
+
353
+ [26] J.-I. Herrera-Camara and T. Hammond. Flow2code: From hand-drawn flowcharts to code execution. In Proceedings of the Symposium on Sketch-Based Interfaces and Modeling, pp. 1-13, 2017.
354
+
355
+ [27] L. B. Kara and T. F. Stahovich. Hierarchical parsing and recognition of hand-sketched diagrams. In Proceedings of the 17th annual ACM symposium on User interface software and technology, pp. 13-22, 2004.
356
+
357
+ [28] M. D. Lezak, D. B. Howieson, D. W. Loring, J. S. Fischer, et al. Neuropsychological assessment. Oxford University Press, USA, 2004.
358
+
359
+ [29] L. Lin, X. Liu, and S.-C. Zhu. Layered graph matching with composite cluster sampling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(8):1426-1442, 2009.
360
+
361
+ [30] L. Lin, S. Peng, J. Porway, S.-C. Zhu, and Y. Wang. An empirical study of object category recognition: Sequential testing with generalized samples. In 2007 IEEE 11th International Conference on Computer Vision, pp. 1-8. IEEE, 2007.
362
+
363
+ [31] L. Lin, T. Wu, J. Porway, and Z. Xu. A stochastic graph grammar for compositional object representation and recognition. Pattern Recognition, 42(7):1297-1307, 2009.
364
+
365
+ [32] D. W. Loring, R. C. Martin, K. J. Meador, and G. P. Lee. Psychometric construction of the rey-osterrieth complex figure: methodological considerations and interrater reliability. Archives of Clinical Neuropsychology, 5(1):1-14, 1990.
366
+
367
+ [33] J. E. Meyers and K. R. Meyers. Rey complex figure test under four different administration procedures. The Clinical Neuropsychologist, 9(1):63-67, 1995.
368
+
369
+ [34] M. Moetesum, I. Siddiqi, S. Ehsan, and N. Vincent. Deformation modeling and classification using deep convolutional neural networks for computerized analysis of neuropsychological drawings. Neural Computing and Applications, pp. 1-25, 2020.
370
+
371
+ [35] M. Moetesum, I. Siddiqi, U. Masroor, and C. Djeddi. Automated scoring of bender gestalt test using image analysis techniques. In 2015 13th International Conference on Document Analysis and Recognition (ICDAR), pp. 666-670. IEEE, 2015.
372
+
373
+ [36] M. Moetesum, O. Zeeshan, and I. Siddiqi. Multi-object sketch segmentation using convolutional object detectors. In Tenth International Conference on Graphics and Image Processing (ICGIP 2018), vol. 11069. International Society for Optics and Photonics, 2019.
374
+
375
+ [37] P. Osterrieth. Le test de copie d'une figure complexe. Archives de Psychologie, 30:205-550, 1944.
376
+
377
+ [38] T. Y. Ouyang and R. Davis. Chemink: a natural real-time recognition system for chemical drawings. In Proceedings of the 16th international conference on Intelligent user interfaces, pp. 267-276. ACM, 2011.
378
+
379
+ [39] C. R. Pereira, D. R. Pereira, F. A. Da Silva, C. Hook, S. A. Weber, L. A. Pereira, and J. P. Papa. A step towards the automated diagnosis of parkinson's disease: Analyzing handwriting movements. In 2015 IEEE 28th international symposium on computer-based medical systems, pp. 171-176. IEEE, 2015.
380
+
381
+ [40] A. Prange, M. Barz, and D. Sonntag. A categorisation and implementation of digital pen features for behaviour characterisation. arXiv preprint arXiv:1810.03970, 2018.
382
+
383
+ [41] A. Rey. L'examen psychologique dans les cas d'encéphalopathie traumatique. (Les problèmes.). Archives de psychologie, 1941.
384
+
385
+ [42] D. Rubine. Specifying gestures by example, vol. 25. ACM, 1991.
386
+
387
+ [43] C. R. Savage, L. Baer, N. J. Keuthen, H. D. Brown, S. L. Rauch, and M. A. Jenike. Organizational strategies mediate nonverbal memory impairment in obsessive-compulsive disorder. Biological psychiatry, 45(7):905-916, 1999.
388
+
389
+ [44] M.-S. Shin, Y.-H. Kim, S.-C. Cho, and B.-N. Kim. Neuropsychologic characteristics of children with attention-deficit hyperactivity disorder (adhd), learning disorder, and tic disorder on the rey-osterreith complex figure. Journal of Child Neurology, 18(12):835-844, 2003.
390
+
391
+ [45] M.-S. Shin, S.-Y. Park, S.-R. Park, S.-H. Seol, and J. S. Kwon. Clinical and empirical applications of the rey-osterrieth complex figure test. Nature protocols, 1(2):892, 2006.
392
+
393
+ [46] E. Strauss, E. M. Sherman, O. Spreen, et al. A compendium of neuropsychological tests: Administration, norms, and commentary. Oxford University Press, 2006.
394
+
395
+ [47] R. Tarjan. Depth-first search and linear graph algorithms. SIAM journal on computing, 1(2):146-160, 1972.
396
+
397
+ [48] L. A. Tupler, K. A. Welsh, Y. Asare-Aboagye, and D. V. Dawson. Reliability of the rey-osterrieth complex figure in use with memory-impaired patients. Journal of clinical and experimental neuropsychology, 17(4):566-579, 1995.
398
+
399
+ [49] R.-D. Vatavu. Improving gesture recognition accuracy on touch screens for users with low vision. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 4667-4679. ACM, 2017.
400
+
401
+ [50] R.-D. Vatavu, L. Anthony, and J. O. Wobbrock. Gestures as point clouds: A $P recognizer for user interface prototypes. In Proceedings of the 14th ACM International Conference on Multimodal Interaction, pp. 273-280, 2012.
402
+
403
+ [51] R.-D. Vatavu, L. Anthony, and J. O. Wobbrock. $Q: A super-quick, articulation-invariant stroke-gesture recognizer for low-resource devices. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services, pp. 1-12, 2018.
404
+
405
+ [52] J. Vogt, H. Kloosterman, S. Vermeent, G. Van Elswijk, R. Dotsch, and B. Schmand. Automated scoring of the rey-osterrieth complex figure test using a deep-learning algorithm. Archives of Clinical Neuropsychology, 34(6):836-836, 2019.
406
+
407
+ [53] C. Wang, H. Mouchere, C. Viard-Gaudin, and L. Jin. Combined segmentation and recognition of online handwritten diagrams with high order markov random field. In 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), pp. 252-257. IEEE, 2016.
408
+
409
+ [54] J. O. Wobbrock, A. D. Wilson, and Y. Li. Gestures without libraries, toolkits or training: A $1 recognizer for user interface prototypes. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, pp. 159-168. ACM, 2007.
410
+
411
+ [55] A. Wolin, B. Eoff, and T. Hammond. Shortstraw: A simple and effective corner finder for polylines. In SBM, pp. 33-40, 2008.
412
+
413
+ [56] Y. Xiong and J. J. LaViola Jr. Revisiting shortstraw: improving corner finding in sketch-based interfaces. In Proceedings of the 6th Euro-graphics Symposium on Sketch-Based Interfaces and Modeling, pp. 101-108. ACM, 2009.
414
+
415
+ [57] L. Zhu, Y. Chen, and A. Yuille. Unsupervised learning of a probabilistic grammar for object detection and parsing. 2007.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/ot-dY9S1U-/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,376 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § AUTOMATICALLY GRADING REY-OSTERRIETH COMPLEX FIGURE TESTS USING SKETCH RECOGNITION
2
+
3
+ Category: Research
4
+
5
+ § ABSTRACT
6
+
7
+ The Rey-Osterrieth Complex Figure Test (ROCF) is among the most widely used neuropsychological examinations to analyze visual spatial constructional ability and memory skills, but grading the patient's sketched complex figure is subjective in nature and can be time consuming. With increasing demand for tools to help detect cognitive decline, there is a need to leverage sketch recognition research to assist in detecting fine details within an ROCF's inherently abstract figure. We present a series of recognition algorithms to detect all 18 official ROCF details using a top-down sub-shape recognition approach. This automated grader transforms a sketch into an undirected graph, identifies and isolates detail sub-shapes, and validates sub-shape neatness via a point-density matrix template matcher. Experimental results from hand-drawn ROCFs confirm that our approach can automatically grade ROCF tests on the same 18-item sketch detail checklist used by neuropsychologists, with only a small margin of error.
8
+
9
+ Index Terms: Applied computing-Health care information systems; Human-centered computing-Gestural input; Human-centered computing-Mobile devices
10
+
11
+ § 1 INTRODUCTION
12
+
13
+ § 1.1 REY-OSTERRIETH COMPLEX FIGURES
14
+
15
+ The Rey-Osterrieth Complex Figure Test (ROCF), developed by Rey [41] in 1941 and refined by Osterrieth [37] in 1944, is a neuropsychological test that evaluates several cognitive functions including visuospatial abilities, memory, attention, planning, working memory and executive functions $\left\lbrack {{28},{46}}\right\rbrack$ . The ROCF is characterized as a complex cognitive task [45], and is known in the field of neuropsychology as a useful metric of frontal lobe function [44]. A participant is asked to copy the figure onto a piece of paper, then to reproduce it two more times from memory. The shape is specifically designed to be abstract so that participants cannot associate it with any common object or concept. A clinician then grades all three sketches on whether 18 separate sub-shapes (henceforth called "details") exist and, if they do, how neatly they were drawn. A clinician grants up to 2 points for each detail, for a total of 36 points, with partial credit given to distorted or misplaced shapes. Points for the overall neatness of individual details are subjective and generally dependent on an expert's intuition, especially for shapes that exist but might be drawn poorly. This means two different ROCF graders can produce two different scores. The proliferation of digital sketch recognition techniques and a push to digitize clinical neuropsychological examinations motivated our creation of an automated ROCF grader built on the existing grading scheme.
16
+
17
+ From a digital sketch recognition standpoint, automatically grading an ROCF is non-trivial due to the complexity of the figure and test conditions resulting in inherently fuzzy sketch data. No two completed sketches are drawn in the same order, and very frequently shapes are drawn using portions from other shapes [13]. Bottom-up approaches tend to classify shapes as soon as their constraints are met, but shapes in an ROCF may in fact be only part of a detail or may end up as a portion of an entirely different one. A top-down approach not only more closely resembles a human grading an ROCF, but it also simplifies the recognition process by not needing to re-classify a shape at every step of the hierarchical recognition process.
18
+
19
20
+
21
+ Figure 1: Our automated grader highlighting Details 2, 3, and 6 in red, green, and blue respectively.
22
+
23
+ § 1.2 CONTRIBUTION
24
+
25
+ Significant research has analyzed the reliability of current rubrics $\left\lbrack {{17},{32},{33},{48}}\right\rbrack$ . Automating the process started over two decades ago [13], but even recent surveys cite a lack of contributions towards grading all 18 details at once. Moetesum et al., in research published in 2020 [34], specify that "due to the unconstrained nature of these drawings, localization and segmentation of individual scoring sections becomes a highly challenging task" and that existing work localizes only a "small subset of ROCF scoring sections".
26
+
27
+ Whereas previous efforts in automatically grading the ROCF can identify only a subset of the complex figure's details, we present the first fully automated ROCF grader that does not require user input to point to baseline shapes from which to begin recognition. Our contribution widely expands on Field's truss recognition technique [18] by introducing several graph traversal algorithms to isolate specific sub-shapes or regions from a given sketch. In addition to triangles, we also recognize squares, parallel lines, crosses, straight horizontal and vertical lines, and diamonds, as well as shapes specific to the ROCF such as detail 6 (Cross with Square), detail 14 (Circle and 3 Dots), and detail 18 (Square with Line). Our system uses a multi-step recognition process that can identify shapes by crawling the resulting graph, by using template-matching shape recognition, or by a combination of both, resulting in a more accurate and robust sub-shape recognition system for ROCF grading. Many of our recognition algorithms utilize well-known graph traversal and optimization algorithms (such as Dijkstra's Shortest Path [15] and Depth-First Search [47]). Our system represents the first fully automated ROCF grader that recognizes the existence or absence of each of the 18 details and checks individual shapes for distortion.
28
+
29
+ To test our recognizer's performance, the system graded 141 digitized Rey-Osterrieth tests from participants, and we compare how closely our system's grades correlate with those of two expert graders. The experimental results demonstrate that the proposed approach is successful in identifying the existence of sub-shapes within a large abstract shape.
30
+
31
32
+
33
+ Figure 2: A Rey-Osterrieth Complex Figure Test, with all 18 Details listed.
34
+
35
+ § 2 RELATED WORK
36
+
37
+ § 2.1 SKETCH RECOGNITION SYSTEMS
38
+
39
+ Digital sketch recognition techniques favor bottom-up approaches that employ computational geometry to classify shapes $\lbrack 9,{11},{22},{27},{42}\rbrack$ . Hierarchical sketch recognition systems such as LADDER [21], Sketchread [3], Chemink [38] and Mechanix [18] generate composite figures by re-classifying shapes into more complex shapes at every step of the sketching process. In early bottom-up sketching approaches, "steps" were typically separated by a UI button that explicitly instructed the system to perform a recognition step. More modern systems, however, automatically separate "steps" by single-stroke actions, usually triggered when the user lifts their pen. This allows the system to continuously check whether the user is drawing a composite sketch made up of shape primitives.
40
+
41
+ Bottom-up recognition of composite shapes from geometric primitives is especially popular in the digital recognition of hand-drawn diagrams $\left\lbrack {1,2,6,8,{10},{26},{53}}\right\rbrack$ . In these projects, researchers seek to digitize hand-drawn flowcharts and system design diagrams, interpreting diagram structure, flow of information, and preservation of variable and state checks through digital sketch recognition techniques [2, 26]. Circles, rectangles, diamonds, rhombi, and directional arrows [9] are used in diagrams to denote specific system or algorithm states or commands $\left\lbrack {1,6}\right\rbrack$ . Indeed, these projects originally served as the basis of Auto Rey-O's recognition due to their emphasis on recognizing primitives as part of a larger composite system of shapes. However, a chief difference between these projects and an ROCF sketch is that the ROCF by design has a large number of overlapping shapes, and specific details can be as granular as a single line within a specific area of other shapes. Diagrams and flowcharts, by contrast, are required to have clear spacing between components, and recognizing missing or distorted shapes is not a focus of those automated systems. While some form of composite figure recognition is necessary for automatically grading the ROCF, a top-down approach as explored in other systems [23] ultimately proved the most viable for Auto Rey-O.
42
+
43
+ Corner detection also helps characterize digital shapes, with lightweight systems such as ShortStraw [55] and iStraw [56] being among the most efficient. Auto ReyO uses the open-source ShortStraw library in its recognition of corners and endpoints to generate the vertices during the graph creation stage. This is used in tandem with line-intersection algorithms to segment the sketch lines such that individual shapes can be recognized. A frequent use case of this is recognizing details 4 and 6 of the ROCF (see Fig. 2). A user typically draws a single long line at once across the ROCF shape, so we are unable to use individual stroke order to recognize details, but rather need the segmentation that a line-intersection algorithm combined with ShortStraw is able to provide.
44
+
45
46
+
47
+ Figure 3: Auto ReyO's recognition hierarchy, designed to have as few dependencies as possible.
48
+
49
+ § 2.2 TEMPLATE MATCHING SHAPE CLASSIFICATION SYSTEMS
50
+
51
+ The "Dollar" family of recognition systems $\left\lbrack {4,5,{50},{51},{54}}\right\rbrack$ remains among the most well known single and multi-stroke gesture classification algorithms, and serve as the basis for our own template-matching recognition algorithm presented as part of our system. While most techniques rely on stroke order, geometric properties, and physical characteristics such as speed, acceleration, etc., the "$P+" recognizer calculates similarity via "point cloud" approximation [49]. A point cloud is generated by resampling both a template shape and an input shape on the same resampling parameters, overlaying the input shape on top of the template sketch matching its shape, centering, and orientation as close as possible, and iterating through every point finding the closest match between template points and input points. The distance between the points that are closest together are added cumulatively and are presented as the overall "distance" metric between the template shape and the input shape. The "$P+" recognizer returns the closest template match, identifying what kind of shape the user has drawn. This is especially flexible when the application in question necessitates recognition that is agnostic to stroke order. Our technique for shape recognition as described in Section 3.4 is based on the "$P+" recognizer, particularly the technique of calculating a "distance".
52
+
53
+ Our technique differs, however, in that rather than calculating distance via point-for-point comparison, we generate a fixed-resolution matrix of point density for both the template and the input shape and calculate the distance between cells of the two matrices. This allows us to build a more accurate grader for shape neatness. Indeed, "$P+" only focuses on finding the closest match to a template, since it is a shape classifier; its internal distance value does not perform well at gauging whether an input shape is poorly drawn relative to its provided "ideal" template shape.
54
+
55
+ § 2.3 HIERARCHICAL SKETCH RECOGNITION
56
+
57
+ Hierarchical sketch recognition approaches generally check drawn lines to see if they meet the requirements for a composite shape $\left\lbrack {{29},{31}}\right\rbrack$ . Layered hierarchical systems for graph creation have been applied to both bottom-up and top-down systems [23], and involve the decomposition of a drawn sketch into specific broad categories by analyzing
58
+
59
60
+
61
+ Figure 4: Description of ROCF sub-shape recognition system. Stages 2 and 3 shown in the figure are repeated for each of the 18 details of a Rey-Osterrieth complex figure.
62
+
63
+ sub-graphs $\left\lbrack {{14},{30},{57}}\right\rbrack$ . This is typically used in the field of computer vision to help decompose a system into primitive parts and represent them as a tiered graph. We envisioned a similar hierarchical, tiered approach to the recognition of an ROCF due to the nature of the drawn details. To draw detail 10 in an ROCF, for example, the user needs to have drawn both details 2 and 3 to be able to connect the line properly (see Figure 2). Similarly, detail 14 requires the existence of detail 13 to receive full marks for both correct placement and shape neatness. Rather than represent the entirety of a sub-shape as a single vertex in a graph, however, we envisioned the vertices of a graph being represented by intersecting lines and endpoints, and applied the concepts behind sub-graph composite object recognition to identify the ROCF details themselves. The cited foundational work on graph implementations to supplement computer vision and object recognition informed our own approach to automatically grading ROCFs using the graph itself as the vehicle for tiered object recognition.
64
+
65
+ § 2.4 EFFORTS TO AUTOMATE NEUROPSYCHOLOGICAL EXAMINA- TION ANALYSIS
66
+
67
+ Efforts to automate other neuropsychological tests have renewed interest in sketch sub-object detection $\left\lbrack {7,{16},{35},{36}}\right\rbrack$ . Object recognition spans various neuropsychological examinations, including clock-drawing $\left\lbrack {{24},{25}}\right\rbrack$ and general handwriting tasks $\left\lbrack {{20},{39}}\right\rbrack$ . However, whereas the recognized objects in these tests tend to have heavily distinct characteristics, ROCF details are mostly composed of simple primitives that appear frequently. For example, detail 5 shown in Figure 2 is defined not as any vertical line, but rather as a specific vertical line within the sketch. Work presented by Prange et al. [40] cites Rey-Osterrieth figures as a motivating factor in the need to identify geometric shapes inside complex abstract figures. Existing attempts to automatically grade ROCFs are semi-automated or do not implement detection of all 18 details [12, 13]. The most recent attempt automates grading using a deep-learning neural network but
68
+
69
70
+
71
+ Figure 5: Finding path $p$ for the top horizontal side of detail 2's rectangle. The dotted area on the right indicates the distance radius $r$. In this example ${v}_{m2} = {c}_{1}$, dir $=$ Right, and dirnext $=$ Down (see Algorithms 1 and 2).
72
+
73
+ leaves ample room for improvement in the detection of individual segments, most notably single-line details [52]. Additionally, our system is able to produce a recognizer from only five training sketches serving as templates, whereas neural networks require far larger amounts of training data to function properly.
74
+
75
+ § 3 AUTOMATED REY-OSTERRIETH COMPLEX FIGURE TEST GRADER (AUTO REY-O)
76
+
77
+ Auto Rey-O is an application written on the Universal Windows Platform (UWP) that connects to a Neo SmartPen device via Bluetooth for data collection. The same app is used to perform the fully automated grading process. Auto Rey-O's top-down sub-shape recognizer divides the ROCF grading process into three distinct stages, as shown in Figure 4.
78
+
79
+ § 3.1 RECOGNIZER GENERALIZABILITY
80
+
81
+ An important consideration for novel recognition and automation techniques in sketch recognition lies in articulating their generalizability and defining the constraints under which a presented technique aims to perform well.
82
+
83
+ Automating the ROCF motivates a brief discussion on generalizability due to the inherently "hard-coded" nature of its automation. Indeed, the complexity of the ROCF shape, coupled with the requirement of detecting very specific lines, necessitates a certain specificity in location and shape-composition requirements. Some details, for example, are a single horizontal or vertical line, but what matters most is the location of the line relative to other details and its starting and stopping points. It is, in fact, this specificity in requirements that allows our method to recognize all 18 details, in contrast to previous work that detects only a subset of them.
84
+
85
+ At the same time, however, generalizability was taken into account when designing the recognizers described in the following section, for two primary reasons. First, our algorithm must be general enough to recognize details despite a varying list of imperfections, including but not limited to crooked lines, shapes that are not entirely closed, lines intersecting at different points, sharp angles accidentally drawn as curves, and the same line being drawn over several times. The algorithm must also be able to, within reason, identify as many shapes as possible even in the absence of other shapes. Unless shapes are directly dependent on each other for recognition, the absence or heavy distortion of one unrelated detail should not prevent the recognition of another.
86
+
87
+ Secondly, as many recognition techniques as possible should be easily adaptable to other complex figure tests. As per the Compendium of Neuropsychological Examinations [46], seven complex figures are recognized as valid and tested figures for this purpose, and the Rey-Osterrieth Complex Figure test is the most popular. New variants with small changes are uncommon. The remaining six figures are: the Taylor Alternate Version, the Modified Taylor Complex Figure, and four Medical College of Georgia Complex Figures. All have similar size and complexity, and all are a combination of straight lines, triangles, and simple geometric shapes. All contain a "detail 2": a large rectangle that serves as an anchor for the rest of the shapes. Our system was designed to be adaptable to recognize the 18 details of the remaining six complex figure tests by applying variations on the pathfinding algorithms in Table 1. Our three-stage method detailed in Fig. 3 can be adapted for all six remaining complex figure tests, and to that extent we consider this approach generalizable to other complex figure tests of this type. Location heuristics need to be tailored for each detail, since the rules themselves are inherently specific and unique to the ROCF. We believe our three-step approach is usable for any hierarchical sketch recognition problem involving complex figures where multiple sub-shapes must be discretely recognized but may share any number of lines.
+
+ § 3.2 STAGE 1: GRAPH CREATION
+
+ The graph creation stage is divided into four distinct steps. First, we prepare the sketch for corner detection by resampling it to a uniform interspacing length $S$, computed as follows:
+
+ $$
+ S = \frac{\sqrt{(x_m - x_n)^2 + (y_m - y_n)^2}}{c} \tag{1}
+ $$
+
+ where $(x_m, y_m)$ is the lower-right corner of the sketch, $(x_n, y_n)$ is the upper-left corner of the sketch, and $c$ is a constant, $c = 40$.
+
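+ As a minimal sketch of this step (assuming strokes are simple lists of $(x, y)$ tuples; the helper names are ours, not the paper's), the spacing and resampling might look like:
+
+ ```python
+ import math
+
+ def interspacing(points, c=40):
+     # Bounding-box diagonal divided by the constant c (Equation 1)
+     xs, ys = [p[0] for p in points], [p[1] for p in points]
+     return math.hypot(max(xs) - min(xs), max(ys) - min(ys)) / c
+
+ def resample(points, s):
+     # Emit a point every s units of arc length along the stroke
+     out, budget = [points[0]], s
+     for (x0, y0), (x1, y1) in zip(points, points[1:]):
+         d = math.dist((x0, y0), (x1, y1))
+         while d >= budget:
+             t = budget / d
+             x0, y0 = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
+             out.append((x0, y0))
+             d -= budget
+             budget = s
+         budget -= d
+     return out
+ ```
+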
+ The second step utilizes the corner-finding algorithm from Wolin [55] to identify "corners" in the drawn strokes. To detect line intersections, each target straight-line segment $y_a = a_2 x + a_1$ is checked for intersection against a comparison segment $y_b = b_2 x + b_1$ using Equation 2.
+
+ $$
+ \frac{b_1 - a_1}{a_2 - b_2} \in \left( x_1 - \frac{(0.15\,l)^2}{1 + a_2^2},\; x_n + \frac{(0.15\,l)^2}{1 + a_2^2} \right) \tag{2}
+ $$
+
+ where $x_1$ and $x_n$ represent the x-values of the lesser and greater vertices of the target segment, respectively, and $l$ is the target segment's length.
+
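+ A hedged sketch of this test follows; the intersection abscissa $(b_1 - a_1)/(a_2 - b_2)$ is reconstructed from the two line equations above, and the function name and parallel-line guard are our own:
+
+ ```python
+ def lines_may_intersect(a2, a1, x1, xn, l, b2, b1, eps=1e-9):
+     # Parallel segments have no single crossing point
+     if abs(a2 - b2) < eps:
+         return False
+     x_cross = (b1 - a1) / (a2 - b2)          # abscissa where the lines meet
+     tol = (0.15 * l) ** 2 / (1 + a2 ** 2)    # slack term from Equation 2
+     return x1 - tol < x_cross < xn + tol
+ ```
+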
+ The third step creates an undirected graph $G$, where every vertex $v$ is a line endpoint, corner, or intersection, and every edge $e$ is a drawn line connecting two vertices. Each $v$ stores a point from the sketch, and each $e$ stores the sampled points that connect its two vertices.
+
+ The fourth and final step performs vertex contraction on the created graph. Each vertex is iterated over and checked for nearby vertices that fall below a predetermined distance threshold. When two vertices are joined, their respective sampled points $s_i$, including points from any edge that falls between them, are combined into a single vertex $v$ containing all the sampled points. The distance measure is the complete-linkage distance used in hierarchical clustering, taking the maximum of the set in Equation 3.
+
+ $$
+ \left\{ \left\| s_i - s_j \right\|_2 \mid s_i \in v_1, s_j \in v_2 \right\} \tag{3}
+ $$
+
+ This serves both to connect fragmented or nearby vertices and to reduce the overall complexity of the graph by eliminating edges. Finally, the vertices are iterated over a second time, checking whether nearby vertices fall below a distance threshold, where the distance is the minimum of the set in Equation 3, i.e., the single-linkage distance used in hierarchical clustering. If the distance falls below a predetermined threshold, the vertices are then linked through an edge.
+
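+ A minimal sketch of the two linkage passes (assuming each vertex exposes its sampled points, and that `merge` and `add_edge` are hypothetical helpers standing in for the paper's graph structure):
+
+ ```python
+ import math
+ from itertools import combinations
+
+ def complete_dist(v1, v2):
+     # Complete linkage: maximum of the pairwise set in Equation 3
+     return max(math.dist(si, sj) for si in v1.points for sj in v2.points)
+
+ def single_dist(v1, v2):
+     # Single linkage: minimum of the pairwise set in Equation 3
+     return min(math.dist(si, sj) for si in v1.points for sj in v2.points)
+
+ def contract_and_link(graph, t_merge, t_link):
+     # Pass 1: contract vertex pairs whose complete-linkage distance is small
+     for v1, v2 in combinations(list(graph.vertices), 2):
+         if (v1 in graph.vertices and v2 in graph.vertices
+                 and complete_dist(v1, v2) < t_merge):
+             graph.merge(v1, v2)        # v1 absorbs v2's points and edges
+     # Pass 2: link remaining nearby vertices with an edge
+     for v1, v2 in combinations(list(graph.vertices), 2):
+         if single_dist(v1, v2) < t_link:
+             graph.add_edge(v1, v2)
+ ```
+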
+ Algorithm 1 Detail 2: Largest Rectangle
+
+ Input: Sketch graph's vertex adjacency list
+
+ Output: Largest rectangle vertices
+
+ for all nodes $n$ in Graph do
+   corner $c_1$ = SideAndCorner($n$, Right, Down)
+   corner $c_2$ = SideAndCorner($c_1$, Down, Left)
+   corner $c_3$ = SideAndCorner($c_2$, Left, Up)
+   corner $c_4$ = SideAndCorner($c_3$, Up, Right)
+   if $n = c_4$ then
+     add all SideAndCorner $b$ sides to rectangle $q$
+   end if
+ end for
+ return largest $q$
+
+ § 3.3 STAGE 2: DETAIL RECOGNITION
+
+ All 18 ROCF details are recognized by applying a graph traversal algorithm to identify a "shape" within the graph. Each detail has an associated algorithm that is called in the hierarchical order defined by Figure 3. Every algorithm is designed to accommodate inherent graph imperfections from both the graph creation stage and the participant's hand sketch. For example, if the algorithm checks for a horizontal edge, we allow a slope between $-0.3$ and $0.3$, since participants are not expected to produce a perfectly horizontal line.
+
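+ For illustration, a tolerance-based direction check of this kind might look as follows (the $\pm 0.3$ slope band is from the text; the function itself is our own sketch, assuming screen coordinates where $y$ grows downward):
+
+ ```python
+ def edge_direction(v1, v2, slope_tol=0.3):
+     dx, dy = v2[0] - v1[0], v2[1] - v1[1]
+     if dx != 0 and abs(dy / dx) <= slope_tol:   # near-horizontal edge
+         return "Right" if dx > 0 else "Left"
+     if dy != 0 and abs(dx / dy) <= slope_tol:   # near-vertical edge
+         return "Down" if dy > 0 else "Up"
+     return "Diagonal"
+ ```
+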
+ Every algorithm was designed to strike a balance between leniency, to accommodate the imperfect nature of a hand-drawn shape, and precision, to find the expected shape if it exists. Space does not permit a thorough explanation of every detail's graph traversal algorithm, so we explain detail 2 in Algorithms 1 and 2, since it is the most sophisticated of our recognizers and best illustrates our graph traversal approach.
+
+ § 3.3.1 RECOGNIZING DETAIL 2
+
+ Detail 2's recognition algorithm, defined in Algorithms 1 and 2, is a greedy graph traversal tasked with finding the large rectangle that serves as the anchor for all other shapes in the graph. Our pathfinder finds one rectangle side $b$, and the corner $c$ at its end, at a time. For each side, dir is the intended path direction, and dirnext is the next path direction once we find our corner.
+
+ We define $N$ as a set of single agents, where every agent $n \in N$ has a start location $s_n \in G$ and a goal location $g_n \in G$. The path $p$ of $n$ consists of one side $b$ of our rectangle, where $g_n = c$. Path $p$ is a sequence of vertices of length $k$, $p = \{v_0, v_1, v_2, \ldots, v_k\}$, such that each consecutive pair of vertices either lies in a defined direction dir (up, down, left, right, or a diagonal) or is within Euclidean distance $r < 25$. At the end of our sequence, $v_k$, one of two conditions is true:
+
+ 1. $v_k$ is connected to a vertex $v_x$ such that the direction of $(v_k, v_x) =$ dirnext
+
+ 2. $v_k$ is connected to other vertices $v_m$ such that $r$ of $(v_k, v_m) < 25$.
+
+ These conditions describe that we have either (1) found a corner, characterized by the start of the next side of the rectangle, or (2) reached the end of a sequence consisting of various vertices very close together. This creates a set of "dead-end" nodes $Q$. For condition 1, $v_k \in Q$ and $c = v_k$. For any $v_m$ that satisfies condition 2, $\{v_{m1}, v_{m2}, \ldots, v_{mn}\} \subseteq Q$. We check all vertices in $Q$ and return the vertex $v_e$ that satisfies condition 1. The final sequence is $p = \{v_0, v_1, v_2, \ldots, v_e\}$, and $c = v_e$. This is repeated four times to find the four sides of our rectangle, and we return the largest such rectangle as detail 2.
+
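+ A compact sketch of this four-side chaining (a hedged illustration: `side_and_corner` is assumed to implement Algorithm 2 and return a (side, corner) pair or None, with each side given as a list of vertices; the vertex-count size proxy is ours):
+
+ ```python
+ def find_detail2(graph):
+     best, best_size = None, 0
+     for n in graph.vertices:
+         corner, sides = n, []
+         for d, d_next in (("Right", "Down"), ("Down", "Left"),
+                           ("Left", "Up"), ("Up", "Right")):
+             result = side_and_corner(graph, corner, d, d_next)  # Algorithm 2
+             if result is None:
+                 break
+             side, corner = result
+             sides.append(side)
+         else:
+             if corner == n:                  # path closed on its start: a rectangle
+                 size = sum(len(side) for side in sides)   # simple size proxy
+                 if size > best_size:
+                     best, best_size = sides, size
+     return best                              # the largest rectangle found, or None
+ ```
+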
+ § 3.3.2 EXAMPLES OF OTHER DETAILS
+
+ The rest of our graph traversal algorithms can be divided into two distinct categories: graph-crawling algorithms that identify shapes from the graph itself, and algorithms that use vertices as boundaries and then isolate all pen strokes within a specified region.
+
+ Algorithm 2 SideAndCorner
+
+ Input: Start node $n$, directions dir, dirnext
+
+ Output: Side $d$ of rectangle, corner $c$
+
+ 1: push $n$ to stack $s$
+ 2: while $s$ not empty do
+ 3:   pop $s$ to $p$, mark as visited
+ 4:   for all adjacent pairs $(p_1, p_2)$ of $p$ do
+ 5:     if direction of $(p_1, p_2)$ = dir or distance $e(p_1, p_2) < 25$ then
+ 6:       push $p_2$ to $s$
+ 7:       $p = p_2$, repeat from line 5
+ 8:     end if
+ 9:     if dead end $p_n$ reached then
+ 10:      add shortest path as a stack from $p$ to $p_n$ to set $c$
+ 11:    end if
+ 12:  end for
+ 13: end while
+ 14: for all current longest paths $a$ in $c$ do
+ 15:   for all adjacency pairs $(p_{a1}, p_{a2})$ of leaf $p_{an}$ in $a$ do
+ 16:     if $p_{a2}$ is dirnext of $p_{a1}$ then
+ 17:       return $c = p_{a1}$, $d = a$
+ 18:     else
+ 19:       pop $p_{an}$ from $a$
+ 20:       repeat from line 15
+ 21:     end if
+ 22:   end for
+ 23: end for
+
+ The former category is best for detail 2, as described previously, as well as for simple shapes and lines like details 3, 4, 5, 7, 9, 10, 15, and 16. The latter category is appropriate in cases where our graph generator may create highly variable graphs from imperfectly-drawn shapes, making it difficult to predict what the graph will look like. This is the case for details 1, 6, 11, 12, 14, and 17. In these instances we identify specific regions where we expect the detail to exist and save all edges that are found: we isolate specific regions and run a bounded depth-first search that returns all edges and vertices within the given region.
+
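+ A hedged sketch of such a bounded search (assuming an adjacency-list graph with hashable vertices; `in_region` is a caller-supplied boundary predicate, and all names are ours):
+
+ ```python
+ def bounded_dfs(graph, start, in_region):
+     # Collect every vertex and edge reachable from start without
+     # stepping outside the region of interest
+     seen, edges, stack = {start}, set(), [start]
+     while stack:
+         v = stack.pop()
+         for u in graph.neighbors(v):
+             if not in_region(u):
+                 continue                  # never cross the region boundary
+             edges.add(frozenset((v, u)))
+             if u not in seen:
+                 seen.add(u)
+                 stack.append(u)
+     return seen, edges
+ ```
+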
+ Figure 6: Comparing two point-density matrices to assess detail distortion: our sample, and a saved template. The process is repeated for all templates.
+
+ <table><tr><td>Det.</td><td>Method of Recognition</td></tr><tr><td>1</td><td>Isolate region, then DFS to fill</td></tr><tr><td>2</td><td>Greedy pathfinding, repeated per side</td></tr><tr><td>3</td><td>Dijkstra's between diagonal corners of #2</td></tr><tr><td>4</td><td>Greedy single-direction pathfinding</td></tr><tr><td>5</td><td>Greedy single-direction pathfinding</td></tr><tr><td>6</td><td>Direction path for top/bottom, Dijkstra's</td></tr><tr><td>7</td><td>Greedy single-direction pathfinding</td></tr><tr><td>8</td><td>Connect horizontal lines between #3 and #5</td></tr><tr><td>9</td><td>Find vertical, diagonal line above #2</td></tr><tr><td>10</td><td>Greedy single-direction pathfinding</td></tr><tr><td>11</td><td>Isolate triangle, then DFS to fill</td></tr><tr><td>12</td><td>Isolate lower-right region, DFS</td></tr><tr><td>13</td><td>Find upward, downward diagonals</td></tr><tr><td>14</td><td>DFS to find all edges on tip of 13</td></tr><tr><td>15</td><td>Greedy single-direction pathfinding</td></tr><tr><td>16</td><td>Greedy single-direction pathfinding</td></tr><tr><td>17</td><td>Isolate region, then DFS to fill</td></tr><tr><td>18</td><td>#2's technique, then single diagonal</td></tr></table>
+
+ Table 1: General recognition method types for all 18 details. DFS is the depth-first search pathfinding algorithm; Dijkstra's is Dijkstra's shortest-path algorithm.
+
+ For detail 6, for example, our "region" is defined by the area inside the detail 2 rectangle and the detail 3 cross, and we do not include the detail 4 horizontal line or the detail 7 small segment in our DFS search. This returns the remaining subset of vertices and edges, as seen in the bottom portion of Figure 4. Table 1 describes the general method of recognition we applied to detect all 18 details in an ROCF, and the order of recognition is tiered as shown in Fig. 3.
+
+ The "region finding" category allows us to isolate some details, but if the shape is poorly drawn or missing entirely, then isolating regions alone cannot confirm shape neatness. This motivated the implementation of our third processing stage, which grades the isolated shape for correctness.
+
+ § 3.4 STAGE 3: DETAIL VALIDATION
+
+ Stage 3 compares only the isolated sample of the recognized detail to a set of template details to score the sample's quality. The system begins by centering, scaling, and resampling the isolated sample points so that they lie in a $[-1, 1]$ range on the $x$ and $y$ plane. We then create a map of some provided resolution $n$ such that a detail is represented as an $n \times n$ matrix, where space in the figure is mapped to cells of the matrix, each cell covering some range $[x_i, x_j], [y_i, y_j]$. Each cell is then assigned a point-density value based on the number of points that lie within its range. A visualization of this is shown in Figure 6. The same process is applied to each of the templates, and the template matrices are then averaged cell-wise to form a template mapping. The template matrix $Q$ is compared to the sampled detail matrix $P$ with Equation 4 to determine how closely the two match. The best match is then found by shifting the sample matrix by row and column to find the best possible position relative to the template, given that some details will match in stroke and dimension but have somewhat different centers relative to the template.
+
+ $$
+ \sum_{i=1}^{n} \sum_{j=1}^{n} \max\left(0,\; Q_{i,j} - P_{i,j}\right) \tag{4}
+ $$
+
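+ A minimal sketch of the point-density mapping (resolution $n = 16$ is an illustrative choice, not the paper's; normalizing cells to point fractions is our assumption so that the Equation 4 sum stays within $[0, 1]$):
+
+ ```python
+ import numpy as np
+
+ def density_matrix(points, n=16):
+     # Map normalized points in [-1, 1]^2 onto an n-by-n grid; each cell
+     # holds the fraction of sample points that fall inside it
+     m = np.zeros((n, n))
+     for x, y in points:
+         col = min(int((x + 1) / 2 * n), n - 1)
+         row = min(int((y + 1) / 2 * n), n - 1)
+         m[row, col] += 1
+     return m / max(len(points), 1)
+ ```
+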
+ <table><tr><td>Det. #</td><td>$\Delta_{a,g1}$</td><td>$\Delta_{a,g2}$</td><td>$\Delta_{g1,g2}$</td><td>F1-Score</td><td>Det. #</td><td>$\Delta_{a,g1}$</td><td>$\Delta_{a,g2}$</td><td>$\Delta_{g1,g2}$</td><td>F1-Score</td></tr><tr><td>1</td><td>0.69</td><td>0.68</td><td>0.37</td><td>0.787</td><td>10</td><td>0.21</td><td>0.20</td><td>0.10</td><td>0.962</td></tr><tr><td>2</td><td>0.18</td><td>0.44</td><td>0.33</td><td>0.857</td><td>11</td><td>0.47</td><td>0.45</td><td>0.21</td><td>0.878</td></tr><tr><td>3</td><td>0.07</td><td>0.11</td><td>0.16</td><td>0.978</td><td>12</td><td>0.46</td><td>0.44</td><td>0.11</td><td>0.872</td></tr><tr><td>4</td><td>0.27</td><td>0.30</td><td>0.12</td><td>0.927</td><td>13</td><td>0.13</td><td>0.11</td><td>0.11</td><td>0.966</td></tr><tr><td>5</td><td>0.08</td><td>0.49</td><td>0.49</td><td>0.978</td><td>14</td><td>0.42</td><td>0.41</td><td>0.15</td><td>0.881</td></tr><tr><td>6</td><td>0.31</td><td>0.56</td><td>0.26</td><td>0.958</td><td>15</td><td>0.50</td><td>0.49</td><td>0.09</td><td>0.770</td></tr><tr><td>7</td><td>0.47</td><td>0.26</td><td>0.13</td><td>0.855</td><td>16</td><td>0.21</td><td>0.19</td><td>0.08</td><td>0.919</td></tr><tr><td>8</td><td>0.28</td><td>0.33</td><td>0.13</td><td>0.966</td><td>17</td><td>0.57</td><td>0.66</td><td>0.34</td><td>0.788</td></tr><tr><td>9</td><td>0.35</td><td>0.20</td><td>0.07</td><td>0.904</td><td>18</td><td>0.45</td><td>0.49</td><td>0.21</td><td>0.925</td></tr></table>
+
+ Table 2: Classification results and average scoring differences for each detail across all graded tests. n = 141 for all details except Detail 2, where n = 185. $\Delta_{a,g1}$ denotes the average point-score difference between Auto Rey-O and Grader 1, $\Delta_{a,g2}$ the difference between Auto Rey-O and Grader 2, and $\Delta_{g1,g2}$ that between Grader 1 and Grader 2.
+
+ The "distance" value between template and sample lies between 0 and 1, with 0 being the best; full credit for neatness is determined by how close the sample is to the templates. Any value below 0.5 earns full credit of 2 points, a value between 0.5 and 0.9 earns partial credit of 1 point, and a value above 0.9 earns 0 points.
+
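+ Continuing the NumPy sketch above, the shifted comparison and the credit thresholds from the text might be implemented as follows (the shift window is our assumption, and `np.roll` wraps cells around rather than zero-padding, which is a simplification of a pure translation):
+
+ ```python
+ def match_distance(Q, P, max_shift=4):
+     # Equation 4 evaluated over a window of row/column shifts of the sample
+     best = float("inf")
+     for dr in range(-max_shift, max_shift + 1):
+         for dc in range(-max_shift, max_shift + 1):
+             shifted = np.roll(np.roll(P, dr, axis=0), dc, axis=1)
+             best = min(best, np.maximum(0, Q - shifted).sum())
+     return best                      # in [0, 1] for normalized matrices
+
+ def neatness_points(distance):
+     # Thresholds from the text: < 0.5 -> 2 points, 0.5-0.9 -> 1, else 0
+     if distance < 0.5:
+         return 2
+     if distance <= 0.9:
+         return 1
+     return 0
+ ```
+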
+ § 4 DATA COLLECTION AND RESULTS
+
+ We conducted a study in which 68 cognitively healthy participants, between the ages of 19 and 32, completed a Rey-Osterrieth Complex Figure Test. Although this test is meant to assess constructional ability and memory loss, healthy participants do not always score full marks on an ROCF [19], and indeed our testing corpus reflects a wide range of scores that conform to established normative data for our participants. All participants took the test in a simulated neuropsychologist's test environment and completed all three conditions (Copy, Recall, Delayed Recall). Participants were given a Neo SmartPen N2 and completed the tests on pre-printed "blank" canvas pages that tracked the pen's location and instantaneously digitized all stroke data, allowing a more authentic testing experience, since the ROCF is typically administered via pen and paper.
+
+ A total of 204 sketches from the 68 participants were collected. Of these, 5 perfect-score tests were set aside to be used as templates for Stage 3 validation, and 14 were not gradable or had corrupted sketch data, bringing the total graded to 185. All tests were also graded by two field experts whose grades we consider "ground truth" in this context. The first grader is a practicing clinical neuropsychologist, and the second is a professor specializing in cognitive and visual perceptual rehabilitation in older adults. We measure our system's success in two ways: the F1-score of our recognition algorithm for each detail, and the comparison between our system's total grade and the expert graders' total grades. For the latter, both our system's grades and the experts' grades are on the 36-point scale as defined in Section 1.1.
+
+ A key factor considered when calculating the F1-score was the subjectivity of distortion thresholds. While we implemented our own thresholds for distortion in Stage 3, instructions for the ROCF in the literature leave the definition of "distortion" to the discretion of the grader [46]. For recognition purposes, we are interested in gauging whether our system can successfully either find a detail or confirm its absence; the F1-score reports our system's ability to recognize the existence of a detail. Since we are still interested in comparing 36-point grades that also integrate distortion as partial credit, we also calculate Spearman's rank coefficient ($\rho = 0.767$) between our automatically-graded tests and those of our expert grader.
+
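+ For reference, the two measures can be computed along these lines (a sketch using scikit-learn and SciPy; the input arrays are placeholders for per-sketch detail labels and 36-point totals, not the study's data):
+
+ ```python
+ from sklearn.metrics import f1_score
+ from scipy.stats import spearmanr
+
+ def evaluate(detail_true, detail_pred, auto_grades, expert_grades):
+     # detail_true/detail_pred: binary per-sketch flags for one detail
+     f1 = f1_score(detail_true, detail_pred)
+     # auto_grades/expert_grades: 36-point totals for the graded tests
+     rho, p_value = spearmanr(auto_grades, expert_grades)
+     return f1, rho
+ ```
+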
+ § 5 DISCUSSION AND LIMITATIONS
+
+ § 5.1 RESULTS DISCUSSION
+
+ In clinical neuropsychology, grading Rey-Osterrieth Complex Figure tests has been the subject of constant iteration and remains an active research topic, with numerous methods of interpretation being proposed and refined. As such, analyzing the process of grading ROCFs automatically is not a trivial subject. Our analysis centered on simulating the perception of a detail, since granular differences in distortion are frequently attributed to grader subjectivity. Our aim, then, was to provide evidence that the system perceived the details correctly even if they were slightly distorted, and, in the case of severe distortion, that the Stage 3 validation stage would be able to separate out those clear cases. In terms of overall score comparisons, we sought to analyze how far our system's individual test grades were from those of the expert graders. Although the grades from the individual graders were closer to each other than to those of Auto Rey-O, our system compares favorably due to the high F1-score of the vast majority of details, with the average difference between Auto Rey-O and the expert graders being around 3 points out of the 36 possible points for an ROCF test. We believe these results are significant in light of the fact that a fully automated ROCF grader covering all 18 details had not previously been proposed.
+
+ Also of note is the performance of detail 2, the large rectangle that serves as the anchor for the rest of the sketch. The organizational strategy score of the Rey-Osterrieth Complex Figure test places the highest priority on the existence of detail 2 in a sketch due to its importance to the overall figure structure [43]. For the purposes of our system, this meant calculating the F1-score for recognition of the 18 details conditional on whether detail 2 could be successfully recognized within a sketch. Exceptionally poor figures that lack a discernible detail 2 almost always receive very low scores, or go ungraded, when hand-graded. Similarly, in very rare cases a poorly drawn ROCF could be graded by Auto Rey-O if its detail 2 could be recognized, while another ROCF that would score higher might not be graded because its detail 2 could not be recognized. For this reason, we have designed our recognition hierarchy such that a test is not graded if detail 2 cannot be automatically recognized. This provides the most consistent application of grading requirements while remaining consistent with the grading rubric as presented in the Compendium of Neuropsychological Examinations [46].
+
+ A total score of 0, however, is not necessarily due to a true negative. For 44 sketches, our algorithm was unable to find detail 2 due to a sloppy or unconnected drawing, even though other details existed. If we flatly calculated the F1-score of all details for every sketch, including the ungraded ones, this would assign incorrect false negatives to the rest of the details. For that reason we tier the F1-score calculations: for detail 2 we calculate it over all sketches (n = 185), and for all other details we calculate it over the sketches where detail 2 was correctly detected (n = 141).
+
+ Figure 7: Grade plots for all 36-point scores, compared between Auto Rey-O and expert graders (n=141). (a) $p = 0.799$, (b) $p = 0.829$, (c) $p = 0.948$.
+
+ The F1-score results in Table 2 and the graph in Figure 7 demonstrate the effectiveness of Auto Rey-O. Our top-down system correctly identifies and validates the details with an F1-score high enough to show the system working for typical test-takers. Table 2 also shows the average differences in scores (in points, out of a maximum of 2) assigned by our Auto Rey-O system and by our expert graders. All of the average score differences for each detail are well below 1 point, and the vast majority are below half a point, indicating a marginal difference in scoring between the expert graders and our Auto Rey-O system. The system also works successfully for ROCFs with higher amounts of distortion; such instances display the flexibility of the system in identifying present details even if the participant has heavy lapses in memory.
+
+ The lowest-performing details are 1, 15, and 17. These had the highest number of false negatives, although our manual review of these false negatives showed that our system did recognize the details but granted a score of 0 due to our threshold for distortion. Further refinement of the distortion threshold values for these details would improve their recognition quality.
+
+ Figure 7 compares the scores assigned by our automated grader and the two expert graders. Between the two expert graders, the correlation was $p = 0.948$, with a Spearman's rank coefficient of $\rho = 0.942$ and an average difference in scores of $\Delta_{g_1,g_2} = 1.68$. Between our system and grader 1, $p = 0.799$, $\rho = 0.765$, and the average $\Delta_{auto,g_1} = 3.21$. Between our system and grader 2, $p = 0.829$, $\rho = 0.802$, and the average $\Delta_{auto,g_2} = 2.78$. Our automated system produced grades with a generally high correlation with those of the graders, although the experts' grades were more similar to each other. In all three cases, low-scoring tests deviate somewhat across all graders, even between the expert graders; this is likely due to the aforementioned ambiguity in interpreting detail distortion. Our automated grader can also be observed to be consistently strict, producing consistently lower scores, which is partially attributable to the fact that it does not recognize details placed in the wrong location. In addition, at the suggestion of the expert graders, who also served as domain experts, we chose to prioritize consistency over leniency when deciding on partial credit thresholds, since consistency is one of the key advantages of an automated recognition system.
+
+ § 5.2 LIMITATIONS
+
+ The main limitation of this graph-based approach to top-down sketch recognition is its reliance on line connections. Our vertex-contraction algorithm in Step 1 of the system's process does connect lines with corners within a certain radius, and we found this technique worked very well when sketches were drawn with reasonable neatness. If lines are disconnected by more than half an inch, however, they remain disconnected. This was a conscious design choice, since vertex contraction cannot be too aggressive; otherwise, regions where any correct sketch would have high numbers of vertices would all be incorrectly contracted into one. This is the case for the area where details 6, 7, 3, and 8 all converge: even neatly-drawn sketches have a high concentration of vertices here. We intend to refine the recognition system to "jump" gaps and close disconnected lines only where appropriate.
+
+ § 6 FUTURE WORK AND CONCLUSION
+
+ Refinements can be made to help recognize specific kinds of poorly-drawn details. As previously mentioned, most sources of grading inaccuracy for our system came from poorly connected graphs due to sketch sloppiness. For healthy participants taking this test, our expert graders attributed sloppiness to a lack of effort rather than genuine memory loss, provided the patient has no hand-motor issues. Still, there would be value in supplementing our graph traversal with a mechanism for connecting otherwise unconnected vertices to improve recognition performance.
+
+ Additionally, improvements to our Stage 3 validation approach could be made to recognize finer details. Our validation method sometimes may not properly distinguish small changes, such as an extra stray mark or a single missing line. Identifying missing lines is important for details 8 and 12, where the number of parallel lines drawn is relevant to grading. Our validation method finds these discrepancies somewhat frequently, but potential for improvement exists.
+
+ Lastly, we aim to work with clinical neuropsychologists to administer the test to willing clients to evaluate system usability in a clinical setting. This would produce additional sketch data from actual patients and would allow us to perform UI/UX usability studies with clinicians. The ultimate aim of the system is to aid the diagnosis process by automating the grading of an ROCF, so evaluating the user experience of clinicians as they collect digital data and use the Auto Rey-O application themselves is the next step for this project.
+
+ Our Auto Rey-O automatic Rey-Osterrieth Complex Figure test grader demonstrates the validity of a top-down sketch recognition approach using graph traversal algorithms. This significantly simplifies the recognition process compared to a bottom-up approach, which would need to consider a prohibitively wide array of possible shape interpretations and re-interpretations. By employing graph crawling, classical vertex search, and optimization algorithms, we are able to identify the key sub-shapes of complex geometric figures.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/sJPz-4Rwghv/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,431 @@
+ # "Not Our Target Users": UX Professionals' Perceptions of Designing for Older Adults
+
+ ## Abstract
+
+ In this paper, we revisit Jonathan Lazar's early work [24] on understanding designers' perceptions of accessibility for people with disabilities, and follow the same approach to instead contribute similar insights into the current state of designing websites and web applications for seniors. For this, we present a survey investigating how design professionals consider digital accessibility and usability for the ageing population in the UX practice. The survey probed on awareness and application of usability principles for older adults and challenges that hinder the design of senior-friendly products. Findings reveal that many respondents did not incorporate senior-focused usability practices in their work, nor were they familiar with design principles specific to older users. Lack of awareness and knowledge regarding the accessibility and usability needs of older adults were stated to be the main barriers to senior-friendly design. The study identifies several other challenges facing UX professionals when designing for seniors and provides directions for future research.
+
+ Keywords: Older adults, Inclusive design, UX professionals, User interface design, Senior-friendly design guidelines
+
+ ## 1 INTRODUCTION
+
+ In recent years, with the push towards a more inclusive society, there has been an increasing demand for the consideration of diverse user profiles in the design of digital products and services. One such user profile is that of 'older adults,' an important and expanding group of internet users who are typically underrepresented in technology design. According to the United Nations [42], the global population aged 65 years or over is growing faster than all other age groups. With the unparalleled growth of the ageing population, the number of older adults using online technologies also continues to increase. Internet use among seniors in the United States more than doubled, from 35% to 75%, between 2007 and 2021 [35], and similar trends are occurring all around the developed world [14],[32],[39].
+
+ Despite their increasing technology adoption, older adults still struggle to use many online services due to various factors associated with ageing. As people age, they experience limitations in their functional abilities and are gradually afflicted with difficulties in vision, hearing, cognition, and mobility [10],[26]. Generally, the user interfaces of many online products do not take these changing abilities into account, nor do they address the specific design needs of older adults [2],[5],[15],[29]. This compromises their usability and causes increased frustration for senior users, resulting in a lack of self-confidence and motivation to continue using the technology [43]. Older adults' mental models of user interfaces are also often different compared to those of (younger) designers who design these products and services [1]. This puts seniors at a major disadvantage in the digital age as they are unable to access the same services as their younger counterparts. Previous studies confirm that a more accessible web can be instrumental in enabling older adults to maintain an active, community-based lifestyle [8],[9],[18]. Therefore, to facilitate their ageing independently and to provide them equal access to information, technology designers should take into account the needs of the ageing population and ensure the products they design are "senior-friendly," i.e. easy for seniors to use without any additional help.
+
+ Usability for older adults has been a growing topic of interest and relevance for the human-computer interaction (HCI) community over the years. Studies have provided literature reviews [6], conducted expert reviews of websites [7], and discussed methodologies of user-centered design through participatory design [12] or usability testing [30] with seniors. Several design guidelines [23] and heuristics [26] have also been published to assist in improving the usability of interfaces for older adults. However, there remains a lack of research on how user experience (UX) professionals in the industry approach this topic. With the large number of online services that lack usability for older adults [2],[5],[15],[29], it is important to assess where the design community currently stands in terms of senior-focused design practices. This also raises the need to identify any barriers UX professionals might be facing that are inhibiting the design of user-friendly interfaces for older adults. Prior research has extensively investigated such barriers to (and attitudes toward) the widespread use of accessibility design guidelines, including seminal work on which we methodologically ground our own [24]. However, accessibility guidelines may not provide comprehensive support when designing for older adults [27],[38]. In fact, there are also strong arguments against equating ageing with accessibility (in design and elsewhere) [21],[28],[36].
+
+ Therefore, to fill this gap, we conducted an online survey with the participation of 130 professionals working in UX design from various industries. The aim of this study was to:
+
+ 1. investigate the level of understanding and awareness UX professionals have about accessibility and usability for seniors,
+
+ 2. understand how UX professionals incorporate accessibility and usability for seniors in their design projects, and
+
+ 3. uncover the motivations for, barriers to, and challenges UX professionals face when ensuring senior-focused design usability.
+
+ This paper presents the findings of the study in detail and provides directions for future research. As a note on terminology, the terms 'older adults' and 'seniors' have been interchangeably used in this paper. While the term 'older adult' is more commonly used in HCI literature, the more prevalent term in our own sociocultural context is that of 'senior' (as indicated by government surveys in our geographical location, in a large urban center in Canada).
+
+ This study makes a major contribution to research on UX professionals by providing insight into their current state of awareness and application of design methodologies for older adults. This is noteworthy because, while accessibility practices for people with disabilities have been widely studied, there has been limited focus on professionals' expertise and experience with designing specifically for seniors. Through this research, we bring to the surface evidence about several reasons why senior-friendliness is not a focus for UX professionals, while fostering a reflection on the transfer of research-based recommendations to the professional environment. The results provide insights into the current resources and attitudes designers have with regard to designing for seniors, in comparison to previous studies which have focused on different, yet related, domains (e.g. accessibility).
+
+ While we were inspired by prior work on understanding the challenges designers face when designing for accessibility, our work is not about accessibility. Instead, our study only draws methodologically from Jonathan Lazar's prior work on perceptions of accessibility [24]. We adapt Lazar's approach and extend the scope of their methods to studying the challenges designers have with respect to usability for seniors. Our findings reveal that similar awareness (and work) is now needed in the field of senior-friendly design as was needed for accessibility at the time of Lazar et al.'s seminal paper, and through this, we hope that the results of our study will inspire a culture and policy shift in terms of including older adults in design, as Lazar's paper did for accessible design.
+
+ ## 2 BACKGROUND AND RELATED WORK
+
+ This section provides the theoretical background of the study and situates our work in literature. We begin with an overview of how digital accessibility and usability are relevant to designing for seniors, followed by a description of various design principles and usability methods available to assist UX professionals in the creation of senior-friendly products. We conclude this section with a discussion of how lack of specific support resources within industry may result in designers not being aware or knowledgeable of ways to make their products inclusive - this was revealed by Lazar [24] in their seminal work related to designing for accessibility, which we now aim to replicate with respect to designing for older adults.
+
+ ### 2.1 Digital Accessibility and Seniors
+
+ 'Digital accessibility' and 'usability' are two different concepts that are closely related in the context of crafting technologies that work for everyone. Digital accessibility primarily focuses on people with disabilities and ensures that technologies are designed and developed in a way that everyone can use them, regardless of disability type or severity of impairment [44]. This includes auditory, cognitive, neurological, physical, speech, and visual impairments that may affect people's access to, or interaction with online products and services. While digital accessibility predominantly serves people with disabilities, it also benefits people without disabilities, like older adults who face gradual limitation of functional abilities due to ageing [47]. This is because the needs of older adults with changing abilities can be considered to overlap with the accessibility needs of people with disabilities to some extent. For example, one of the accessibility principles focuses on allowing users to incrementally change the size of the text in user interfaces. Although this principle is targeted at people with disabilities, senior users requiring larger text in interfaces due to declining vision can also gain from its implementation. Older adults can therefore be assumed to be beneficiaries of accessible design, which makes it an important consideration for UX professionals when designing digital products.
+
+ In many countries, accessibility of digital designs is now legislated, and it follows from widely-used industry standards such as the Web Content Accessibility Guidelines (WCAG), published by the World Wide Web Consortium (W3C) Web Accessibility Initiative (WAI) [45]. These guidelines have become the benchmark for creating and evaluating accessible interfaces, and have been set as the minimum requirement in the digital accessibility policy of many countries worldwide [37]. While WCAG has been primarily developed for websites, the success criteria for these guidelines are not technology-specific and, therefore, they apply to all kinds of user interfaces. It is important to note that these guidelines are highly technical and require expert knowledge of web technologies for their comprehension and application [22]. However, there are a variety of software tools available that complement these guidelines and can help professionals determine if their design meets the accessibility standards [46].
+
+ Usability, on the other hand, refers to the general intuitiveness and ease of use of user interfaces. Usability for seniors ensures that digital products can be used by older adults to achieve their goals in an effective, efficient, and satisfactory manner, and the level of usability is determined by how well the features of the user interface accommodate senior users' needs and contexts [3]. While various projects (such as the Web Accessibility Initiative: Ageing Education and Harmonization (WAI-AGE) [47]) have suggested that designing for accessibility removes some barriers for older adults as well, this often only covers the most basic aspects of how older adults engage with digital designs, leading them to be unsure of their ability and unmotivated to continue trying new technologies [43]. Additionally, significant research within HCI highlights the dangers of conflating ageing with accessibility [21] [28] - which we took significant care to avoid in our own research, especially as we are drawing methodologically from prior research on accessibility.
+
+ ### 2.2 Senior-friendly Design Guidelines
+
+ Previous studies show that usability is one of the most important factors affecting older adults' adoption of technology [25]. For example, in a study examining the usage of electronic personal health records [40], it was found that while seniors considered these systems to be valuable, the prevalence of usability problems, such as complex navigation systems and highly technical language, made them challenging for older adults to use. Lack of usability can also result in increased frustration for older adults. Therefore, in order to avoid usability challenges for older adults, it is important for UX professionals to adhere to a user-centered design approach and consider the needs and pain points of seniors in the design and evaluation of technologies.
+
+ Design principles explicitly targeted at the needs of older adults have been established to ensure the usability of interfaces for seniors. Based on various research conducted on ageing, the National Institute on Aging (NIA) and the National Library of Medicine (NLM) in the United States published "Making Your Web Site Senior Friendly: A Checklist," consisting of design guidelines that are very specific to older adults [31]. Examples of guidelines from the checklist include providing clear instructions, avoiding jargon, making it easy for users to enlarge text, reducing scrolling, and using high-contrast color combinations. Similarly, Kurniawan and Zaphiris [23] presented a set of "research-derived ageing-centered web design guidelines" for older adults in 2005, which include providing larger targets, having clear navigation, using color and graphics minimally, and reducing demand on the users' memory. In 2013, Lynch, Schwerha, and Johanson [26] developed a weighted heuristic for evaluating the usability of user interfaces for older adults, which included a list of 32 characteristics representing the most important senior-friendly design recommendations. The Nielsen Norman Group also released their third edition of "UX Design for Seniors (Ages 65 and older)" in 2019, which is a commercially available report outlining design guidelines for particular tasks and web components to support usability for seniors [33]. Although these guidelines have been widely used and referenced in academia, there is limited research on how much of these recommendations are transferred to the professional environment and how UX professionals incorporate them into their design practice.
+
+ ### 2.3 Involving Seniors in the Design Process
+
+ While following both accessibility guidelines and usability principles are important, they are not sufficient to guide designers toward senior-friendly design. To test the effectiveness of these guidelines and to ensure all needs and pain points of older adults are taken into consideration, senior users should be directly involved in the design process through various usability methods. Yesilada et al. [48] found that designers believe accessibility evaluation should be grounded on user-centered design practices, as opposed to just inspecting source codes, in order to obtain more reliable and valid results. This sentiment was also shared by Hart, Chaparro, and Halcomb [17], who suggested using a combination of design guidelines and usability testing when designing websites for older adults, as well as Milne et al. [27], who recommended designers go beyond WCAG and get firsthand interaction with users to ensure their needs are met.
+
+ ### 2.4 Understanding Designers' Attitudes and Barriers toward Inclusive Design
+
+ Design professionals from various interdisciplinary backgrounds participate in the design and development of online products and services, and therefore, their perceptions and practices of accessibility have been an important topic investigated by several research projects. Five relevant surveys conducted with these professionals are summarized below:
+
+ Lazar, Dudley-Sponaugle, and Greenidge (2004) surveyed 175 webmasters of government and commercial organizations to investigate their knowledge of web accessibility [24]. Most of the participants (74%) reported that they were familiar with government laws on web accessibility, and many (79%) were familiar with automated software tools used for accessibility evaluation. These results indicate that a lack of knowledge or awareness is not the prime reason behind the shortage of accessible interfaces. However, it is also notable that almost one-fourth of the respondents (23%) did not know about web accessibility guidelines at all. Participants cited lack of time, training, managerial and client support, as well as lack of software tools, and confusing accessibility guidelines as the main barriers to web accessibility. They also mentioned concerns regarding maintaining a balance between accessibility and good graphic design, which appears to stem from the misconception that an accessible website may downgrade the experience for visual users [11]. Concerning motivation, participants indicated that the primary reasons for making their websites accessible would be requirements imposed by the government, use of the websites by people with disabilities, external funding, requirements from management or clients, training on accessibility, and access to better accessibility tools.
+
+ A similar survey was conducted by ENABLED Group (2005) with 269 subjects, which included webmasters, managers, and content editors [13]. Only 36% of the participants responded that they try to make their websites accessible, and very few (13%) had received training on accessibility. The primary reasons behind this were indicated to be a lack of knowledge of web accessibility guidelines, lack of technical knowledge, and time constraints. Nonetheless, many participants (74%) expressed interest in attending training sessions to learn more about accessibility, with the preferred topics being web accessibility guidelines, usability, and accessibility evaluation.
+
+ Freire, Russo, and Fortes (2008) surveyed 613 professionals in Brazil from diverse backgrounds (academia, industry, and government), who took part in web development projects [16]. The findings showed that only 20% of the participants considered accessibility as critical to their projects. Lack of training on accessibility and lack of knowledge about the Brazilian accessibility law were stated to be the primary reasons behind accessibility not being a priority among participants.
+
+ Modelling the studies mentioned above, Inal, Rızvanoğlu, and Yesilada (2019) surveyed 113 UX professionals in Turkey regarding their awareness and practice of web accessibility [19]. While most participants (71%) indicated that they had received training on web accessibility, many (69%) still did not consider accessibility in their projects. Moreover, only 17% of the participants reported working directly with people with disabilities for their projects and accessibility evaluations. A similar survey was conducted by Inal et al. (2020) with the participation of 167 UX professionals from Nordic countries [20]. Results show that while digital accessibility was considered to be important by the respondents, they had limited knowledge about accessibility guidelines and standards. Most of the organizations represented in this study included accessibility in their projects; however, the time these organizations spent on accessibility issues was reported to be very limited. The main challenges participants faced in creating accessible systems were lack of training, and time and budget constraints.
+
+ In summary, the studies conducted by the ENABLED Group [13] and Freire et al. [16] confirm that a lack of awareness of accessibility laws and a lack of training on web accessibility can largely hinder the development of accessible interfaces. On the other hand, the studies conducted by Lazar et al. [24] and Inal et al. [19] show that awareness or knowledge of web accessibility does not automatically lead to the development of accessible interfaces. Although design professionals are aware of the needs of people with disabilities, they generally still do not take these needs into consideration.
+
+ The above-mentioned studies are mostly centered around accessibility for people with disabilities, which is different from that for seniors, as discussed earlier. While WCAG guidelines can be applicable to older people experiencing age-related impairments [47], merely following accessibility guidelines does not necessarily lead to the design being usable, nor do they help overcome the particular challenges facing older adults [27],[38]. There is still much work to be done in ensuring usability for seniors, as can be understood from the results of numerous previous studies [2],[5],[15],[29] which revealed how websites or apps are lacking in this regard. It has also been identified that there appears to be little awareness among designers of the specific requirements of older people compared to their knowledge of WCAG [38]. As a result, they are not considering the particular needs of a growing audience when designing user interfaces.
+
+ ## 3 Study Rationale and Methods
+
+ Based on the surveyed literature, we claim that designing digital applications for older adults today faces challenges similar to those that designing for accessibility faced more than two decades ago. As such, it is imperative to find the reasons behind the lack of senior-friendly interfaces, and to fill this gap, research concerning the perceptions of UX professionals in considering accessibility and usability for older adults is needed. In this vein, we are inspired by Lazar et al.'s [24] landmark research on web accessibility, and we draw methodologically from that seminal research, which exposed gaps in the design process with respect to accessibility.
+
+ Aiming to extend Lazar et al.'s [24] work on digital accessibility (by extending it to usability for seniors), we methodologically followed their protocol and adapted it to the emerging context of inclusive design for older adults. As such, we employed a quantitative survey-based methodology for this study. The survey was administered online using SurveyGizmo with 130 respondents. It was deployed in 2019, with the bulk of data collection occurring throughout 2020 (with several interruptions due to the COVID-19 pandemic's effect on the availability of research staff; however, we do not consider this extended recruitment period to have had any influence on the quality of survey responses, since no time-sensitive information was collected).
+
+ ### 3.1 Research Questions
+
+ Through our research, we attempt to address the identified gaps in the literature by focusing on three main research questions:
+
+ RQ1: What is the level of understanding and awareness UX professionals have about accessibility and usability for seniors?
+
+ RQ2: How do UX professionals incorporate accessibility and usability for seniors in their design projects?
+
+ RQ3: What are the motivations for and challenges of ensuring usability for seniors by UX professionals?
+
+ ### 3.2 Questionnaire Design
+
+ The questionnaire was derived from priorly validated research instruments on digital accessibility awareness and practices (see 3.2.1). We opted for this approach due to our assumption that designing for seniors may be at the same stage of awareness and practice as designing for accessibility was when Lazar et al. [24] conducted their seminal research on this topic. Additionally, using a priorly validated instrument (questionnaire) that was used in a similar domain facilitated the collection of more robust data which may not have been possible with an instrument developed from the ground up.
+
+ The questionnaire was subject to two rounds of pilot testing. The first round was conducted with two participants from academia and one participant from the industry. Questions were revised to address issues of clarity and ambiguity that emerged from the pilot. For the second round, the questionnaire was deployed online via SurveyGizmo, and was validated by two participants from academia. The final questionnaire was comprised of 32 questions, both open-ended and closed-ended, grouped into four sections:
+
+ 1. Personal Information included eight questions to obtain demographic information, such as geographic location, educational background, and work experience;
+
+ 2. General Understanding and Awareness included nine questions pertaining to RQ1, to determine knowledge of how seniors use the web, and awareness of assistive technologies, digital accessibility legislation, senior-friendly design guidelines, and tools;
+
+ 3. Practical Experience included ten questions pertaining to RQ2, to identify consideration of accessibility and usability for seniors in the UX practice, and the use of various research methods and evaluation techniques;
+
+ 4. Motivations and Challenges included five questions pertaining to RQ3, to understand challenges, and personal and organizational interests in supporting usability for seniors.
+
+ We clarify here that questions in the General Awareness and Practical Experiences groups were designed to compare 'general accessibility' practices with 'usability for seniors' practices. Questions in Motivations and Challenges focused only on 'usability for seniors'. Since our focus was on attitudes towards designing for seniors in general, we did not hypothesize anything specific about accessibility and usability. As such, the Results section is presented from the responses that emerged from these questions, and not from a preconceived structure.
+
+ The questionnaire was preceded by a consent form that outlined the purpose of the study, explained the rights of the participants, and assured them of complete anonymity. Following the consent form, participants were taken to a separate web page where they were presented with the questionnaire.
+
+ #### 3.2.1 Grounding of Questionnaire Design in Prior Work
+
+ Most questions were closely informed by previously developed and validated surveys, such as Lazar et al. [24] and Freire et al. [16], and extended to inquire about "designing for seniors", as opposed to "designing for people with disabilities". A breakdown of the survey questions by the source is provided in Table 1. The survey instruments are entirely available as supplementary materials included with the submission of this paper. The questions we included from Lazar's and Freire's instruments were selected based on how applicable these were to the process of considering various resources (e.g. guidelines) in making designs inclusive to a specific user group that was typically excluded from design considerations. This has allowed us to easily and objectively adapt their instruments (which were focused on accessibility) to our own domain - designing for older adults.
+
+ Table 1: Number of survey questions by source
+
+ <table><tr><td>Survey Section</td><td>No. of questions derived from Lazar et al. [24]</td><td>No. of questions derived from Freire et al. [16]</td><td>No. of questions added by authors</td></tr><tr><td>General Understanding and Awareness</td><td>3</td><td>4</td><td>2</td></tr><tr><td>Practical Experiences</td><td>2</td><td>2</td><td>6</td></tr><tr><td>Motivations and Challenges</td><td>3</td><td>1</td><td>1</td></tr></table>
+
+ The extra questions we added were also extended versions of questions from Lazar et al. [24] and Freire et al. [16]. These questions were included to help gather data on senior-friendly design practices, which was not addressed in their studies. For example, Lazar's question: "Are you familiar with any of the following accessibility guidelines from the Web Accessibility Initiative?" was extended to "Are you familiar with the senior-friendly design guidelines published by the National Institute on Aging and the National Library of Medicine?", while the question: "What do you think is the biggest challenge of making a website accessible for users with visual impairments?" was modified to "What do you think are the challenges of making websites or apps senior-friendly?". Questions regarding visual impairments were asked to examine general accessibility practices (similar to Lazar's), which can be a part of the UX practitioner's design process when designing for seniors. For example, the question: "Have you ever created a website that is accessible for users with visual impairments?" was extended to "Have you ever created a website or app that is accessible for seniors?". Similarly, as an example, Freire's survey question asking the respondent to describe "Awareness of problems faced by blind people using the Internet" was adapted to our domain as "Describe your understanding of how seniors use websites". A complete description of our survey, including how questions were derived from the instruments used in Lazar's and Freire's research and adapted to our domain (older adults), is included in the supplementary materials submitted together with this paper.
112
+
113
+ Deriving our questionnaire from previously validated instruments required us to compress some questions and to inquire about accessibility practices at a coarser level of granularity. As such, while questions about familiarity with specific accessibility tools were not included, participants could type in the tools they were familiar with. However, given their relevance to designing for older adults [15], we retained the questions regarding visual impairments discussed above. Questions regarding assistive technologies were adopted from Freire et al. [16] with minor modifications, and participants were asked to choose from a broad range of answers; the same set of options was also used by Inal et al. [19]. Likewise, the question about ethical considerations was not removed entirely; it was included as part of the motivation-related questions.
114
+
115
+ #### 3.2.2 Definition of 'Seniors'
116
+
117
+ Studies in the literature vary in their definition of 'seniors' and the age range they belong to. Generally, the age range defined for 'seniors' is either over 60 or over 65. However, some research uses a lower threshold, or a flexible one tied to the typical or legal retirement age. Given that the participants in this study were not older adults but UX professionals of varying ages, we considered that imposing a standard age range for seniors could be limiting and insensitive to the localized and personal socio-cultural norms in which each UX professional operates. Therefore, when answering the questions, participants were asked to apply their own definition of 'seniors' and the age range they belong to, as relevant to their culture and experiences.
118
+
119
+ ### 3.3 Recruitment
120
+
121
+ Prospective participants were invited to express interest in the study by filling out a short enrollment form, which served as a screener to ensure data quality and to avoid fraudulent responses [41]. The enrollment form was posted on professional UX design groups on various closed-group (member-only) social media channels and promoted through personal contacts and announcements in e-newsletters and social media groups informally associated with several design communities. Participants were asked to briefly describe their work experience in the enrollment form, and those considered 'legitimate' responders [41] with a background in UX were emailed a link to the questionnaire.
122
+
123
+ Participation was not restricted to a geographical location, given that this was an online survey. As a token of appreciation for their time and contributions, participants were offered a $10 Amazon gift card in a currency of their choice (US dollars or Canadian dollars) once they signed the consent form.
124
+
125
+ ### 3.4 Participants
126
+
127
+ In total, 130 participants completed the survey. Participants had to meet the following eligibility criteria: be at least 18 years of age and be a design professional.
128
+
129
+ We used the term 'design professional' in the survey instead of 'UX professional' to include people who did not have UX-specific job titles but were still involved in user-centered design processes. For the purpose of this study, a 'design professional' is defined as someone who designs or provides consultancy services in the design of user interfaces for websites or apps that are not for their own personal use. This clarifying definition was provided in the consent form to help prospective participants decide whether they identified as design professionals.
130
+
131
+ Since the survey did not ask for personal information, there was no risk to participants self-identifying as design professionals. Whether the participants actually worked as design professionals was not verified, since requiring formal verification of participants' professions was not in line with the ethical guidelines for requesting excessive personal data. This limitation was mitigated by the recruitment strategy, as the study was only advertised on closed professional UX groups.
132
+
133
+ ### 3.5 Analysis
134
+
135
+ Data obtained from the online survey was exported from SurveyGizmo and collated in a spreadsheet. The responses were then reviewed to ensure completion and consistency and to identify duplicates or outliers. Responses to open-ended questions were reviewed for quality by checking if the answers provided were relevant to the questions asked, in order to avoid any fraudulent responses [41].
136
+
137
+ Following quality assurance checks, data was then processed and coded to carry out the analysis. Descriptive statistics were employed to analyze the quantitative data provided by the multiple-choice questions. Comparative statistical analysis was not conducted due to the exploratory nature of the data and research questions.
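+
+ To illustrate this processing step, below is a minimal sketch of how such a frequency table could be computed. The file and column names (`survey_export.csv`, `experience_level`) are hypothetical; the actual SurveyGizmo export schema differs.
+
+ ```python
+ import pandas as pd
+
+ # Load the collated export (hypothetical file and column names).
+ df = pd.read_csv("survey_export.csv")
+
+ # Drop exact duplicate submissions before computing statistics.
+ df = df.drop_duplicates()
+
+ # Frequency table (n and %) for one single-choice question,
+ # with percentages rounded to one decimal, as in Tables 2-6.
+ counts = df["experience_level"].value_counts()
+ summary = pd.DataFrame({"n": counts, "%": (100 * counts / len(df)).round(1)})
+ print(summary)
+ ```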
138
+
139
+ Free text responses to open-ended questions were subjected to a thematic analysis to identify patterns across the data and to complement and contextualize the quantitative findings from the survey. The thematic analysis was done using a data-driven inductive approach by the lead author - an experienced UX designer and techno-social researcher. Responses were systematically coded and reviewed for common, emergent themes following the guidelines by Braun and Clarke [4]. Since the responses were short and fact-oriented, and not the main source of analysis for this predominantly quantitative instrument, a more extensive dual-annotator analysis was not deemed necessary.
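+
+ Once codes were assigned, tallying theme frequencies of the kind reported in Section 4 (e.g. n = 4 for accessibility not being in the project scope) reduces to a simple count. A sketch with hypothetical codes, noting that one response may carry several codes:
+
+ ```python
+ from collections import Counter
+
+ # Hypothetical codes assigned to three free-text answers during coding.
+ coded_responses = [
+     ["not_in_scope", "budget_constraints"],
+     ["not_target_group"],
+     ["not_in_scope"],
+ ]
+
+ # Flatten the per-response code lists and count each theme.
+ theme_counts = Counter(code for codes in coded_responses for code in codes)
+ for theme, n in theme_counts.most_common():
+     print(f"{theme}: n={n}")
+ ```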
140
+
141
+ ## 4 RESULTS
142
+
143
+ This section presents a synthesis of the data collected from the survey. Survey findings are presented in tabular format to ensure accessibility (see supplementary materials for a graphical format). Percentages in the text have been rounded to the nearest whole number to improve readability.
144
+
145
+ ### 4.1 Demographics
146
+
147
+ A summary of the demographic profile of the 130 participants is presented in Table 2. Participants worked in various industries, with the principal organizational areas being information technology (37%), followed by education (18%) and finance (11%). The job titles of the participants included a wide range of UX roles, such as UX designer, product designer, design lead, UX architect, UX researcher, chief design officer, design strategist, UI designer, information architect, and UX consultant. The average length of the participants' work experience as a professional or a consultant in the field of UX was 6.55 years (SD = 6.09). Regarding their personal rating of experience in the field, most of the participants described their level of experience as intermediate (44%) or advanced (32%). In terms of education, most of the participants had some form of post-secondary education with either a bachelor's degree (55%) or a master's degree (29%). A large number of participants (74%) also received professional training or education in the fields of UX and/or HCI.
148
+
149
+ Table 2: Demographic profile of participants (n = 130)
150
+
151
+ <table><tr><td>Variables</td><td>Responses</td><td>n</td><td>%</td></tr><tr><td rowspan="3">Geographic location</td><td>Canada</td><td>55</td><td>42.3</td></tr><tr><td>United States</td><td>41</td><td>31.5</td></tr><tr><td>Other</td><td>34</td><td>26.2</td></tr><tr><td rowspan="8">Industry</td><td>Education / Research</td><td>23</td><td>17.7</td></tr><tr><td>Finance / Banking / Insurance</td><td>14</td><td>10.8</td></tr><tr><td>Government / Military</td><td>4</td><td>3.1</td></tr><tr><td>Healthcare / Medical</td><td>3</td><td>2.3</td></tr><tr><td>Information Technology</td><td>48</td><td>36.9</td></tr><tr><td>Telecommunications</td><td>3</td><td>2.3</td></tr><tr><td>Other</td><td>29</td><td>22.3</td></tr><tr><td>Not Sure</td><td>6</td><td>4.6</td></tr><tr><td rowspan="7">Education</td><td>High school degree or equivalent</td><td>5</td><td>3.8</td></tr><tr><td>Some college, no degree</td><td>11</td><td>8.5</td></tr><tr><td>Associate degree</td><td>3</td><td>2.3</td></tr><tr><td>Bachelor's degree</td><td>72</td><td>55.4</td></tr><tr><td>Master's degree</td><td>37</td><td>28.5</td></tr><tr><td>Professional degree</td><td>1</td><td>0.8</td></tr><tr><td>Doctorate</td><td>1</td><td>0.8</td></tr><tr><td rowspan="4">Professional education in HCI/UX</td><td>Yes</td><td>96</td><td>73.8</td></tr><tr><td>No</td><td>19</td><td>14.6</td></tr><tr><td>Other</td><td>8</td><td>6.2</td></tr><tr><td>Not sure</td><td>7</td><td>5.4</td></tr><tr><td rowspan="4">Level of experience in HCI/UX</td><td>Expert</td><td>19</td><td>14.6</td></tr><tr><td>Advanced</td><td>42</td><td>32.3</td></tr><tr><td>Intermediate</td><td>57</td><td>43.8</td></tr><tr><td>Basic</td><td>12</td><td>9.2</td></tr><tr><td rowspan="6">Web accessibility training/education</td><td>Undergraduate courses</td><td>30</td><td>23.1</td></tr><tr><td>Graduate courses</td><td>10</td><td>7.7</td></tr><tr><td>Online courses</td><td>56</td><td>43.1</td></tr><tr><td>Training in the workplace (current or past)</td><td>54</td><td>41.5</td></tr><tr><td>Other</td><td>14</td><td>10.8</td></tr><tr><td>No training or education in web accessibility</td><td>31</td><td>23.8</td></tr></table>
154
+
155
+ ### 4.2 General Understanding and Awareness
156
+
157
+ #### 4.2.1 Digital accessibility training and education
158
+
159
+ Participants were asked, through a multiple selection question, what kind of professional training or education they had received in web accessibility. Most participants (76%) received some form of web accessibility education, with the most common sources being online courses and workplace training programs, followed by undergraduate and graduate courses. Other sources mentioned by participants include conferences, meetups, webinars, bootcamps, and personal research. It is notable that almost one-fourth of the participants did not receive any professional training or education in web accessibility (Table 2).
160
+
161
+ #### 4.2.2 Understanding senior user needs
162
+
163
+ Concerning understanding of senior user needs, 58% of the participants stated that they knew how seniors use websites and how to design for them. The remaining 42% did not know how to design for seniors, and among them, 15% had no knowledge of how seniors use the web. Since many older adults use assistive technologies to access digital services, participants were also asked to specify all the assistive technologies they were familiar with through a multiple selection question. Almost all participants (96%) were familiar with assistive technologies, with the most popular selections being speech recognition tools, screen magnifiers, and screen readers (Table 3).
164
+
165
+ #### 4.2.3 Web accessibility legislation and guidelines
166
+
167
+ Only 49% of the participants reported that they were familiar with government laws on digital accessibility. Responses to a follow-up question regarding their level of familiarity with these laws are summarized in Table 3: 36% of the participants reported understanding and following digital accessibility laws, while 47% barely knew of, or had never heard of, any accessibility laws.
168
+
169
+ Table 3: General understanding and awareness
170
+
171
+ <table><tr><td>Responses</td><td>n</td><td>%</td></tr><tr><td colspan="3">Level of understanding of how seniors use websites</td></tr><tr><td>I am aware that seniors can use websites, but I don't know how they use them</td><td>19</td><td>14.6</td></tr><tr><td>I know how seniors use websites, but I don't know how to design for them</td><td>36</td><td>27.7</td></tr><tr><td>I know how seniors use websites and how to design for them, but I haven't designed for them</td><td>40</td><td>30.8</td></tr><tr><td>I know how seniors use websites and I have designed for them</td><td>35</td><td>26.9</td></tr><tr><td colspan="3">Familiarity with assistive technologies</td></tr><tr><td>Screen reader</td><td>109</td><td>83.8</td></tr><tr><td>Screen magnifier</td><td>110</td><td>84.6</td></tr><tr><td>Braille-based tools (e.g. printers, embossed printers)</td><td>55</td><td>42.3</td></tr><tr><td>Text-only browser</td><td>74</td><td>56.9</td></tr><tr><td>Alternative keyboard</td><td>53</td><td>40.8</td></tr><tr><td>Alternative mouse and joystick</td><td>44</td><td>33.8</td></tr><tr><td>Speech recognition tools (e.g. Siri)</td><td>110</td><td>84.6</td></tr><tr><td>Other</td><td>4</td><td>3.1</td></tr><tr><td>I am not familiar with any assistive technology</td><td>5</td><td>3.8</td></tr><tr><td colspan="3">Level of familiarity with accessibility laws</td></tr><tr><td>I know the relevant law(s) and its web-related implications, and follow it</td><td>47</td><td>36.2</td></tr><tr><td>I know the relevant law(s) and its web-related implications, but don't follow it</td><td>11</td><td>8.5</td></tr><tr><td>I know the relevant law(s), but not its web-related implications</td><td>11</td><td>8.5</td></tr><tr><td>I have heard about it / I barely know about it</td><td>29</td><td>22.3</td></tr><tr><td>I have never heard about it</td><td>32</td><td>24.6</td></tr><tr><td colspan="3">Familiarity with accessibility guidelines</td></tr><tr><td>Web Content Accessibility Guidelines (WCAG)</td><td>88</td><td>67.7</td></tr><tr><td>Authoring Tool Accessibility Guidelines (ATAG)</td><td>8</td><td>6.2</td></tr><tr><td>User Agent Accessibility Guidelines (UAAG)</td><td>12</td><td>9.2</td></tr><tr><td>I am not familiar with any accessibility guidelines</td><td>39</td><td>30</td></tr></table>
172
+
173
+ Participants were asked, through a multiple selection question, which accessibility guidelines they were familiar with, and many reported being familiar with the WCAG (68%). It is also worth mentioning that 30% of the participants were not familiar with any accessibility guidelines. Regarding knowledge of accessibility checking tools, 64% reported being familiar with these tools, with specific mentions of WAVE, AChecker, Axe, Google Lighthouse, Contrast Analyzer, and Siteimprove, among others.
174
+
175
+ #### 4.2.4 Senior-friendly design guidelines
176
+
177
+ Most participants (83%) were not familiar with the senior-friendly design guidelines published by the NIA and NLM; only 12 participants (9%) reported knowing about these guidelines. When asked about familiarity with other senior-friendly design guidelines, most participants (73%) did not know of any others either.
178
+
179
+ ### 4.3 Practical Experiences
180
+
181
+ #### 4.3.1 Web accessibility and usability for seniors as part of projects
182
+
183
+ Concerning previous experience, 54% of the participants reported having designed accessible interfaces for users with visual impairments, while 41% reported never having created a website or app accessible to users with visual impairments. Likewise, in terms of designing for older adults, 43% of the participants had previously created websites or apps accessible to seniors, while 40% had no previous experience designing for seniors.
184
+
185
+ The majority of participants (81%, n = 105) reported considering accessibility in the design projects they were involved in. A few (6%, n = 8) did not consider accessibility in their projects and were asked to explain their reasons through an open-ended question. All eight participants responded, and the reasons they stated can be classified under the following themes: accessibility not being included in the project scope (n = 4), accessibility not being a requirement for the target group/customer (n = 2), time and budget constraints (n = 1), lack of client support (n = 1), and lack of information and tools for accessibility (n = 1).
186
+
187
+ In terms of senior-friendliness, only 40% (n = 52) of the participants considered usability for seniors in the design projects they were involved in, while 34% (n = 44) stated that they did not consider designing for seniors. The reasons given by 43 of these participants for not considering senior-friendliness in their projects can be classified under the following themes: senior-friendliness not being a requirement for the target group/customer (n = 32), lack of awareness of how seniors use the Internet (n = 4), senior-friendliness not being required by clients/stakeholders (n = 4), senior-friendliness not being a priority (n = 3), no opportunity to interact with seniors (n = 2), lack of knowledge about designing for seniors (n = 1), limited project scope (n = 1), and time and budget constraints (n = 1).
188
+
189
+ #### 4.3.2 Research methods for accessible and senior-friendly designs
190
+
191
+ The 105 participants who reported considering accessibility in their design projects were asked, through a multiple selection question, which research methods they used when designing for users with disabilities. A similar question was asked of the 52 participants who reported considering senior-friendliness in their design projects. According to the responses, the most widely used method, both when designing for users with disabilities and when designing for seniors, was following accessibility guidelines (Table 4).
192
+
193
+ #### 4.3.3 Evaluation techniques for accessible and senior-friendly designs
194
+
195
+ The participants who reported considering accessibility and/or senior-friendliness in their design projects were also asked about the evaluation techniques they used when designing for people with disabilities or seniors. For designs for people with disabilities, the most prominent evaluation technique among the 105 participants was checking for compliance with accessibility guidelines. For designs for seniors, the most widely used technique among the 52 participants was conducting usability tests with seniors (Table 4).
196
+
197
+ Table 4: Use of research methods and evaluation techniques
198
+
199
+ <table><tr><td>Research methods</td><td colspan="2">Designing for users with disabilities (n=105)</td><td colspan="2">Designing for senior users (n=52)</td></tr><tr><td/><td>n</td><td>%</td><td>n</td><td>%</td></tr><tr><td>Follow accessibility guidelines</td><td>80</td><td>76.9</td><td>32</td><td>61.5</td></tr><tr><td>Follow senior-friendly design guidelines</td><td>20</td><td>19.2</td><td>12</td><td>23.1</td></tr><tr><td>Conduct interviews</td><td>34</td><td>32.7</td><td>16</td><td>30.8</td></tr><tr><td>Conduct surveys</td><td>24</td><td>23.1</td><td>12</td><td>23.1</td></tr><tr><td>Generate personas</td><td>32</td><td>30.8</td><td>20</td><td>38.5</td></tr><tr><td>Conduct usability tests</td><td>31</td><td>29.8</td><td>21</td><td>40.4</td></tr><tr><td>Conduct participatory design</td><td>15</td><td>14.4</td><td>7</td><td>13.5</td></tr><tr><td>Conduct heuristic evaluations</td><td>48</td><td>46.2</td><td>20</td><td>38.5</td></tr><tr><td>Other</td><td>8</td><td>7.7</td><td>3</td><td>5.8</td></tr><tr><td>I don't use any research methods</td><td>8</td><td>7.7</td><td>6</td><td>11.5</td></tr><tr><td colspan="5">Evaluation techniques</td></tr><tr><td>Conduct usability tests with users with disabilities</td><td>33</td><td>31.4</td><td>14</td><td>26.9</td></tr><tr><td>Conduct usability tests with seniors</td><td>35</td><td>33.3</td><td>28</td><td>53.8</td></tr><tr><td>Test with automatic accessibility assessment tools</td><td>55</td><td>52.4</td><td>19</td><td>36.5</td></tr><tr><td>Check compliance according to accessibility guidelines</td><td>60</td><td>57.1</td><td>25</td><td>48.1</td></tr><tr><td>HTML validation</td><td>47</td><td>44.8</td><td>16</td><td>30.8</td></tr><tr><td>CSS validation</td><td>40</td><td>38.1</td><td>15</td><td>28.8</td></tr><tr><td>Test with assistive technologies</td><td>32</td><td>30.5</td><td>14</td><td>26.9</td></tr><tr><td>Other</td><td>5</td><td>4.8</td><td>3</td><td>5.8</td></tr><tr><td>I don't evaluate my designs</td><td>7</td><td>6.7</td><td>9</td><td>17.3</td></tr></table>
202
+
203
+ ### 4.4 Motivations
204
+
205
+ #### 4.4.1 Perceptions of usability for seniors in organizations
206
+
207
+ Participants were asked to rate the importance given to accessibility for seniors by their organizations or independent practices. While responses varied, accessibility for seniors was deemed less important by many organizations (31%, n = 40). The distribution of the other responses was as follows: 12% very important, 18% fairly important, 15% important, and 15% not important.
208
+
209
+ #### 4.4.2 Motivations for usability for seniors
210
+
211
+ Participants were asked about their organizational and personal motivations for ensuring usability for seniors through two separate multiple selection questions. The most cited motivational factor for organizations was customer requirements (80%), followed by being inclusive (69%) and abiding by the laws (66%). Concerning personal motivations, participants most often cited being inclusive (82%), being ethical (78%), and developing better products (76%) as their primary motivations for ensuring usability for seniors (Table 5).
212
+
213
+ Table 5: Motivations for ensuring usability for seniors
214
+
215
+ <table><tr><td/><td colspan="2">Organizational</td><td colspan="2">Personal</td></tr><tr><td>Motivations</td><td>n</td><td>%</td><td>n</td><td>%</td></tr><tr><td>Abiding by the laws</td><td>86</td><td>66.2%</td><td>60</td><td>46.2%</td></tr><tr><td>Being ethical</td><td>75</td><td>57.7%</td><td>101</td><td>77.7%</td></tr><tr><td>Being inclusive</td><td>89</td><td>68.5%</td><td>107</td><td>82.3%</td></tr><tr><td>Customer requirements</td><td>103</td><td>79.2%</td><td>75</td><td>57.7%</td></tr><tr><td>Developing better products</td><td>78</td><td>60%</td><td>99</td><td>76.2%</td></tr><tr><td>Finding research opportunities</td><td>39</td><td>30%</td><td>50</td><td>38.5%</td></tr><tr><td>Increasing income</td><td>55</td><td>42.3%</td><td>40</td><td>30.8%</td></tr><tr><td>Organizational requirements</td><td>55</td><td>42.3%</td><td>35</td><td>26.9%</td></tr><tr><td>Search engine optimization</td><td>24</td><td>18.5%</td><td>18</td><td>13.8%</td></tr><tr><td>Other</td><td>4</td><td>3.1%</td><td>2</td><td>1.5%</td></tr><tr><td>Not sure</td><td>1</td><td>0.8%</td><td>1</td><td>0.8%</td></tr></table>
216
+
217
+ ### 4.5 Challenges
218
+
219
+ #### 4.5.1 Challenges of ensuring usability for seniors
220
+
221
+ All participants were asked, through a multiple selection question, what the challenges of making websites or apps senior-friendly were. The challenges most cited by participants were lack of awareness regarding accessibility for seniors (75%), lack of training/knowledge (74%), time constraints (62%), budget restrictions (60%), and accessibility for seniors not being a requirement for the organization (59%) (Table 6).
222
+
223
+ Table 6: Challenges of ensuring usability for seniors
224
+
225
+ <table><tr><td>Challenges</td><td>n</td><td>%</td></tr><tr><td>Lack of awareness regarding accessibility for seniors</td><td>98</td><td>75.4</td></tr><tr><td>Lack of training/knowledge</td><td>96</td><td>73.8</td></tr><tr><td>Time restrictions</td><td>81</td><td>62.3</td></tr><tr><td>Budget restrictions</td><td>78</td><td>60</td></tr><tr><td>Accessibility for seniors is not a requirement for the organization</td><td>77</td><td>59.2</td></tr><tr><td>Lack of senior-friendly design guidelines</td><td>74</td><td>56.9</td></tr><tr><td>Accessibility for seniors is not a requirement for the target group/customers</td><td>72</td><td>55.4</td></tr><tr><td>Lack of support from management</td><td>63</td><td>48.5</td></tr><tr><td>Lack of human resources</td><td>41</td><td>31.5</td></tr><tr><td>No legal repercussions</td><td>41</td><td>31.5</td></tr><tr><td>Accessibility for seniors is not seen as a personal responsibility</td><td>33</td><td>25.4</td></tr><tr><td>Accessibility for seniors is outside the job description</td><td>26</td><td>20</td></tr><tr><td>Other</td><td>4</td><td>3.1</td></tr></table>
226
+
227
+ ## 5 Discussion
228
+
229
+ This section revisits the findings from the survey and discusses key themes regarding challenges that affect the design of senior-friendly interfaces. The research questions asked in this study were exploratory in nature and were aimed at bringing to light the current practices of UX professionals in the context of designing for seniors. Formulating hypotheses was, therefore, not suitable for the type of research questions asked.
230
+
231
+ The key contribution of our study is the quantitative survey data presented in the previous section, which we interpret here in more detail. In addition to this quantitative data, we bring participants' statements into the discussion to reflect on its interpretation. We did not report the qualitative survey data in the Results section since most of our data came from the quantitative questions, with the free-text answers providing only a small addition. These answers were subjected to thematic analysis, and the insights gained provide nuance and context to the main results.
232
+
233
+ ### 5.1 General Understanding and Awareness
234
+
235
+ The level of understanding and awareness among UX professionals about digital accessibility and usability for seniors was examined through the following dimensions: participation in web accessibility training, understanding of how senior users use the web, and familiarity with assistive technologies, digital accessibility legislation, standards, and tools, and senior-friendly design guidelines.
236
+
237
+ Although the survey was focused on senior-friendly design practices, the results suggest some parallels and connections to web accessibility frameworks that are worth discussing. Various training programs on web accessibility are offered in both industry and academia to help design professionals develop a practical understanding of accessibility legislation, standards, and guidelines. It is evident from the responses that most participants (76%) received education on web accessibility, largely through online courses and workplace training programs, while thirty-one participants (24%) did not go through any formal accessibility education. Although it is concerning that one-fourth of the participants did not undergo any accessibility training, these numbers have improved considerably over the years, as evident from previous studies [13],[16], implying that web accessibility training has gained popularity over time and that more professionals are able to access these programs. This distribution of attendance in digital accessibility training is similar to that in other recent studies of UX professionals in Turkey [19] and the Nordic countries [20].
238
+
239
+ Regarding familiarity with accessibility legislation, half the participants were not familiar with any government laws on web accessibility. Even among the 99 participants who went through web accessibility training, only 53 were aware of these policies, which was surprising. In contrast, most participants in Lazar et al. [24] (74%) were familiar with accessibility legislation. This is important to consider since one of the most important factors influencing organizations to prioritize accessibility is governments enforcing legal compliance with accessibility standards [19],[20],[24]. On the other hand, in line with previous research [16],[19],[24], participants were mostly familiar with accessibility guidelines from the Web Accessibility Initiative (WAI), with WCAG (68%) being the most well-known set of guidelines and ATAG and UAAG the least known. Most participants were also aware of automated accessibility tools, similar to Lazar et al. [24]. The level of awareness of accessibility guidelines and tools reported by participants in this study was higher than in Inal et al. [20], which showed that very few UX professionals in the Nordic countries were familiar with web accessibility guidelines and accessibility assessment tools.
240
+
241
+ Although participants were generally familiar with different aspects of accessibility, there was a notable lack of awareness among participants regarding designing for seniors. Concerning understanding of senior user needs, 75 of the 130 participants (58%) stated that they knew how seniors use websites and how to design for them. The remaining 55 participants (42%) did not know how to design for seniors, and among them, 19 had no knowledge of how seniors used the web. A large number of participants were also not familiar with the senior-friendly design guidelines published by the National Institute on Aging (NIA) and the National Library of Medicine (NLM), which are the most cited set of design guidelines accommodating older adults' needs. Most participants were not aware of other senior-friendly guidelines either, which raises questions and contributes to the discussion regarding the transferability of HCI research-based recommendations from academia to practitioners in the technology design industry [34].
242
+
243
+ ### 5.2 Practical Experiences
244
+
245
+ The current practices of UX professionals in the context of designing for accessibility and usability were examined through the following dimensions: consideration of digital accessibility in projects, consideration of usability for seniors in projects, and research methods and evaluation techniques used in both cases.
246
+
247
+ Findings reveal that most participants (81%) reported considering digital accessibility in the design projects they were involved in, which shows a greater rate of adoption compared to previous studies [13],[16],[19],[24]. This could be a result of their increased awareness of web accessibility guidelines and tools. Only eight participants mentioned not considering accessibility in their projects, and the reasons they stated were project scope not including accessibility, target group/customers not requiring accessibility, time and budget constraints, lack of client support, and lack of information and tools available for accessibility. Most of these reasons have also been observed in other studies [19],[20]. However, lack of awareness regarding accessibility was not cited as a reason by participants, unlike in previous research [19], where it played a significant role in the non-consideration of accessibility in projects.
248
+
249
+ In terms of incorporating senior-friendliness, 60% of the participants did not report considering usability for seniors in their projects. The most prominent reason behind this lack of consideration was that seniors were not their target demographic. Other reasons stated by participants included lack of awareness of how seniors use the Internet, senior-friendliness not being required by clients or stakeholders, senior-friendliness not being a priority, lack of interaction opportunities with seniors, lack of knowledge about designing for seniors, limited project scope, and time and budget constraints. Compared with their consideration of digital accessibility, while a few reasons overlap, especially those concerning project characteristics, what stands out are the reasons related to awareness of, and expertise in, designing for seniors, which did not appear to be an issue in the case of accessibility. This is also supported by the earlier findings on general awareness (see 5.1), where participants were more familiar with accessibility than with usability for seniors.
250
+
251
+ On comparing the HCI methods used for designing for people with disabilities with those used for designing for seniors, it was found that participants mostly followed an accessibility guidelines-based approach for both demographics. Among the participants who considered accessibility in their projects, the most common method applied to ensure their design met the requirements of users with disabilities was adhering to accessibility guidelines, followed by conducting heuristic evaluations. It is worth noting that neither of these methodologies involves the target users and both can be conducted without their participation. When designing for seniors, participants again primarily focused on accessibility guidelines, followed by usability tests with seniors, heuristic evaluations, and persona generation based on seniors. In this case, participants involved target users to some extent through usability testing, but still focused mainly on HCI methods that did not require user involvement.
252
+
253
+ Given the high preference for accessibility guidelines, the most common evaluation technique for accessibility among participants was to check for compliance with the said guidelines. Other evaluation techniques used by participants include testing with automated accessibility assessment tools and HTML validation. The same methodologies have also been observed in other studies on UX professionals [19],[20]. Only seven participants (7%) reported not evaluating their designs for accessibility, compared to 48% in older studies [16], which again shows the increased adoption of accessibility practices in the industry.
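+
+ As a concrete illustration of the kind of check automated accessibility tools and HTML validators perform, the sketch below flags `<img>` elements that lack alternative text. It is a simplified stand-in for full WCAG tooling (an empty `alt` can be legitimate for purely decorative images), and the page snippet is hypothetical.
+
+ ```python
+ from bs4 import BeautifulSoup
+
+ def images_missing_alt(html: str) -> list[str]:
+     """Return the src of every <img> without a non-empty alt attribute."""
+     soup = BeautifulSoup(html, "html.parser")
+     return [
+         img.get("src", "<no src>")
+         for img in soup.find_all("img")
+         if not (img.get("alt") or "").strip()
+     ]
+
+ page = '<img src="logo.png"><img src="chart.png" alt="Sales by quarter">'
+ print(images_missing_alt(page))  # ['logo.png']
+ ```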
254
+
255
+ Regarding evaluating designs for senior-friendliness, usability testing was the most common technique used to ensure designs met the needs of senior users, followed by checking for compliance with accessibility guidelines and testing with automated accessibility assessment tools. Usability principles specific to seniors were barely used in the design of user interfaces for older adults, which could be attributed to the earlier finding regarding the lack of familiarity with senior-friendly design guidelines (see 5.1).
256
+
257
+ ### 5.3 Motivations and Challenges
258
+
259
+ UX professionals' motivations for ensuring usability for seniors and the challenges they face in the process were examined through the following dimensions: perceptions of usability for seniors in organizations, motivations for usability for seniors at the organizational level and at an individual level, and challenges of ensuring usability for seniors.
260
+
261
+ Most organizations represented in this study deemed usability for seniors to be 'less important', in contrast to Inal et al.'s [20] findings on organizational perspectives, where digital accessibility was perceived to be an important asset to many organizations. The main drivers for these organizations to ensure usability for seniors were customer requirements, inclusion of all users, and legal repercussions. Participants believed that their organizations would be more interested in ensuring usability for seniors if it were required by their customers. They also thought that their organizations would be motivated to incorporate senior-friendliness if they realized the need to be inclusive of all user groups and if they were obligated by law. These findings are similar to Lazar et al. [24], where government regulations and knowing that people with disabilities were using their websites were the biggest motivators for participants to make their websites accessible, and can be observed in other more recent studies as well [16],[19],[20]. From a personal perspective, inclusivity, ethics, and the desire to develop better products were reported to be the main drivers for taking usability for seniors into account. The concept of ethics was also discussed by Lazar et al. [24], as most participants in their study reportedly considered ethics to be important in the development of accessible websites.
262
+
263
+ Regarding challenges of ensuring usability for seniors, the most important challenges stated by the participants were lack of awareness regarding accessibility for seniors, lack of training or knowledge, time and budget restrictions, and accessibility for seniors not being a requirement for the organizations. Other challenges cited by participants, in descending order of frequency, include lack of support from management, lack of human resources, no legal repercussions, accessibility for seniors not being seen as a personal responsibility, and accessibility for seniors being outside the job description. Some of the key themes that emerged from participants' responses regarding challenges that affect the design of senior-friendly interfaces are discussed below:
264
+
265
+ #### 5.3.1 Seniors are not the target users
266
+
267
+ Generally, the design requirements of products and services are based on the needs and pain points of the target user group. Based on responses from the participants, it is evident that seniors are barely considered part of the main target demographic, even for applications that are generic in nature. One of the main reasons behind this is the common misconception that seniors are not tech-savvy or that they do not use such online services. As a result, designing for them is often overlooked in favor of target user groups that are perceived to be more profitable, thus contributing to "digital ageism". Complementing several market and government census reports, research data from across the globe show that the percentage of older adults who use the Internet is increasing [14],[32],[35],[39]. Because of this perceived lack of senior users, many organizations are losing out on customers by not putting in the effort required to meet the needs of a considerable segment of their audience.
268
+
269
+ #### 5.3.2 Lack of standardized senior-friendly design guidelines
270
+
271
+ Another challenge mentioned by participants was the lack of design guidelines that focused specifically on the needs of senior users. This was expected as very few design professionals were familiar with the guidelines published by NIA and NLM, or other guidelines. Of the 52 participants who reported considering usability for seniors in their projects, only 8 were familiar with these guidelines. This implies that these guidelines are barely used when designing for seniors. It is also evident from responses to other questions in the survey that participants were more familiar with the web accessibility guidelines and preferred using them, as opposed to the senior-friendly design guidelines, when designing for seniors. This lack of familiarity with senior-friendly guidelines can be attributed to the fact that they are not as universal or standardized as the web accessibility guidelines.
272
+
273
+ #### 5.3.3 Lack of support from stakeholders
274
+
275
+ Another common barrier to senior-friendly design cited by participants was the lack of support from stakeholders or clients who commissioned the designers' services. Most clients are neither aware of nor knowledgeable about the need for senior-friendly designs, and as a result, their project briefs rarely include accessibility for seniors as a crucial requirement. To consider accessibility for seniors in projects, UX professionals need additional time and resources, yet budgets for these processes are often too restricted. Unless the client is on board, it is difficult for UX professionals to get the budget or the time to incorporate the needs of senior users, or to convince clients why certain design choices must be made to accommodate those needs. One participant stated:
276
+
277
+ "Once the client realizes this is a target market, there is no longer a question about UX for seniors. It all begins with the client."
278
+
279
+ If usability for seniors is not listed as a client requirement, it comes down to the time and cost budgeted for the project, and then accessibility for seniors is no longer a priority.
280
+
281
+ #### 5.3.4 Aesthetics vs accessibility
282
+
283
+ An important aspect brought up by a few participants was the prioritization of aesthetics over accessibility for seniors. Participants mentioned that stakeholders did not care much about accessibility because elegant design is what attracted new business, as also evidenced in Lazar et al. [24]. As a result, they would rarely budget for accessibility. Many designers took a similar view, assuming that designing for seniors would mean trading off toward a generic, less attractive, and less engaging product. For example, one participant mentioned:
284
+
285
+ "Sometimes we let design overrule contrast warnings and text size warnings since these don't affect the vast majority of our non-senior, non-consumer audience".
286
+
287
+ However, as evident from previous studies [48], when user interfaces are designed to be accessible, they render a positive user experience for both users with and without disabilities.
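+
+ For context, the contrast warnings mentioned above typically come from the WCAG contrast-ratio check, which can be computed directly from two sRGB colours; WCAG 2.x requires at least 4.5:1 for body text. A minimal sketch:
+
+ ```python
+ def relative_luminance(rgb: tuple[int, int, int]) -> float:
+     """Relative luminance of an 8-bit sRGB colour, per WCAG 2.x."""
+     def channel(c: int) -> float:
+         c = c / 255
+         return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
+     r, g, b = (channel(c) for c in rgb)
+     return 0.2126 * r + 0.7152 * g + 0.0722 * b
+
+ def contrast_ratio(fg: tuple, bg: tuple) -> float:
+     """Contrast ratio (lighter + 0.05) / (darker + 0.05), from 1:1 to 21:1."""
+     l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
+     return (l1 + 0.05) / (l2 + 0.05)
+
+ # Light grey text on white: about 3.0:1, failing the 4.5:1 AA threshold.
+ print(round(contrast_ratio((150, 150, 150), (255, 255, 255)), 2))
+ ```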
288
+
289
+ ## 6 KEY INSIGHTS
290
+
291
+ This study highlighted several key issues that UX professionals face with respect to making their products more usable and accessible to seniors; a summary of these issues is included below. Uncovering these, in our view, is an essential step toward addressing the lack of senior-centered focus within UX practice. Some of these insights are similar to those exposed by Lazar et al. [24] with respect to accessibility, which suggests that (a) designing for seniors has yet to "catch up" to the gains made with respect to designing for accessibility, and (b) the issues uncovered here are not intractable, as Lazar et al.'s work [24] acted as the spark for numerous changes in accessible design. Further research is needed to determine the appropriate course of action to address the issues and gaps that our study exposed (this is outside the scope of this paper and would be too speculative to include here). Meanwhile, we invite the broader research and design practice community to use these insights as starting points in reflecting on approaches to address the many issues identified by our survey.
292
+
293
+ 1. While UX professionals are generally aware of web accessibility guidelines, tools, and assistive technologies, their level of awareness regarding how to design for seniors and the availability of senior-friendly design principles is notably low.
294
+
295
+ 2. Very few UX professionals consider usability for seniors in the design projects they are involved in, primarily due to senior-friendliness not being a requirement of the target user group and lack of knowledge regarding designing for seniors.
296
+
297
+ 3. The main methodologies used by UX professionals when designing for senior users are to follow accessibility guidelines and to conduct usability tests with older adults.
298
+
299
+ 4. The familiarity with, and the use of senior-focused usability principles among UX professionals is minimal despite the availability of a wide variety of research-based recommendations.
300
+
301
+ 5. Organizations are motivated to ensure usability for seniors in their products when their customers require it, when they want to be inclusive to all user groups, and when it is required by law.
302
+
303
+ 6. At a personal level, UX professionals are motivated to design for seniors due to inclusiveness, ethics, and the desire to develop better products.
304
+
305
+ 7. Older adults are generally not considered to be the target demographic by most organizations, which leads to stakeholders not budgeting for the time and resources required to ensure usability for seniors.
306
+
307
+ 8. Higher emphasis is placed on visual design and aesthetics compared to accessibility features and usability needs for seniors.
308
+
309
+ ## 7 LIMITATIONS AND FUTURE WORK
310
+
311
+ While our study draws methodologically from prior research, including similar sample sizes and the use of validated instruments, there are inherent limitations to our findings. Primarily, these limitations stem from the exclusive use of Internet-based surveys - the only research method available to us during significant periods of pandemic-related lockdowns and restrictions on research activities. In coordination with our university's ethics and research office, we implemented various mechanisms to ensure that survey responses were completed in good faith; however, these mechanisms cannot verify the specific accuracy of responses (e.g. time spent in industry, or number of projects worked on).
312
+
313
+ There are additional limitations inherent to surveys as a data collection method, such as the inability to answer "why" questions or to gain a deeper understanding of the challenges respondents face in their design practice. We plan to conduct follow-up in-person contextual inquiry sessions with some of our survey's respondents (most provided us with their contact information for follow-up), which will be situated in the context of their work or practice.
314
+
315
+ ## 8 CONCLUSION
316
+
317
+ This research investigated the perspectives and practices of design professionals in the context of designing for seniors. The study was conducted using an online survey, and 130 design professionals from various industries participated. The results show that most UX professionals are familiar with web accessibility guidelines and assistive technologies. However, there is a considerable lack of awareness regarding how to design for seniors, and a large number of design professionals are not familiar with any senior-friendly design guidelines. Results also suggest that only a few UX professionals consider usability for seniors in the design projects they are involved in. The primary reasons cited for this are senior-friendliness not being a requirement for the target group/customer, lack of awareness of how seniors use the Internet, senior-friendliness not being required by clients/stakeholders, and senior-friendliness not being a priority.
318
+
319
+ This study opens the door for future investigations that may explore and validate approaches to improving UX professionals' awareness of designing for seniors. A follow-up study will focus on larger-scale surveys that refine the understanding gained in this research and allow for more complex factor analysis. Further research will also include in-person guided interviews with participants. The primary goal of this study was to bring to light the lack of awareness and understanding that UX professionals have in terms of designing for seniors, and to identify some of the specific causes of this issue. The knowledge obtained about these causes is a first and very important step toward addressing the overarching lack of consideration of seniors in the design of user interfaces. Similar to Lazar et al. [24], this study lays the groundwork for other researchers to propose ways to address this issue and improve the state of usability for seniors in UX practice. Overall, it is a valuable account of the current state of awareness and activity in the field of technology design with regard to usability for older adults, and a reminder that there is much work to be done to promote the how and why of designing for an older audience.
320
+
321
+ ## REFERENCES
322
+
323
+ [1] Benett Axtell and Cosmin Munteanu. 2019. Back to real pictures: A cross-generational understanding of users' mental models of photo cloud storage. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 3, 3 (2019), 1-24. https://doi.org/10.1145/3351232
324
+
325
+ [2] Shirley Ann Becker. 2004. A study of web usability for older adults seeking online health resources. ACM Transactions on Computer-Human Interaction (TOCHI) 11, 4 (2004), 387-406. https://doi.org/10.1145/1035575.1035578
326
+
327
+ [3] Nigel Bevan, James Carter, and Susan Harker. 2015. ISO 9241-11 revised: What have we learnt about usability since 1998? In International Conference on Human-Computer Interaction, 143-151. https://doi.org/10.1007/978-3-319-20901-2_13
328
+
329
+ [4] Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2 (2006), 77-101. https://doi.org/10.1191/1478088706qp063oa
330
+
331
+ [5] Deborah H. Charbonneau. 2014. Public library websites and adherence to senior-friendly guidelines. Public Library Quarterly 33, 2 (2014), 121-130. https://doi.org/10.1080/01616846.2014.910722
332
+
333
+ [6] Dana Chisnell and Janice Redish. 2004. Designing web sites for older adults: A review of recent research. Retrieved from https://assets.aarp.org/www.aarp.org_/articles/research/oww/AARP-LitReview2004.pdf
334
+
335
+ [7] Dana Chisnell and Janice Redish. 2005. Designing web sites for older adults: Expert review of usability for older adults at 50 web sites. Retrieved from https://assets.aarp.org/www.aarp.org_/articles/research/oww/AARP-50Sites.pdf
338
+
339
+ [8] Shelia R. Cotten, William A. Anderson, and Brandi M. McCullough. 2013. Impact of internet use on loneliness and contact with others among older adults: Cross-sectional analysis. Journal of Medical Internet Research 15, 2 (2013). https://doi.org/10.2196/jmir.2306
340
+
341
+ [9] Sara J. Czaja. 2017. The potential role of technology in supporting older adults. Public Policy & Aging Report 27, 2 (2017), 44-48. https://doi.org/10.1093/ppar/prx006
342
+
343
+ [10] José-Manuel Díaz-Bossini and Lourdes Moreno. 2014. Accessibility to mobile interfaces for older people. Procedia Computer Science 27 (2014), 57-66. https://doi.org/10.1016/j.procs.2014.02.008
346
+
347
+ [11] Elizabeth Ellcessor. 2014. (ALT="Textbooks"): Web accessibility myths as negotiated industrial lore. Critical Studies in Media Communication 31, 5 (2014), 448-463. https://doi.org/10.1080/15295036.2014.919660
348
+
349
+ [12] R. Darin Ellis and Sri H. Kurniawan. 2000. Increasing the usability of online information for older users: A case study in participatory design. International Journal of Human-Computer Interaction 12, 2 (2000), 263-276. https://doi.org/10.1207/S15327590IJHC1202_6
350
+
351
+ [13] ENABLED. 2005. Analysis of the ENABLED web developer survey. Retrieved from http://www.enabledweb.org/public_results/survey_results/analysis.html
354
+
355
+ [14] Eurostat. 2020. Ageing Europe - Looking at the lives of older people in the EU. Retrieved from https://ec.europa.eu/eurostat/documents/3217494/11478057/KS-02-20-655-EN-N.pdf
356
+
357
+ [15] Kate Finn and Jeff Johnson. 2013. A usability study of websites for older travelers. In International Conference on Universal Access in Human-Computer Interaction, (2013), 59-67. https://doi.org/10.1007/978-3-642-39191-0_7
358
+
359
+ [16] Andre P. Freire, Cibele M. Russo, and Renata P. M. Fortes. 2008. A survey on the accessibility awareness of people involved in web development projects in Brazil. In Proceedings of the 2008 International Cross-Disciplinary Conference on Web Accessibility (W4A '08), 87-96. https://doi.org/10.1145/1368044.1368064
362
+
363
+ [17] T. A. Hart, B. S. Chaparro, and C. G. Halcomb. 2008. Evaluating websites for older adults: Adherence to 'senior-friendly' guidelines and end-user performance. Behaviour & Information Technology 27, 3 (2008), 191-199. https://doi.org/10.1080/01449290600802031
364
+
365
+ [18] Jinmoo Heo, Sanghee Chun, Sunwoo Lee, Kyung Hee Lee, and Junhyoung Kim. 2015. Internet use and well-being in older adults. Cyberpsychology, Behavior, and Social Networking 18, 5 (2015), 268-272. https://doi.org/10.1089/cyber.2014.0549
366
+
367
+ [19] Yavuz Inal, Kerem Rızvanoğlu, and Yeliz Yesilada. 2019. Web accessibility in Turkey: Awareness, understanding and practices of user experience professionals. Universal Access in the Information Society 18 (2019), 387-398. https://doi.org/10.1007/s10209-017-0603-3
368
+
369
+ [20] Yavuz Inal, Frode Guribye, Dorina Rajanen, Mikko Rajanen, and Mattias Rost. 2020. Perspectives and practices of digital accessibility: A survey of user experience professionals in Nordic countries. In Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society, (2020), 1-11. https://doi.org/10.1145/3419249.3420119
370
+
371
+ [21] Bran Knowles, Vicki L. Hanson, Yvonne Rogers, Anne Marie Piper, Jenny Waycott, Nigel Davies, Aloha Ambe, Robin N. Brewer, Debaleena Chattopadhyay, Marianne Dee, David Frohlich, Marisela Gutierrez-Lopez, Ben Jelen, Amanda Lazar, Radoslaw Nielek, Belén Barros Pena, Abi Roper, Mark Schlager, Britta Schulte, and Irene Ye Yuan. 2020. The harm in conflating aging with accessibility. Communications of the ACM. Retrieved from https://eprints.lancs.ac.uk/id/eprint/148395/1/CHI_workshop_writeup_Final.pdf
372
+
373
+ [22] Panayiotis Koutsabasis, Evangelos Vlachogiannis, and Jenny S. Darzentas. 2010. Beyond specifications: Towards a practical methodology for evaluating web accessibility. Journal of Usability Studies 5, 4 (2010), 157-171. https://dl.acm.org/doi/10.5555/2019116.2019120
374
+
375
+ [23] Sri Kurniawan and Panayiotis Zaphiris. 2005. Research-derived web design guidelines for older people. In Proceedings of the 7th international ACM SIGACCESS conference on Computers and accessibility (Assets '05), 129-135. https://doi.org/10.1145/1090785.1090810
376
+
377
+ [24] Jonathan Lazar, Alfreda Dudley-Sponaugle, and Kisha-Dawn Greenidge. 2004. Improving web accessibility: A study of webmaster perceptions. Computers in Human Behavior 20, 2 (2004), 269-288. https://doi.org/10.1016/j.chb.2003.10.018
380
+
381
+ [25] Chaiwoo Lee and Joseph F. Coughlin. 2015. PERSPECTIVE: Older adults' adoption of technology: An integrated approach to identifying determinants and barriers. Journal of Product Innovation Management 32, 5 (2015), 747-759. https://doi.org/10.1111/jpim.12176
382
+
383
+ [26] Kyle R. Lynch, Diana J. Schwerha, and George A. Johanson. 2013. Development of a weighted heuristic for website evaluation for older adults. International Journal of Human-Computer Interaction 29, 6 (2013), 404-418. https://doi.org/10.1080/10447318.2012.715277
384
+
385
+ [27] S. Milne, A. Dickinson, A. Carmichael, D. Sloan, R. Eisma, and P. Gregor. 2005. Are guidelines enough? An introduction to designing web sites accessible to older people. IBM Systems Journal 44, 3 (2005), 557-571. https://doi.org/10.1147/sj.443.0557
386
+
387
+ [28] Karyn Moffatt. 2013. Older-adult HCI: Why should we care? Interactions 20, 4 (2013), 72-75. https://doi.org/10.1145/2486227.2486242
388
+
389
+ [29] Stephanie A. Morey, Rachel E. Stuck, Amy W. Chong, Laura H. Barg-Walkow, Tracy L. Mitzner, and Wendy A. Rogers. 2019. Mobile health apps: Improving usability for older adult users. Ergonomics in Design 27, 4 (2019), 4-13. https://doi.org/10.1177/1064804619840731
390
+
391
+ [30] Eun-Shim Nahm, Jennifer Preece, Barbara Resnick, and Mary Etta Mills. 2004. Usability of health web sites for older adults: A preliminary study. CIN: Computers, Informatics, Nursing 22, 6 (2004), 326-334. https://doi.org/10.1097/00024665-200411000-00007
392
+
393
+ [31] National Library of Medicine. Making your website senior-friendly. Retrieved from https://nnlm.gov/mar/guides/making-your-website-senior-friendly
394
+
395
+ [32] National Seniors Australia. 2019. Senior surfers: Diverse levels of digital literacy among older Australians. Retrieved from https://nationalseniors.com.au/uploads/NationalSeniorsAustralia-SeniorSurfer-ResearchReport-2019.pdf
396
+
397
+ [33] Nielsen Norman Group. 2019. UX design for seniors (ages 65 and older). Retrieved from https://www.nngroup.com/reports/senior-citizens-on-the-web/
398
+
399
+ [34] Abiodun Afolayan Ogunyemi, David Lamas, Marta Kristin Lárusdóttir, and Fernando Loizides. 2019. A systematic mapping study of HCI practice research. International Journal of Human-Computer Interaction 35, 16 (2019), 1461-1486. https://doi.org/10.1080/10447318.2018.1541544
400
+
401
+ [35] Pew Research Center. 2021. Internet/broadband fact sheet. Retrieved from https://www.pewresearch.org/internet/fact-sheet/internet-broadband/
402
+
403
+ [36] Valeria Righi, Sergio Sayago, and Josep Blat. 2017. When we talk about older people in HCI, who are we talking about? Towards a 'turn to community' in the design of technologies for a growing ageing population. International Journal of Human-Computer Studies 108 (2017), 15-31. https://doi.org/10.1016/j.ijhcs.2017.06.005
404
+
405
+ [37] Sven Schmutz, Andreas Sonderegger, and Juergen Sauer. 2018. Effects of accessible website design on nondisabled users: Age and device as moderating factors. Ergonomics 61, 5 (2018), 697-709. https://doi.org/10.1080/00140139.2017.1405080
406
+
407
+ [38] D. Sloan. 2006. Two cultures? The disconnect between the web standards movement and research-based web design guidelines for older people. Gerontechnology 5, 2 (2006), 106-112. https://doi.org/10.4017/gt.2006.05.02.007.00
408
+
409
+ [39] Statistics Canada. 2019. Evolving internet use among Canadian seniors. Retrieved from https://www150.statcan.gc.ca/n1/pub/11f0019m/11f0019m2019015-eng.pdf
412
+
413
+ [40] Jessica Taha, Sara J. Czaja, Joseph Sharit, and Daniel G. Morrow. 2013. Factors affecting usage of a personal health record (PHR) to manage health. Psychology and Aging 28, 4 (2013), 1124-1139. https://doi.org/10.1037/a0033911
414
+
415
+ [41] Jennifer E. F. Teitcher, Walter O. Bockting, José A. Bauermeister, Chris J. Hoefer, Michael H. Miner, and Robert L. Klitzman. 2015. Detecting, preventing, and responding to "fraudsters" in internet research: Ethics and tradeoffs. The Journal of Law, Medicine & Ethics 43, 1 (2015), 116-133. https://doi.org/10.1111/jlme.12200
416
+
417
+ [42] United Nations. 2020. World population ageing 2020 highlights. Retrieved from https://www.un.org/development/desa/pd/sites/www.un.org.development.desa.pd/files/undesa_pd-2020_world_population_ageing_highlights.pdf
420
+
421
+ [43] Shengzhi Wang, Khalisa Bolling, Wenlin Mao, Jennifer Reichstadt, Dilip Jeste, Ho-Cheol Kim, and Camille Nebeker. 2019. Technology to support aging in place: Older adults' perspectives. Healthcare 7, 2 (2019), 60. https://doi.org/10.3390/healthcare7020060
422
+
423
+ [44] World Wide Web Consortium. 2005. Introduction to web accessibility. Retrieved from https://www.w3.org/WAI/fundamentals/accessibility-intro/
424
+
425
+ [45] World Wide Web Consortium. 2005. Web content accessibility guidelines (WCAG) overview. Retrieved from https://www.w3.org/WAI/standards-guidelines/wcag/
426
+
427
+ [46] World Wide Web Consortium. 2006. Web accessibility evaluation tools list. Retrieved from https://www.w3.org/WAI/ER/tools/
428
+
429
+ [47] World Wide Web Consortium. 2010. Developing websites for older people: How web content accessibility guidelines (WCAG) 2.0 applies. Retrieved from https://www.w3.org/WAI/older-users/developing/
430
+
431
+ [48] Yeliz Yesilada, Giorgio Brajnik, Markel Vigo, and Simon Harper. 2015. Exploring perceptions of web accessibility: A survey approach. Behaviour & Information Technology 34, 2 (2015), 119-134. https://doi.org/10.1080/0144929X.2013.848238
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/sJPz-4Rwghv/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,675 @@
1
+ § "NOT OUR TARGET USERS": UX PROFESSIONALS' PERCEPTIONS OF DESIGNING FOR OLDER ADULTS
2
+
3
+ § ABSTRACT
4
+
5
+ In this paper, we revisit Jonathan Lazar's early work [24] on understanding designers' perceptions of accessibility for people with disabilities, and follow the same approach to instead contribute similar insights into the current state of designing websites and web applications for seniors. For this, we present a survey investigating how design professionals consider digital accessibility and usability for the ageing population in the UX practice. The survey probed awareness and application of usability principles for older adults, as well as the challenges that hinder the design of senior-friendly products. Findings reveal that many respondents did not incorporate senior-focused usability practices in their work, nor were they familiar with design principles specific to older users. Lack of awareness and knowledge regarding the accessibility and usability needs of older adults was stated to be the main barrier to senior-friendly design. The study identifies several other challenges facing UX professionals when designing for seniors and provides directions for future research.
6
+
7
+ Keywords: Older adults, Inclusive design, UX professionals, User interface design, Senior-friendly design guidelines
8
+
9
+ § 1 INTRODUCTION
10
+
11
+ In recent years, with the push towards a more inclusive society, there has been an increasing demand for the consideration of diverse user profiles in the design of digital products and services. One such user profile is that of 'older adults,' an important and expanding group of internet users who are typically underrepresented in technology design. According to the United Nations [42], the global population aged 65 years or over is growing faster than all other age groups. With the unparalleled growth of the ageing population, the number of older adults using online technologies also continues to increase. Internet use doubled from 35% to 75% among seniors in the United States between 2007 and 2021 [35], and similar trends are occurring all around the developed world [14],[32],[39].
12
+
13
+ Despite their increasing technology adoption, older adults still struggle to use many online services due to various factors associated with ageing. As people age, they experience limitations in their functional abilities and are gradually afflicted with difficulties in vision, hearing, cognition, and mobility [10],[26]. Generally, the user interfaces of many online products do not take these changing abilities into account, nor do they address the specific design needs of older adults [2],[5],[15],[29]. This compromises their usability and causes increased frustration for senior users, resulting in a lack of self-confidence and motivation to continue using the technology [43]. Older adults' mental models of user interfaces also often differ from those of the (younger) designers who create these products and services [1]. This puts seniors at a major disadvantage in the digital age as they are unable to access the same services as their younger counterparts. Previous studies confirm that a more accessible web can be instrumental in enabling older adults to maintain an active, community-based lifestyle [8],[9],[18]. Therefore, to enable them to age independently and to provide them with equal access to information, technology designers should take into account the needs of the ageing population and ensure the products they design are "senior-friendly," i.e. easy for seniors to use without any additional help.
14
+
15
+ Usability for older adults has been a growing topic of interest and relevance for the human-computer interaction (HCI) community over the years. Studies have provided literature reviews [6], conducted expert reviews of websites [7], and discussed methodologies of user-centered design through participatory design [12] or usability testing [30] with seniors. Several design guidelines [23] and heuristics [26] have also been published to assist in improving the usability of interfaces for older adults. However, there remains a lack of research on how user experience (UX) professionals in the industry approach this topic. With the large number of online services that lack usability for older adults [2],[5],[15],[29], it is important to assess where the design community currently stands in terms of senior-focused design practices. This also raises the need to identify any barriers UX professionals might be facing that are inhibiting the design of user-friendly interfaces for older adults. Prior research has extensively investigated such barriers to (and attitudes toward) the widespread use of accessibility design guidelines, including seminal work on which we methodologically ground our own [24]. However, accessibility guidelines may not provide comprehensive support when designing for older adults [27],[38]. In fact, there are also strong arguments against equating ageing with accessibility (in design and elsewhere) [21],[28],[36].
16
+
17
+ Therefore, to fill this gap, we conducted an online survey with the participation of 130 professionals working in UX design from various industries. The aim of this study was to:
18
+
19
+ 1. investigate the level of understanding and awareness UX professionals have about accessibility and usability for seniors,
20
+
21
+ 2. understand how UX professionals incorporate accessibility and usability for seniors in their design projects, and
22
+
23
+ 3. uncover the motivations for, barriers to, and challenges UX professionals face when ensuring senior-focused design usability.
24
+
25
+ This paper presents the findings of the study in detail and provides directions for future research. As a note on terminology, the terms 'older adults' and 'seniors' have been interchangeably used in this paper. While the term 'older adult' is more commonly used in HCI literature, the more prevalent term in our own sociocultural context is that of 'senior' (as indicated by government surveys in our geographical location, in a large urban center in Canada).
26
+
27
+ This study makes a major contribution to research on UX professionals by providing insight into their current state of awareness and application of design methodologies for older adults. This is noteworthy because, while accessibility practices for people with disabilities have been widely studied, there has been limited focus on professionals' expertise and experience with designing specifically for seniors. Through this research, we surface evidence about several reasons why senior-friendliness is not a focus for UX professionals, while fostering reflection on the transfer of research-based recommendations to the professional environment. The results provide insights into the current resources and attitudes designers have with regard to designing for seniors, in comparison to previous studies which have focused on different, yet related, domains (e.g. accessibility).
28
+
29
+ While we were inspired by prior work on understanding the challenges designers face when designing for accessibility, our work is not about accessibility. Instead, our study only draws methodologically from Jonathan Lazar's prior work on perceptions of accessibility [24]. We adapt Lazar's approach and extend the scope of their methods to study the challenges designers face with respect to usability for seniors. Our findings reveal that the kind of awareness (and work) that accessibility needed at the time of Lazar et al.'s seminal paper is now needed in the field of senior-friendly design, and through this, we hope that the results of our study will inspire a culture and policy shift toward including older adults in design, as Lazar's paper did for accessible design.
30
+
31
+ § 2 BACKGROUND AND RELATED WORK
32
+
33
+ This section provides the theoretical background of the study and situates our work in literature. We begin with an overview of how digital accessibility and usability are relevant to designing for seniors, followed by a description of various design principles and usability methods available to assist UX professionals in the creation of senior-friendly products. We conclude this section with a discussion of how lack of specific support resources within industry may result in designers not being aware or knowledgeable of ways to make their products inclusive - this was revealed by Lazar [24] in their seminal work related to designing for accessibility, which we now aim to replicate with respect to designing for older adults.
34
+
35
+ § 2.1 DIGITAL ACCESSIBILITY AND SENIORS
36
+
37
+ 'Digital accessibility' and 'usability' are two different concepts that are closely related in the context of crafting technologies that work for everyone. Digital accessibility primarily focuses on people with disabilities and ensures that technologies are designed and developed in a way that everyone can use them, regardless of disability type or severity of impairment [44]. This includes auditory, cognitive, neurological, physical, speech, and visual impairments that may affect people's access to, or interaction with online products and services. While digital accessibility predominantly serves people with disabilities, it also benefits people without disabilities, like older adults who face gradual limitation of functional abilities due to ageing [47]. This is because the needs of older adults with changing abilities can be considered to overlap with the accessibility needs of people with disabilities to some extent. For example, one of the accessibility principles focuses on allowing users to incrementally change the size of the text in user interfaces. Although this principle is targeted at people with disabilities, senior users requiring larger text in interfaces due to declining vision can also gain from its implementation. Older adults can therefore be assumed to be beneficiaries of accessible design, which makes it an important consideration for UX professionals when designing digital products.
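+
+ As a concrete illustration of the text-resizing principle mentioned above, the following minimal TypeScript sketch (our own illustrative example, not drawn from the guidelines themselves) shows one way a page can let users incrementally enlarge text. It assumes a browser context and hypothetical "text-bigger"/"text-smaller" buttons, and scales the root font size so that rem-based layouts grow with the text.
+
+ // Minimal sketch of user-controlled text scaling (assumptions: a browser
+ // DOM and hypothetical "text-bigger"/"text-smaller" buttons in the page).
+ const STEP = 0.125;          // scale change per click (12.5%)
+ const MIN = 1.0, MAX = 2.0;  // WCAG 1.4.4 asks that text scale up to 200%
+ let scale = 1.0;
+
+ function applyScale(): void {
+   // Scaling the root font size makes all rem-based sizes grow together.
+   document.documentElement.style.fontSize = `${scale * 100}%`;
+ }
+
+ document.getElementById("text-bigger")?.addEventListener("click", () => {
+   scale = Math.min(MAX, scale + STEP);
+   applyScale();
+ });
+ document.getElementById("text-smaller")?.addEventListener("click", () => {
+   scale = Math.max(MIN, scale - STEP);
+   applyScale();
+ });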
38
+
39
+ In many countries, accessibility of digital designs is now legislated, and it follows from widely-used industry standards such as the Web Content Accessibility Guidelines (WCAG), published by the World Wide Web Consortium (W3C) Web Accessibility Initiative (WAI) [45]. These guidelines have become the benchmark for creating and evaluating accessible interfaces, and have been set as the minimum requirement in the digital accessibility policy of many countries worldwide [37]. While WCAG has been primarily developed for websites, the success criteria for these guidelines are not technology-specific and, therefore, they apply to all kinds of user interfaces. It is important to note that these guidelines are highly technical and require expert knowledge of web technologies for their comprehension and application [22]. However, there are a variety of software tools available that complement these guidelines and can help professionals determine if their design meets the accessibility standards [46].
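+
+ As an illustration of how such evaluation tools are typically driven, the minimal TypeScript sketch below uses the open-source axe-core engine as one example of this tool family; this is our own hedged sketch, assuming axe-core is installed (npm install axe-core) and the code runs in a browser context.
+
+ // Minimal sketch of an automated accessibility audit with axe-core
+ // (assumes "npm install axe-core" and a browser environment).
+ import axe from "axe-core";
+
+ async function auditPage(): Promise<void> {
+   // axe.run() checks the live DOM against WCAG-derived rules and
+   // resolves with (among other things) a list of rule violations.
+   const results = await axe.run(document);
+   for (const v of results.violations) {
+     console.log(`${v.id} (${v.impact ?? "n/a"}): ${v.description}`);
+     console.log(`  affected nodes: ${v.nodes.length}`);
+   }
+ }
+
+ auditPage().catch(console.error);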
40
+
41
+ Usability, on the other hand, refers to the general intuitiveness and ease of use of user interfaces. Usability for seniors ensures that digital products can be used by older adults to achieve their goals in an effective, efficient, and satisfactory manner, and the level of usability is determined by how well the features of the user interface accommodate senior users' needs and contexts [3]. While various projects (such as the Web Accessibility Initiative: Ageing Education and Harmonization (WAI-AGE) [47]) have suggested that designing for accessibility removes some barriers for older adults as well, this often only covers the most basic aspects of how older adults engage with digital designs, leading them to be unsure of their ability and unmotivated to continue trying new technologies [43]. Additionally, significant research within HCI highlights the dangers of conflating ageing with accessibility [21] [28] - which we took significant care to avoid in our own research, especially as we are drawing methodologically from prior research on accessibility.
42
+
43
+ § 2.2 SENIOR-FRIENDLY DESIGN GUIDELINES
44
+
45
+ Previous studies show that usability is one of the most important factors affecting older adults' adoption of technology [25]. For example, in a study examining the usage of electronic personal health records [40], it was found that while seniors considered these systems to be valuable, the prevalence of usability problems, such as complex navigation systems and highly technical language, made them challenging for older adults to use. Lack of usability can also result in increased frustration for older adults. Therefore, in order to avoid usability challenges for older adults, it is important for UX professionals to adhere to a user-centered design approach and consider the needs and pain points of seniors in the design and evaluation of technologies.
46
+
47
+ Design principles explicitly targeted at the needs of older adults have been established to ensure the usability of interfaces for seniors. Based on various research conducted on ageing, the National Institute on Aging (NIA) and the National Library of Medicine (NLM) in the United States published "Making Your Web Site Senior Friendly: A Checklist," consisting of design guidelines that are very specific to older adults [31]. Examples of guidelines from the checklist include providing clear instructions, avoiding jargon, making it easy for users to enlarge text, reducing scrolling, and using high-contrast color combinations. Similarly, Kurniawan and Zaphiris [23] presented a set of "research-derived ageing-centered web design guidelines" for older adults in 2005, which include providing larger targets, having clear navigation, using color and graphics minimally, and reducing demand on the users' memory. In 2013, Lynch, Schwerha, and Johanson [26] developed a weighted heuristic for evaluating the usability of user interfaces for older adults, which included a list of 32 characteristics representing the most important senior-friendly design recommendations. The Nielsen Norman Group also released their third edition of "UX Design for Seniors (Ages 65 and older)" in 2019, which is a commercially available report outlining design guidelines for particular tasks and web components to support usability for seniors [33]. Although these guidelines have been widely used and referenced in academia, there is limited research on how many of these recommendations are transferred to the professional environment and how UX professionals incorporate them into their design practice.
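+
+ One checklist item above, using high-contrast color combinations, can be made precise with the contrast-ratio formula from the WCAG 2.x definitions. The following TypeScript sketch (our own illustration, not part of any cited guideline document) computes the relative luminance of two sRGB colors and their contrast ratio; 4.5:1 is WCAG's minimum for normal body text.
+
+ // Minimal sketch of a WCAG 2.x contrast-ratio check between two sRGB colors.
+ function relativeLuminance(hex: string): number {
+   // Linearize each 8-bit sRGB channel, then weight per the WCAG formula.
+   const channels = [0, 2, 4].map((i) => {
+     const c = parseInt(hex.replace("#", "").substring(i, i + 2), 16) / 255;
+     return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
+   });
+   return 0.2126 * channels[0] + 0.7152 * channels[1] + 0.0722 * channels[2];
+ }
+
+ function contrastRatio(fg: string, bg: string): number {
+   const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
+   return (hi + 0.05) / (lo + 0.05);
+ }
+
+ // Example: dark gray text on white passes the 4.5:1 minimum for body text.
+ console.log(contrastRatio("#333333", "#ffffff").toFixed(2)); // ~12.63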
48
+
49
+ § 2.3 INVOLVING SENIORS IN THE DESIGN PROCESS
50
+
51
+ While following both accessibility guidelines and usability principles is important, doing so is not sufficient to guide designers toward senior-friendly design. To test the effectiveness of these guidelines and to ensure all needs and pain points of older adults are taken into consideration, senior users should be directly involved in the design process through various usability methods. Yesilada et al. [48] found that designers believe accessibility evaluation should be grounded on user-centered design practices, as opposed to just inspecting source code, in order to obtain more reliable and valid results. This sentiment was also shared by Hart, Chaparro, and Halcomb [17], who suggested using a combination of design guidelines and usability testing when designing websites for older adults, as well as Milne et al. [27], who recommended designers go beyond WCAG and get firsthand interaction with users to ensure their needs are met.
52
+
53
+ § 2.4 UNDERSTANDING DESIGNERS' ATTITUDES AND BARRIERS TOWARD INCLUSIVE DESIGN
54
+
55
+ Design professionals from various interdisciplinary backgrounds participate in the design and development of online products and services, and therefore, their perceptions and practices of accessibility have been an important topic investigated by several research projects. Five relevant surveys conducted with these professionals are summarized below:
56
+
57
+ Lazar, Dudley-Sponaugle, and Greenidge (2004) surveyed 175 webmasters of government and commercial organizations to investigate their knowledge of web accessibility [24]. Most of the participants (74%) reported that they were familiar with government laws on web accessibility, and many (79%) were familiar with automated software tools used for accessibility evaluation. These results indicate that a lack of knowledge or awareness is not the prime reason behind the shortage of accessible interfaces. However, it is also notable that almost one-fourth of the respondents (23%) did not know about web accessibility guidelines at all. Participants cited lack of time, training, managerial and client support, as well as lack of software tools, and confusing accessibility guidelines as the main barriers to web accessibility. They also mentioned concerns regarding maintaining a balance between accessibility and good graphic design, which appears to stem from the misconception that an accessible website may downgrade the experience for visual users [11]. Concerning motivation, participants indicated that the primary reasons for making their websites accessible would be requirements imposed by the government, use of the websites by people with disabilities, external funding, requirements from management or clients, training on accessibility, and access to better accessibility tools.
58
+
59
+ A similar survey was conducted by ENABLED Group (2005) with 269 subjects, which included webmasters, managers, and content editors [13]. Only 36% of the participants responded that they try to make their websites accessible, and very few (13%) had received training on accessibility. The primary reasons behind this were indicated to be a lack of knowledge of web accessibility guidelines, lack of technical knowledge, and time constraints. Nonetheless, many participants (74%) expressed interest in attending training sessions to learn more about accessibility, with the preferred topics being web accessibility guidelines, usability, and accessibility evaluation.
60
+
61
+ Freire, Russo, and Fortes (2008) surveyed 613 professionals in Brazil from diverse backgrounds (academia, industry, and government), who took part in web development projects [16]. The findings showed that only 20% of the participants considered accessibility as critical to their projects. Lack of training on accessibility and lack of knowledge about the Brazilian accessibility law were stated to be the primary reasons behind accessibility not being a priority among participants.
62
+
63
+ Modelling the studies mentioned above, Inal, Rızvanoğlu, and Yesilada (2019) surveyed 113 UX professionals in Turkey regarding their awareness and practice of web accessibility [19]. While most participants (71%) indicated that they had received training on web accessibility, many (69%) still did not consider accessibility in their projects. Moreover, only 17% of the participants reported working directly with people with disabilities for their projects and accessibility evaluations. A similar survey was conducted by Inal et al. (2020) with the participation of 167 UX professionals from Nordic countries [20]. Results show that while digital accessibility was considered to be important by the respondents, they had limited knowledge about accessibility guidelines and standards. Most of the organizations represented in this study included accessibility in their projects; however, the time these organizations spent on accessibility issues was reported to be very limited. The main challenges participants faced in creating accessible systems were lack of training, and time and budget constraints.
64
+
65
+ In summary, the studies conducted by the ENABLED Group [13] and Freire et al. [16] confirm that a lack of awareness of accessibility laws and a lack of training on web accessibility can largely hinder the development of accessible interfaces. On the other hand, the studies conducted by Lazar et al. [24] and Inal et al. [19] show that awareness or knowledge about web accessibility does not automatically lead to the development of accessible interfaces. Although design professionals are aware of the needs of people with disabilities, they still often do not take these needs into consideration.
66
+
67
+ The above-mentioned studies are mostly centered around accessibility for people with disabilities, which is different from that for seniors, as discussed earlier. While WCAG guidelines can be applicable to older people experiencing age-related impairments [47], merely following accessibility guidelines does not necessarily lead to the design being usable, nor does it help overcome the particular challenges facing older adults [27],[38]. There is still much work to be done in ensuring usability for seniors, as can be understood from the results of numerous previous studies [2],[5],[15],[29] which revealed how websites or apps are lacking in this regard. It has also been identified that there appears to be little awareness among designers of the specific requirements of older people compared to their knowledge of WCAG [38]. As a result, they are not considering the particular needs of a growing audience when designing user interfaces.
68
+
69
+ § 3 STUDY RATIONALE AND METHODS
70
+
71
+ Based on the surveyed literature, we claim that designing digital applications for older adults today faces challenges similar to those that designing for accessibility faced more than two decades ago. As such, it is imperative to identify the reasons behind the lack of senior-friendly interfaces; to fill this gap, research is needed on how UX professionals perceive and consider accessibility and usability for older adults. In this vein, we are inspired by Lazar et al.'s [24] landmark research on web accessibility, and we draw methodologically from that seminal research, which exposed gaps in the design process with respect to accessibility.
72
+
73
+ Aiming to extend Lazar et al.'s [24] work on digital accessibility to usability for seniors, we methodologically followed their protocol and adapted it to the emerging context of inclusive design for older adults. As such, we employed a quantitative survey-based methodology for this study. The survey was administered online using SurveyGizmo with 130 respondents. The survey was deployed in 2019, with the bulk of data collection occurring throughout 2020 (with several interruptions due to the COVID-19 pandemic's effect on the availability of research staff; however, we do not consider this extended recruitment period to have influenced the quality of survey responses, since no time-sensitive information was collected).
74
+
75
+ § 3.1 RESEARCH QUESTIONS
76
+
77
+ Through our research, we attempt to address the identified gaps in the literature by focusing on three main research questions:
78
+
79
+ RQ1: What is the level of understanding and awareness UX professionals have about accessibility and usability for seniors?
80
+
81
+ RQ2: How do UX professionals incorporate accessibility and usability for seniors in their design projects?
82
+
83
+ RQ3: What are the motivations for and challenges of ensuring usability for seniors by UX professionals?
84
+
85
+ § 3.2 QUESTIONNAIRE DESIGN
86
+
87
+ The questionnaire was derived from priorly validated research instruments on digital accessibility awareness and practices (see 3.2.1). We opted for this approach due to our assumption that designing for seniors may be at the same stage of awareness and practice as designing for accessibility was when Lazar et al. [24] conducted their seminal research on this topic. Additionally, using a priorly validated instrument (questionnaire) that was used in a similar domain facilitated the collection of more robust data which may not have been possible with an instrument developed from the ground up.
88
+
89
+ The questionnaire was subject to two rounds of pilot testing. The first round was conducted with two participants from academia and one participant from the industry. Questions were revised to address issues of clarity and ambiguity that emerged from the pilot. For the second round, the questionnaire was deployed online via SurveyGizmo, and was validated by two participants from academia. The final questionnaire comprised 32 questions, both open-ended and closed-ended, grouped into four sections:
90
+
91
+ 1. Personal Information included eight questions to obtain demographic information, such as geographic location, educational background, and work experience;
92
+
93
+ 2. General Understanding and Awareness included nine questions pertaining to RQ1, to determine knowledge of how seniors use the web, and awareness of assistive technologies, digital accessibility legislation, senior-friendly design guidelines, and tools;
94
+
95
+ 3. Practical Experience included ten questions pertaining to RQ2, to identify consideration of accessibility and usability for seniors in the UX practice, and the use of various research methods and evaluation techniques;
96
+
97
+ 4. Motivations and Challenges included five questions pertaining to RQ3, to understand challenges, and personal and organizational interests in supporting usability for seniors.
98
+
99
+ We clarify here that questions in the General Awareness and Practical Experiences groups were designed to compare 'general accessibility' practices with 'usability for seniors' practices. Questions in Motivations and Challenges focused only on 'usability for seniors'. Since our focus was on attitudes towards designing for seniors in general, we did not hypothesize anything specific about accessibility and usability. As such, the Results section is presented from the responses that emerged from these questions, and not from a preconceived structure.
100
+
101
+ The questionnaire was preceded by a consent form that outlined the purpose of the study, explained the rights of the participants, and assured them of complete anonymity. Following the consent form, participants were taken to a separate web page where they were presented with the questionnaire.
102
+
103
+ § 3.2.1 GROUNDING OF QUESTIONNAIRE DESIGN IN PRIOR WORK
104
+
105
+ Most questions were closely informed by previously developed and validated surveys, such as Lazar et al. [24] and Freire et al. [16], and extended to inquire about "designing for seniors", as opposed to "designing for people with disabilities". A breakdown of the survey questions by the source is provided in Table 1. The survey instruments are entirely available as supplementary materials included with the submission of this paper. The questions we included from Lazar's and Freire's instruments were selected based on how applicable these were to the process of considering various resources (e.g. guidelines) in making designs inclusive to a specific user group that was typically excluded from design considerations. This has allowed us to easily and objectively adapt their instruments (which were focused on accessibility) to our own domain - designing for older adults.
106
+
107
+ Table 1: Number of survey questions by source
+
+ Survey Section | No. of questions derived from Lazar et al. [24] | No. of questions derived from Freire et al. [16] | No. of questions added by authors
+ General Understanding and Awareness | 3 | 4 | 2
+ Practical Experiences | 2 | 2 | 6
+ Motivations and Challenges | 3 | 1 | 1
+
124
+ The extra questions we added were also extended versions of questions from Lazar et al. [24] and Freire et al. [16]. These questions were included to help gather data on senior-friendly design practices, which was not addressed in their study. For example, Lazar's question: "Are you familiar with any of the following accessibility guidelines from the Web Accessibility Initiative?" was extended to "Are you familiar with the senior-friendly design guidelines published by the National Institute on Aging and the National Library of Medicine?", while the question: "What do you think is the biggest challenge of making a website accessible for users with visual impairments?" was modified to "What do you think are the challenges of making websites or apps senior-friendly?". Questions regarding visual impairments were asked to examine general accessibility practices (similar to Lazar's), which can be a part of the UX practitioner's design process when designing for seniors. For example, the question: "Have you ever created a website that is accessible for users with visual impairments?" was extended to "Have you ever created a website or app that is accessible for seniors?". Similarly as an example, Freire's survey question asking the respondent to describe "Awareness of problems faced by blind people using the Internet" was adapted to our domain as "Describe your understanding of how seniors use websites". A complete description of our survey, including how questions were derived from the instruments used in Lazar’s and Freire’s research and adapted to our domain (older adults), is included in the supplementary materials submitted together with this paper.
125
+
126
+ Deriving our questionnaire from previously validated instruments required us to compress some questions and to inquire about accessibility practices at a coarser level of granularity. As such, while questions about familiarity with specific accessibility tools were not included, an option was provided for participants to type in the tools they were familiar with. However, given their relevance to designing for older adults [15], we retained questions regarding visual impairments to examine general accessibility practices (similar to Lazar's), which can form part of the UX practitioner's design process when designing for seniors. Questions regarding assistive technologies were adopted from Freire et al. [16] with very minor modifications, and participants were asked to choose from a broad range of answers. The same set of options was also used by Inal et al. [19]. In the same manner, the question about ethical considerations was not completely removed; it was included as part of the motivation-related questions.
127
+
128
+ § 3.2.2 DEFINITION OF 'SENIORS'
129
+
130
+ Studies in the literature vary in their definition of 'seniors' and the age range they belong to. Generally, the age range defined for 'seniors' is either over 60 or over 65. However, some research uses a lower threshold or a flexible threshold by tying it to the typical or legal retirement age. Given that the participants in this study were not older adults, but rather UX professionals of varying ages, it was considered that imposing a standard age range for seniors may have been limiting and also insensitive to the localized and personal socio-cultural norms in which each UX professional may operate. Therefore, when answering the questions, participants were asked to think of their own definition of 'seniors' and the age range they belonged to as relevant to their culture and experiences.
131
+
132
+ § 3.3 RECRUITMENT
133
+
134
+ Prospective participants were invited to express interest in the study by filling out a short enrollment form which served as a screener to ensure quality of data and to avoid fraudulent responses [41]. The enrollment form was posted on professional UX design groups on various closed-group (member-only) social media channels and promoted through personal contacts and announcements posted on e-newsletters and social media groups informally associated with several design communities. Participants were asked to briefly describe their work experience in the enrollment form, and those considered to be 'legitimate' responders [41] with a background in UX, were emailed a link to the questionnaire.
135
+
136
+ Participation was not restricted to a geographical location, given that this was an online survey. As a token of appreciation for their time and contributions, participants were offered a $10 Amazon gift card in a currency of their choice (US dollars or Canadian dollars) once they signed the consent form.
137
+
138
+ § 3.4 PARTICIPANTS
139
+
140
+ In total, 130 participants completed the survey. Participants had to meet the following eligibility criteria: be at least 18 years of age and be a design professional.
141
+
142
+ We used the term 'design professional' in the survey instead of 'UX professional' to include people who did not have UX-specific job titles but were still involved in user-centered design processes. For the purpose of this study, a 'design professional' is defined as someone who designs or provides consultancy services in the design of user interfaces for websites or apps that are not for their own personal use. This clarifying definition was provided in the consent form to help prospective participants decide whether they identified as design professionals.
143
+
144
+ Since the survey did not ask for personal information, there was no risk to participants self-identifying as design professionals. Whether the participants actually worked as design professionals was not verified, since requiring formal verification of participants' professions was not in line with the ethical guidelines for requesting excessive personal data. This limitation was mitigated by the recruitment strategy, as the study was only advertised on closed professional UX groups.
145
+
146
+ § 3.5 ANALYSIS
147
+
148
+ Data obtained from the online survey was exported from SurveyGizmo and collated in a spreadsheet. The responses were then reviewed to ensure completion and consistency and to identify duplicates or outliers. Responses to open-ended questions were reviewed for quality by checking if the answers provided were relevant to the questions asked, in order to avoid any fraudulent responses [41].
149
+
150
+ Following quality assurance checks, data was then processed and coded to carry out the analysis. Descriptive statistics were employed to analyze the quantitative data provided by the multiple-choice questions. Comparative statistical analysis was not conducted due to the exploratory nature of the data and research questions.
151
+
152
+ Free text responses to open-ended questions were subjected to a thematic analysis to identify patterns across the data, and to complement and contextualize the quantitative findings from the survey. The thematic analysis was done using a data-driven inductive approach by the lead author - an experienced UX designer and techno-social researcher. Responses were systematically coded and reviewed for common, emergent themes following the guidelines by Braun and Clarke [4]. Since the responses were short and fact-oriented, and also not the main source of analysis for the predominantly quantitative instrument, a more extensive dual-annotator analysis was not necessary.
153
+
154
+ § 4 RESULTS
155
+
156
+ This section presents a synthesis of the data collected from the survey. Survey findings have been presented in tabular format to ensure accessibility (see supplementary materials for graphical format). Percentages have been rounded to the nearest whole number in the text to improve readability.
157
+
158
+ § 4.1 DEMOGRAPHICS
159
+
160
+ A summary of the demographic profile of the 130 participants is presented in Table 2. Participants worked in various industries, with the principal organizational areas being information technology (37%), followed by education (18%) and finance (11%). The job titles of the participants included a wide range of UX roles, such as UX designer, product designer, design lead, UX architect, UX researcher, chief design officer, design strategist, UI designer, information architect, and UX consultant. The average length of the participants' work experience as a professional or a consultant in the field of UX was 6.55 years (SD = 6.09). Regarding their personal rating of experience in the field, most of the participants described their level of experience as intermediate (44%) or advanced (32%). In terms of education, most of the participants had some form of post-secondary education with either a bachelor's degree (55%) or a master's degree (29%). A large number of participants (74%) also received professional training or education in the fields of UX and/or HCI.
161
+
162
+ Table 2: Demographic profile of participants (n = 130)
+
+ Variables | Responses | n | %
+ Geographic location | Canada | 55 | 42.3
+  | United States | 41 | 31.5
+  | Other | 34 | 26.2
+ Industry | Education / Research | 23 | 17.7
+  | Finance / Banking / Insurance | 14 | 10.8
+  | Government / Military | 4 | 3.1
+  | Healthcare / Medical | 3 | 2.3
+  | Information Technology | 48 | 36.9
+  | Telecommunications | 3 | 2.3
+  | Other | 29 | 22.3
+  | Not sure | 6 | 4.6
+ Education | High school degree or equivalent | 5 | 3.8
+  | Some college, no degree | 11 | 8.5
+  | Associate degree | 3 | 2.3
+  | Bachelor's degree | 72 | 55.4
+  | Master's degree | 37 | 28.5
+  | Professional degree | 1 | 0.8
+  | Doctorate | 1 | 0.8
+ Professional education in HCI/UX | Yes | 96 | 73.8
+  | No | 19 | 14.6
+  | Other | 8 | 6.2
+  | Not sure | 7 | 5.4
+ Level of experience in HCI/UX | Expert | 19 | 14.6
+  | Advanced | 42 | 32.3
+  | Intermediate | 57 | 43.8
+  | Basic | 12 | 9.2
+ Web accessibility training/education | Undergraduate courses | 30 | 23.1
+  | Graduate courses | 10 | 7.7
+  | Online courses | 56 | 43.1
+  | Training in the workplace (current or past) | 54 | 41.5
+  | Other | 14 | 10.8
+  | No training or education in web accessibility | 31 | 23.8
+
+ § 4.2 GENERAL UNDERSTANDING AND AWARENESS
273
+
274
+ § 4.2.1 DIGITAL ACCESSIBILITY TRAINING AND EDUCATION
275
+
276
+ Participants were asked what kind of professional training or education they received in web accessibility through a multiple selection question. Most participants (76%) received some form of web accessibility education with the most common sources being online courses and workplace training programs, followed by undergraduate and graduate courses. Some of the other sources mentioned by participants include conferences, meetups, webinars, bootcamps, and personal research. It is notable that almost one-fourth of the participants did not receive any professional training or education in web accessibility (Table 2).
277
+
278
+ § 4.2.2 UNDERSTANDING SENIOR USER NEEDS
279
+
280
+ Concerning understanding of senior user needs, 58% of the participants stated that they knew how seniors use websites and how to design for them. The remaining 42% did not know how to design for seniors, and among them, 15% had no knowledge of how seniors use the web. Since many older adults use assistive technologies to access digital services, participants were also asked to specify all the assistive technologies they were familiar with through a multiple selection question. Almost all participants (96%) were familiar with assistive technologies, with the most popular selections being speech recognition tools, screen magnifiers, and screen readers (Table 3).
281
+
282
+ § 4.2.3 WEB ACCESSIBILITY LEGISLATION AND GUIDELINES
283
+
284
+ Only 49% of the participants reported that they were familiar with government laws on digital accessibility. Responses from a follow-up question regarding the level of familiarity they had with the accessibility laws have been summarized in Table 3. 36% of the participants reported understanding and following digital accessibility laws. On the other hand, 47% of the participants barely knew or never heard of any accessibility laws.
285
+
286
+ Table 3: General understanding and awareness
+
+ Responses | n | %
+ Level of understanding of how seniors use websites
+ I am aware that seniors can use websites, but I don't know how they use them | 19 | 14.6
+ I know how seniors use websites, but I don't know how to design for them | 36 | 27.7
+ I know how seniors use websites and how to design for them, but I haven't designed for them | 40 | 30.8
+ I know how seniors use websites and I have designed for them | 35 | 26.9
+ Familiarity with assistive technologies
+ Screen reader | 109 | 83.8
+ Screen magnifier | 110 | 84.6
+ Braille-based tools (e.g. printers, embossed printers) | 55 | 42.3
+ Text-only browser | 74 | 56.9
+ Alternative keyboard | 53 | 40.8
+ Alternative mouse and joystick | 44 | 33.8
+ Speech recognition tools (e.g. Siri) | 110 | 84.6
+ Other | 4 | 3.1
+ I am not familiar with any assistive technology | 5 | 3.8
+ Level of familiarity with accessibility laws
+ I know the relevant law(s) and its web-related implications, and follow it | 47 | 36.2
+ I know the relevant law(s) and its web-related implications, but don't follow it | 11 | 8.5
+ I know the relevant law(s), but not its web-related implications | 11 | 8.5
+ I have heard about it / I barely know about it | 29 | 22.3
+ I have never heard about it | 32 | 24.6
+ Familiarity with accessibility guidelines
+ Web Content Accessibility Guidelines (WCAG) | 88 | 67.7
+ Authoring Tool Accessibility Guidelines (ATAG) | 8 | 6.2
+ User Agent Accessibility Guidelines (UAAG) | 12 | 9.2
+ I am not familiar with any accessibility guidelines | 39 | 30
+
+ Participants were asked which accessibility guidelines they were familiar with through a multiple selection question, and many reported being familiar with the WCAG (68%). It is also worth mentioning that 30% of the participants were not familiar with any accessibility guidelines. Regarding knowledge of accessibility checking tools, 64% reported being familiar with these tools with specific mentions of WAVE, AChecker, Axe, Google Lighthouse, Contrast Analyzer, Siteimprove, etc.
373
+
374
+ § 4.2.4 SENIOR-FRIENDLY DESIGN GUIDELINES
375
+
376
+ Most participants (83%) were not familiar with the senior-friendly design guidelines published by the NIA and NLM. Only 12 participants (9%) reported knowing about these guidelines. When inquired about familiarity with other senior-friendly design guidelines, most participants (73%) did not know of any other senior-friendly design guidelines either.
377
+
378
+ § 4.3 PRACTICAL EXPERIENCES
379
+
380
+ § 4.3.1 WEB ACCESSIBILITY AND USABILITY FOR SENIORS AS PART OF PROJECTS
381
+
382
+ Concerning previous experience, 54% of the participants reported having designed accessible interfaces for users with visual impairments. On the other hand, 41% reported never having created any website or app that was accessible to users with visual impairments. Likewise, in terms of designing for older adults, 43% of the participants had previously created websites or apps that were accessible to seniors, while 40% had no previous experience designing for seniors.
383
+
384
+ The majority of the participants (81%, n = 105) reported considering accessibility in the design projects they were involved in. A few of them (6%, n = 8) did not consider accessibility in their projects, and they were asked to explain their reasons through an open-ended question. All eight participants responded, and the reasons they stated can be classified under the following themes: accessibility not being included in the project scope (n = 4), accessibility not being a requirement for the target group/customer (n = 2), time and budget constraints (n = 1), lack of client support (n = 1), and lack of information and tools for accessibility (n = 1).
385
+
386
+ In terms of senior-friendliness, only 40% (n = 52) of the participants considered usability for seniors in the design projects they were involved in, while 34% (n = 44) stated that they did not consider designing for seniors. The reasons given by 43 of these participants for not considering senior-friendliness in their projects can be classified under the following themes: senior-friendliness not being a requirement for the target group/customer (n = 32), lack of awareness of how seniors use the Internet (n = 4), senior-friendliness not required by clients/stakeholders (n = 4), senior-friendliness not being a priority (n = 3), no opportunity to interact with seniors (n = 2), lack of knowledge about designing for seniors (n = 1), limited project scope (n = 1), and time and budget constraints (n = 1).
387
+
388
+ § 4.3.2 RESEARCH METHODS FOR ACCESSIBLE AND SENIOR- FRIENDLY DESIGNS
389
+
390
+ The 105 participants who reported considering accessibility in their design projects were asked which research methods they used to design for users with disabilities through a multiple selection question. A similar question regarding research methods was asked to the 52 participants who reported considering senior-friendliness in their design projects. According to the responses, the most widely used method for both designing for users with disabilities and designing for seniors was following accessibility guidelines (Table 4).
391
+
392
+ § 4.3.3 EVALUATION TECHNIQUES FOR ACCESSIBLE AND SENIOR- FRIENDLY DESIGNS
393
+
394
+ The participants who reported considering accessibility and/or senior-friendliness in their design projects were also asked about the evaluation techniques they used when designing for people with disabilities or seniors. For designing for people with disabilities, the most prominent evaluation technique among the 105 participants was checking for compliance with accessibility guidelines. On the other hand, for evaluation techniques for designing for seniors, the most widely used technique among the 52 participants was conducting usability tests with seniors (Table 4).
395
+
396
+ Table 4: Use of research methods and evaluation techniques
+
+ Research methods | Designing for users with disabilities (n=105): n / % | Designing for senior users (n=52): n / %
+ Follow accessibility guidelines | 80 / 76.9 | 32 / 61.5
+ Follow senior-friendly design guidelines | 20 / 19.2 | 12 / 23.1
+ Conduct interviews | 34 / 32.7 | 16 / 30.8
+ Conduct surveys | 24 / 23.1 | 12 / 23.1
+ Generate personas | 32 / 30.8 | 20 / 38.5
+ Conduct usability tests | 31 / 29.8 | 21 / 40.4
+ Conduct participatory design | 15 / 14.4 | 7 / 13.5
+ Conduct heuristic evaluations | 48 / 46.2 | 20 / 38.5
+ Other | 8 / 7.7 | 3 / 5.8
+ I don't use any research methods | 8 / 7.7 | 6 / 11.5
+
+ Evaluation techniques | Designing for users with disabilities (n=105): n / % | Designing for senior users (n=52): n / %
+ Conduct usability tests with users with disabilities | 33 / 31.4 | 14 / 26.9
+ Conduct usability tests with seniors | 35 / 33.3 | 28 / 53.8
+ Test with automatic accessibility assessment tools | 55 / 52.4 | 19 / 36.5
+ Check compliance according to accessibility guidelines | 60 / 57.1 | 25 / 48.1
+ HTML validation | 47 / 44.8 | 16 / 30.8
+ CSS validation | 40 / 38.1 | 15 / 28.8
+ Test with assistive technologies | 32 / 30.5 | 14 / 26.9
+ Other | 5 / 4.8 | 3 / 5.8
+ I don't evaluate my designs | 7 / 6.7 | 9 / 17.3
+
+ § 4.4 MOTIVATIONS
477
+
478
+ § 4.4.1 PERCEPTIONS OF USABILITY FOR SENIORS IN ORGANIZATIONS
479
+
480
+ Participants were asked to rate the importance given to accessibility for seniors by their organizations or independent practices. While responses varied, accessibility for seniors was deemed to be less important by many organizations (31%, n = 40). The distribution of the other responses was as follows: 12% very important, 18% fairly important, 15% important, and 15% not important.
481
+
482
+ § 4.4.2 MOTIVATIONS FOR USABILITY FOR SENIORS
483
+
484
+ Participants were asked about their organizational and personal motivations in ensuring usability for seniors through two separate multiple selection questions. The most cited motivational factor for organizations was customer requirements (80%), followed by being inclusive (69%) and abiding by the laws (66%). Concerning personal motivations, most of the participants stated being inclusive (82%), followed by being ethical (78%) and developing better products (76%), to be the primary motivations for ensuring usability for seniors (Table 5).
485
+
486
+ Table 5: Motivations for ensuring usability for seniors
+
+ Motivations | Organizational n / % | Personal n / %
+ Abiding by the laws | 86 / 66.2% | 60 / 46.2%
+ Being ethical | 75 / 57.7% | 101 / 77.7%
+ Being inclusive | 89 / 68.5% | 107 / 82.3%
+ Customer requirements | 103 / 79.2% | 75 / 57.7%
+ Developing better products | 78 / 60% | 99 / 76.2%
+ Finding research opportunities | 39 / 30% | 50 / 38.5%
+ Increasing income | 55 / 42.3% | 40 / 30.8%
+ Organizational requirements | 55 / 42.3% | 35 / 26.9%
+ Search engine optimization | 24 / 18.5% | 18 / 13.8%
+ Other | 4 / 3.1% | 2 / 1.5%
+ Not sure | 1 / 0.8% | 1 / 0.8%
+
+ § 4.5 CHALLENGES
531
+
532
+ § 4.5.1 CHALLENGES OF ENSURING USABILITY FOR SENIORS
533
+
534
+ All participants were asked what the challenges of making websites or apps senior-friendly were through a multiple selection question. The most cited challenges by participants were lack of awareness regarding accessibility for seniors (75%), lack of training/knowledge (74%), time constraints (62%), budget restrictions (60%), and accessibility for seniors not being a requirement for the organization (Table 6).
535
+
536
+ Table 6: Challenges of ensuring usability for seniors
+
+ Challenges | n | %
+ Lack of awareness regarding accessibility for seniors | 98 | 75.4
+ Lack of training/knowledge | 96 | 73.8
+ Time restrictions | 81 | 62.3
+ Budget restrictions | 78 | 60
+ Accessibility for seniors is not a requirement for the organization | 77 | 59.2
+ Lack of senior-friendly design guidelines | 74 | 56.9
+ Accessibility for seniors is not a requirement for the target group/customers | 72 | 55.4
+ Lack of support from management | 63 | 48.5
+ Lack of human resources | 41 | 31.5
+ No legal repercussions | 41 | 31.5
+ Accessibility for seniors is not seen as a personal responsibility | 33 | 25.4
+ Accessibility for seniors is outside the job description | 26 | 20
+ Other | 4 | 3.1
+
+ § 5 DISCUSSION
584
+
585
+ This section revisits the findings from the survey and discusses key themes regarding challenges that affect the design of senior-friendly interfaces. The research questions asked in this study were exploratory in nature and were aimed at bringing to light the current practices of UX professionals in the context of designing for seniors. Formulating hypotheses was, therefore, not suitable for the type of research questions asked.
586
+
587
+ The key contribution of our study is the quantitative data from the survey, which we presented in the previous section and interpret here in more detail. In addition to this quantitative data, we draw on statements from participants to reflect on its interpretation throughout the discussion. We did not report the qualitative survey data in the Results section since most of our data came from the quantitative survey questions, with the free-text answers providing only a small addition. These answers were subject to thematic analysis, and the insights gained provide nuance and interpretation to the main results.
588
+
589
+ § 5.1 GENERAL UNDERSTANDING AND AWARENESS
590
+
591
+ The level of understanding and awareness among UX professionals about digital accessibility and usability for seniors was examined through the following dimensions: participation in web accessibility training, understanding of how senior users use the web, and familiarity with assistive technologies, digital accessibility legislation, standards, and tools, and senior-friendly design guidelines.
592
+
593
+ Although the survey was focused on senior-friendly design practices, the results suggest some parallels and connections to web accessibility frameworks which are worth discussing. Various training programs on web accessibility are provided in both industry and academia for design professionals to develop a practical understanding of accessibility legislation, standards, and guidelines. It is evident from the responses that most of the participants (76%) received education on web accessibility, largely through online courses and training programs at their workplace, while thirty-one participants (24%) did not go through any formal accessibility education. Although it is concerning that one-fourth of the participants did not undergo any accessibility training, these numbers have improved considerably over the years, as evident from previous studies [13],[16], implying that web accessibility training has gained popularity over time and more professionals are able to access these programs. This distribution of attendance in digital accessibility training was found to be similar to other recent studies on UX professionals in Turkey [19] and the Nordic countries [20].
594
+
595
+ Regarding familiarity with accessibility legislation, half the participants were not familiar with any government laws on web accessibility. Surprisingly, even among the 99 participants who went through web accessibility training, only 53 were aware of these policies. In contrast, most participants in Lazar et al. [24] (74%) were familiar with accessibility legislation. This is important to consider since one of the most important factors influencing organizations to prioritize accessibility is governments enforcing legal compliance with accessibility standards [19],[20],[24]. On the other hand, in line with previous research [16],[19],[24], participants were mostly familiar with accessibility guidelines from the Web Accessibility Initiative (WAI), with WCAG (68%) being the most well-known set of guidelines and ATAG and UAAG being the least known. Most participants were also aware of automated accessibility tools, similar to Lazar et al. [24]. The level of awareness of accessibility guidelines and tools reported by participants in this study was higher than in Inal et al. [20], which showed that very few UX professionals in the Nordic countries were familiar with web accessibility guidelines and accessibility assessment tools.
596
+
597
+ Although participants were generally familiar with different aspects of accessibility, there was a notable lack of awareness among participants regarding designing for seniors. Concerning understanding of senior user needs, 75 of the 130 participants (58%) stated that they knew how seniors use websites and how to design for them. The remaining 55 participants (42%) did not know how to design for seniors, and among them, 19 had no knowledge of how seniors used the web. A large number of participants were also not familiar with the senior-friendly design guidelines published by the National Institute on Aging (NIA) and the National Library of Medicine (NLM), which are the most cited set of design guidelines accommodating older adults' needs. Most participants were not aware of other senior-friendly guidelines either, which raises questions and contributes to the discussion regarding the transferability of HCI research-based recommendations from academia to practitioners in the technology design industry [34].
598
+
599
+ § 5.2 PRACTICAL EXPERIENCES
600
+
601
+ The current practices of UX professionals in the context of designing for accessibility and usability were examined through the following dimensions: consideration of digital accessibility in projects, consideration of usability for seniors in projects, and research methods and evaluation techniques used in both cases.
602
+
603
+ Findings reveal that most participants (81%) reported considering digital accessibility in the design projects they were involved in, which shows a greater rate of adoption compared to previous studies [13],[16],[19],[24]. This could be a result of their increased awareness of web accessibility guidelines and tools. Only eight participants mentioned not considering accessibility in their projects, and the reasons they stated were project scope not including accessibility, target group/customers not requiring accessibility, time and budget constraints, lack of client support, and lack of information and tools available for accessibility. Most of these reasons for not considering accessibility have also been observed in other studies [19],[20]. However, unlike in previous research [19], where lack of awareness played a significant role in the non-consideration of accessibility in projects, it was not among the reasons stated by participants here.
604
+
605
+ In terms of incorporating senior-friendliness, 60% of the participants did not consider usability for seniors in their projects. The most prominent reason behind this lack of consideration was that seniors were not their target demographic. Other reasons stated by participants included lack of awareness of how seniors use the Internet, senior-friendliness not being required by clients or stakeholders, senior-friendliness not being a priority, lack of interaction opportunities with seniors, lack of knowledge about designing for seniors, limited project scope, and time and budget constraints. In comparison to their consideration of digital accessibility, while there are a few overlaps in the reasons, especially in terms of project characteristics, what stands out are the reasons related to awareness and expertise in designing for seniors, which did not seem to be an issue in the case of accessibility. This is also supported by earlier findings on general awareness (see 5.1), where participants were observed to be more familiar with accessibility compared to usability for seniors.
606
+
607
+ On comparing the HCI methods used for designing for people with disabilities and those used for designing for seniors, it was found that participants mostly followed an accessibility-guidelines-based approach for both demographics. Among the participants who considered accessibility in their projects, the most common method applied to ensure their design met the requirements of users with disabilities was adhering to accessibility guidelines, followed by conducting heuristic evaluations. It is worth noting that neither of these methodologies involves the target users; both can be conducted without their participation. When designing for seniors, participants again primarily focused on accessibility guidelines, followed by usability tests with seniors, heuristic evaluations, and persona generation based on seniors. In this case, participants considered involving target users to some extent through usability testing, but still relied largely on HCI methods that did not require user involvement.
608
+
609
+ Given the high preference for accessibility guidelines, the most common evaluation technique for accessibility among participants was to check for compliance with those guidelines. Other evaluation techniques used by participants include testing with automated accessibility assessment tools and HTML validation. The same methodologies have also been observed in other studies on UX professionals [19],[20]. Only seven participants (7%) reported not evaluating their designs for accessibility, compared to 48% in older studies [16], which again shows the increased adoption of accessibility practices in the industry.
610
+
611
+ Regarding evaluating designs for senior-friendliness, usability testing was the most common technique used to ensure designs met the needs of senior users, followed by checking for compliance with accessibility guidelines and testing with automated accessibility assessment tools. Usability principles specific to seniors were barely used in the design of user interfaces for older adults, and this could be attributed to the earlier finding regarding the lack of familiarity with senior-friendly design guidelines (see 5.1).
612
+
613
+ § 5.3 MOTIVATIONS AND CHALLENGES
614
+
615
+ UX professionals' motivations for ensuring usability for seniors and the challenges they face in the process were examined through the following dimensions: perceptions of usability for seniors in organizations, motivations for usability for seniors at the organizational level and at an individual level, and challenges of ensuring usability for seniors.
616
+
617
+ Most organizations represented in this study deemed usability for seniors to be 'less important', in contrast to Inal et al.'s [20] findings on organizational perspectives, where digital accessibility was perceived to be an important asset to many organizations. The main drivers to ensure usability for seniors for these organizations were customer requirements, inclusion of all users, and legal repercussions. Participants believed that their organizations would be more interested in ensuring usability for seniors if it was required by their customers. They also thought that their organizations would be motivated to incorporate senior-friendliness if they realized the need to be inclusive to all user groups and if they were obligated by law. These findings are similar to Lazar et al. [24], where government regulations and knowing that people with disabilities are using their websites were the biggest motivators for participants to make their websites accessible, and can be observed in other more recent studies as well [16],[19],[20]. From a personal perspective, inclusivity, ethics, and the desire to develop better products were reported to be the main drivers for taking usability for seniors into account. The concept of ethics was also discussed by Lazar et al. [24] as most participants in their study reportedly considered ethics to be important in the development of accessible websites.
618
+
619
+ Regarding challenges of ensuring usability for seniors, the most important challenges stated by the participants were lack of awareness regarding accessibility for seniors, lack of training or knowledge, time and budget restrictions, and accessibility for seniors not being a requirement for the organizations. Other challenges cited by participants, in descending order of frequency, include lack of support from management, lack of human resources, no legal repercussions, accessibility for seniors not being seen as a personal responsibility, and accessibility for seniors being outside the job description. Some of the key themes that emerged from participants' responses regarding challenges that affect the design of senior-friendly interfaces are discussed below:
620
+
621
+ § 5.3.1 SENIORS ARE NOT THE TARGET USERS
622
+
623
+ Generally, the design requirements of products and services are based on the needs and pain points of the target user group. Based on responses from the participants, it is evident that seniors are barely considered part of the main target demographic, even for applications that are generic in nature. One of the main reasons behind this is the common misconception that seniors are not tech-savvy or do not use such online services. As a result, designing for them is often overlooked in favor of target user groups that are perceived to be more profitable, thus contributing to "digital ageism". Complementing several market and government census reports, research data from across the globe show that the percentage of older adults who use the Internet is increasing [14],[32],[35],[39]. Due to this perceived lack of senior users, many organizations are losing out on customers by not putting in the effort required to meet the needs of a considerable segment of their audience.
624
+
625
+ § 5.3.2 LACK OF STANDARDIZED SENIOR-FRIENDLY DESIGN GUIDELINES
626
+
627
+ Another challenge mentioned by participants was the lack of design guidelines that focused specifically on the needs of senior users. This was expected as very few design professionals were familiar with the guidelines published by NIA and NLM, or other guidelines. Of the 52 participants who reported considering usability for seniors in their projects, only 8 were familiar with these guidelines. This implies that these guidelines are barely used when designing for seniors. It is also evident from responses to other questions in the survey that participants were more familiar with the web accessibility guidelines and preferred using them, as opposed to the senior-friendly design guidelines, when designing for seniors. This lack of familiarity with senior-friendly guidelines can be attributed to the fact that they are not as universal or standardized as the web accessibility guidelines.
628
+
629
+ § 5.3.3 LACK OF SUPPORT FROM STAKEHOLDERS
630
+
631
+ Another common barrier to senior-friendly design cited by participants was the lack of support from stakeholders or clients who commissioned the designers' services. Most clients are neither aware of nor knowledgeable about the need for senior-friendly designs, and as a result, the project briefs they provide rarely include accessibility for seniors as a crucial requirement. In order to consider accessibility for seniors in projects, UX professionals need additional time and resources, yet budgets for these processes are often too restricted. Unless the client is on board, it is difficult for UX professionals to get the budget or the time to incorporate the needs of senior users, or to convince the client why certain design choices must be made to accommodate related concerns. One participant stated:
632
+
633
+ "Once the client realizes this is a target market, there is no longer a question about UX for seniors. It all begins with the client."
634
+
635
+ If usability for seniors is not listed as a client requirement, it comes down to the time and cost budgeted for the project, and then accessibility for seniors is no longer a priority.
636
+
637
+ § 5.3.4 AESTHETICS VS ACCESSIBILITY
638
+
639
+ An important aspect brought up by a few participants was the prioritization of aesthetics over accessibility for seniors. Participants mentioned that stakeholders did not care much about accessibility because elegant design is what attracted new business, as also evidenced in Lazar et al. [24]. As a result, they would rarely budget for accessibility. Many designers took a similar approach, assuming that in order to design for seniors, the trade-off would be a generic, less attractive, and less engaging product. For example, one participant mentioned:
640
+
641
+ "Sometimes we let design overrule contrast warnings and text size warnings since these don't affect the vast majority of our non-senior, non-consumer audience".
642
+
643
+ However, as evident from previous studies [48], when user interfaces are designed to be accessible, they render a positive user experience for users both with and without disabilities.
644
+
645
+ § 6 KEY INSIGHTS
646
+
647
+ This study highlighted several key issues that UX professionals face with respect to making their products more usable and more accessible to seniors. A summary of these issues is included below. Uncovering these, in our view, is an essential step toward addressing the lack of senior-centered focus within the UX practice. Some of these insights are similar to those exposed by Lazar et al. [24] with respect to accessibility, suggesting that (a) designing for seniors is yet to "catch up" to the gains made with respect to designing for accessibility, and (b) the issues uncovered here are not intractable, as Lazar et al.'s work [24] acted as the spark for numerous changes in accessible design. Further research is needed to determine the appropriate course of action to address the issues and gaps that our study exposed (which is outside the scope of this paper and would be too speculative to include here). Meanwhile, we invite the broader research and design practice community to use these as starting points in reflecting on approaches to address the many issues identified by our survey.
648
+
649
+ 1. While UX professionals are generally aware of web accessibility guidelines, tools, and assistive technologies, their level of awareness regarding how to design for seniors and the availability of senior-friendly design principles is notably low.
650
+
651
+ 2. Very few UX professionals consider usability for seniors in the design projects they are involved in, primarily due to senior-friendliness not being a requirement of the target user group and lack of knowledge regarding designing for seniors.
652
+
653
+ 3. The main methodologies used by UX professionals when designing for senior users are to follow accessibility guidelines and to conduct usability tests with older adults.
654
+
655
+ 4. UX professionals' familiarity with, and use of, senior-focused usability principles is minimal despite the availability of a wide variety of research-based recommendations.
656
+
657
+ 5. Organizations are motivated to ensure usability for seniors in their products when their customers require it, when they want to be inclusive to all user groups, and when it is required by law.
658
+
659
+ 6. At a personal level, UX professionals are motivated to design for seniors due to inclusiveness, ethics, and the desire to develop better products.
660
+
661
+ 7. Older adults are generally not considered to be the target demographic by most organizations, which leads to stakeholders not budgeting for the time and resources required to ensure usability for seniors.
662
+
663
+ 8. Higher emphasis is placed on visual design and aesthetics compared to accessibility features and usability needs for seniors.
664
+
665
+ § 6.1 LIMITATIONS AND FUTURE WORK
666
+
667
+ While our study draws methodologically from prior research, including similar sample sizes and validated instruments, there are inherent limitations to our findings. Primarily, these limitations stem from the exclusive use of Internet-based surveys, the only research method available to us during significant periods of pandemic-related lockdowns and restrictions on research activities. In coordination with our university's ethics and research office, we implemented various mechanisms to ensure that survey responses were completed in good faith; however, these mechanisms cannot verify the specific accuracy of responses (e.g., time spent in industry, or number of projects worked on).
668
+
669
+ There are additional limitations inherent to surveys as a data collection method, such as the difficulty of answering "why" questions and of gaining a deeper understanding of the challenges respondents face in their design practice. We plan to conduct follow-up in-person contextual inquiry sessions with some of our survey's respondents (most have provided their contact information for follow-up), which will be situated in the context of their work or practice.
670
+
671
+ § 7 CONCLUSION
672
+
673
+ This research focused on investigating the perspectives and practices of design professionals in the context of designing for seniors. The study was conducted using an online survey, and 130 design professionals from various industries participated. The results of the study show that most UX professionals are familiar with web accessibility guidelines and assistive technologies. However, there is a considerable lack of awareness regarding how to design for seniors, and a large number of design professionals are also not familiar with any senior-friendly design guidelines. Results also suggest that only a few UX professionals consider usability for seniors in the design projects they are involved in. The primary reasons cited for this are senior-friendliness not being a requirement for the target group/customer, lack of awareness of how seniors use the Internet, senior-friendliness not being required by clients/stakeholders, and senior-friendliness not being a priority.
674
+
675
+ This study opens the door for future investigations that may explore and validate approaches to improving UX professionals' awareness of designing for seniors. A follow-up study will focus on larger-scale surveys that refine the understanding gained in this research and allow for more complex factor analysis. Further research will also include in-person guided interviews with participants. The primary goal of this study was to bring to light the lack of awareness and understanding that UX professionals have in terms of designing for seniors, and to identify some of the very specific causes of this issue. The knowledge obtained about these causes is a first and very important step toward addressing the overarching lack of consideration of seniors in the design of user interfaces. Similar to Lazar et al. [24], this study lays the groundwork for other researchers to propose ways to address this issue and improve the state of usability for seniors in the UX practice. Overall, it is a valuable account of the current state of awareness and activity in the field of technology design with regard to usability for older adults, and a reminder that there is much work to be done to promote the how and why of designing for an older audience.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/x_MfBxtP2Y/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,397 @@
1
+ Saliency Driven Gaze Control for Autonomous Pedestrians
2
+
3
+ Category: Research
4
+
5
+ ![01963e0b-e5cc-7456-af21-037726a7e985_0_218_327_1365_401_0.jpg](images/01963e0b-e5cc-7456-af21-037726a7e985_0_218_327_1365_401_0.jpg)
6
+
7
+ Figure 1: Components of the proposed Particle Gaze method. A saliency map is generated in real time from the current agent view (Left). The values from this saliency map are used to set the Z values of a spline surface, representing a potential field $V(x, y)$ (Middle). The gradient of this potential field, $-\overrightarrow{\nabla} V$, is used to move the center of gaze towards a minimum in the field.
8
+
9
+ ## Abstract
10
+
11
+ How and why an agent looks at its environment can inform its navigation, behaviour and interaction with the environment. A human agent's visual-motor system is complex and requires both an understanding of visual stimuli and adaptive methods to control and aim its gaze in accordance with goal-driven behaviour or intent. Drawing from observations and techniques in psychology, computer vision and human physiology, we present techniques to procedurally generate various types of gaze movements (head movements, saccades, microsaccades, and smooth pursuits) driven entirely by visual input in the form of saliency maps, which represent pre-attentive processing of visual stimuli, in order to replicate human gaze behaviour. Each method is designed to be agnostic to attention and cognitive processing, able to cover the nuances of each type of gaze movement, and to support desired intentional or passive behaviours. In combination with parametric saliency map generation, they serve as a foundation for modelling completely visually driven, procedural gaze in simulated human agents.
12
+
13
+ Index Terms: Computing methodologies-Agent / discrete model; Computing methodologies-Procedural animation
14
+
15
+ ## 1 INTRODUCTION
16
+
17
+ Modelling and simulating human gaze is a complicated endeavour. There are many approaches for estimating and approximating how a human agent may observe the world, most of which aim to replicate it to create believable-appearing humans. These methods are effective and often take advantage of scene information from the simulation to calculate believable gaze patterns. However, if one's goal is to replicate gaze starting not from the desired end result but from the basis of vision, then it is important to follow a guideline of 'sensory honesty': generating gaze movements in an agent-encapsulated manner, that is, without using information or data that a real human agent would not have. This work presents two methods for controlling gaze using only visual information as input. Specifically, saliency maps are used as input, representing the pre-attentive processing of visual stimuli by the human psycho-visual system.
18
+
19
+ There are many factors and complications to consider when presenting a model of human gaze. As such, we construct a framework for authoring a variety of gaze behaviours. We use two modes of controlling gaze with saliency maps as input to develop a novel method which can cover a wide range of human gaze behaviours.
20
+
21
+ Thus far, gaze behaviours in crowd simulations have been largely absent or homogeneous. A robust gaze model requires more than just saliency maps. Once a saliency map has been generated, how do we determine which targets to look at, the order in which they should be gazed at, and the duration of the fixation, in such a way that the resulting behaviour and animation is robust and convincing, and also in a way that can be adjusted depending on the situation? This work proposes two models that provide an authorable framework for designing a diverse set of gaze behaviours that can be adjusted on a per-agent basis, promoting heterogeneity in crowd simulation gaze behaviours.
22
+
23
+ Our contributions are as follows. First, we present a particle gaze model that uses a potential gradient field to drive gaze towards salient regions of the saliency map. Second, we present a probabilistic saccade model that chooses targets from the saliency map probabilistically and executes quick saccades, and is capable of making microsaccades by then choosing fixation points based on saliency in the localized region of the target. Third, we evaluate our particle model against pyStar-FC, a notable multi-saccade generator, and demonstrate that our model can be tuned to a high degree of similarity with other models. Finally, we compare our two models against each other qualitatively.
24
+
25
+ ## 2 HUMAN GAZE
26
+
27
+ Human gaze is a complex topic, weaving between physiology and psychology. Consequently, a model of gaze that neglects either aspect will be woefully incomplete. One of the core points of this work is to accurately replicate the suite of human gaze movements along with their subtle nuances, which involves understanding the mechanics, limitations and strategies of how people look at things. This is discussed in more detail in [4].
28
+
29
+ ### 2.1 Gaze Movements
30
+
31
+ Human eye movements have been the subject of intense study for many decades. Over this time, eye movements have been classified into seven categories. The standard set of eye movements is (from [36]):
32
+
33
+ - Saccade: voluntary jump-like movements that move the retina from one point in the visual field to another;
34
+
35
+ - Microsaccades: small, jerk-like, eye movements, similar to miniature versions of voluntary saccades, with amplitudes from 2 to 120 arcminutes;
36
+
37
+ - Vestibular-Ocular Reflex: these stabilize the visual image on the retina by causing compensatory changes in eye position as the head moves;
38
+
39
+ - Optokinetic Nystagmus: this stabilizes gaze during sustained, low frequency image rotations at constant velocity;
40
+
41
+ - Smooth Pursuit: these are voluntary eye movements that track moving stimuli;
42
+
43
+ - Vergence: these are coordinated movements of both eyes, converging for objects moving towards and diverging for objects moving away from the eyes;
44
+
45
+ - Torsion: coordinated rotation of the eyes around the optical axis, dependent on head tilt and eye elevation.
46
+
47
+ There are quite a few different types of eye movements, each with its complexities and implications for human gaze. For the sake of this work, we focus on how to model saccades, microsaccades and smooth pursuits. The vestibular-ocular reflex is responsible for stabilizing the visual image as the head moves. In addition to eye movements, head movements are also a crucial part of gaze; we label the set of eye and head movements as gaze movements. Due to their slower nature, head movements tend to be less categorized. As a general rule, humans tend to align their heads with what they are looking at. According to [25], this is because a discrepancy between the head and eye directions causes interference in visual processing, as well as a degradation in accuracy for localizing attentional focus and hand-eye coordination. Head movements are less erratic than saccades, as they would otherwise cause strain on the human neck. One study [5] found that head movement duration can range between 200-800 ms when a series of saccades make up a gaze shift, with larger head rotation speeds for larger gaze shifts. In contrast, a single saccade-fixation action requires about 200 ms. In [6], a saccade took just under 200 ms while a head movement took just under 450 ms to complete in a single trial, suggesting that head movements are generally slower than saccades. Most natural gaze shifts utilize a combination of saccades and head movements, with head rotations typically following the eyes with a delay of 20-50 ms [32]. A model which aims to emulate human gaze should be able to parameterize and replicate these types of gaze movements, or at least a sufficient subset of them. The larger problem with generalizing a control structure, however, is that human gaze behaviour tends to be very diverse and idiosyncratic [32]. The selection of gaze targets is drawn from attention and deliberate intent, which then informs the gaze. For example, a slow-moving object of interest in view will elicit a smooth pursuit. However, if the target is moving too fast, smooth pursuit is no longer possible and the human visual system will resort to "catch-up" saccades to keep track of the object. A model of gaze should be able to generate a range of plausible eye movements given knowledge or a map of how the given agent is attending to their world.
48
+
49
+ ### 2.2 Memory and Inhibition of Return (IOR)
50
+
51
+ Inhibition of Return (IOR) is described by [15] as a delayed response to stimuli in a peripheral location which was previously attended to or looked at. Originally found in [30], and followed up on and defined in [31], IOR's function appears to be orienting gaze towards novel locations, which facilitates foraging and other search behaviours. This is fairly intuitive: if you were searching your office for a specific item, it would make sense to avoid searching where you have already looked. Alternatively, if you were just trying to gather information about your environment, the same mechanism aids in information gathering. IOR typically appears in the literature when the stimulus event is not task-relevant or there is no task given to the observer [15]. When test subjects were tasked with making whichever saccadic movements seemed most comfortable after viewing a brief stimulus, they most often would look away from the location of the stimulus. [15, 16] found that across multiple studies IOR appeared to be encoded in environmental coordinates rather than retinal coordinates. This effect appears in the early IOR literature [30, 31]. Further studies have also shown that in some instances IOR appears to be encoded on an object basis [1, 8, 34, 35]. Both environmental location and object attachment as IOR encodings have strong experimental evidence to support them. The question becomes: in which cases does either occur? [15] suggests that this change in encoding depends on contextual factors such as whether the observer is moving, whether objects in the view are moving, and what the intent or task of the observer is. In early studies, IOR appeared to be related to a reluctance of the motor response to focus on particular locations, not an inhibition or suppression of attention. However, studies have since found IOR to occur in spatial tasks as well, not just stimulus-response tasks. These findings have shifted the general consensus to the view that IOR does indeed occur on an attentional level as well as in the oculomotor response. The reasoning again appears to be contextual: for example, the type of stimuli, as well as the difficulty in discriminating stimuli within an observer's view, affects the introduction of IOR on the attentional level. The presence of IOR on attention is further supported by findings that IOR also appears in auditory [23, 24, 33] and tactile [34] modes. Results are consistent in demonstrating that IOR's effect is to inhibit responses typically associated with stimuli. Narrowing down how IOR mechanisms function is a difficult task affected by many factors. Studies have found that IOR typically takes between 100 ms and 200 ms of cued saccade fixation to kick in, which aligns with the time between saccades, which typically have latencies of 200-250 ms [7]. The effects can last several seconds; however, this can easily be affected by changes in the scene or the task of the observer.
52
+
53
+ The multitude of open questions, as well as these contextual changes, makes it difficult to define an inhibition of return mechanism which can be used as part of a gaze control system. Factors like the environment, agent factors, intent, and task all need to be taken into account to decide, for example, what kind of encoding to use, not to mention the open questions on specific mechanisms within IOR. There are many valid ways to implement IOR; in this paper we focus on one subset of factors and contexts and, based on the previously mentioned literature, suggest an IOR mechanism contingent on it. We also hope that implementing gaze control systems grounded in the attention and vision literature opens up possibilities to explore many of these open, as-yet-unexplained challenges.
54
+
55
+ ## 3 Related Works
56
+
57
+ To create a full pipeline modelling the gaze of an agent requires first defining what to look at and then how to look at it. Our work uses saliency to capture a visual representation of what/where an agent may look. Deciding how to use saliency information to generate fixations or eye movements is an area of ongoing study. Though the field is mostly saturated with models for predicting fixations within 2D images, we draw inspiration from other works in this area and apply their concepts to our problems for 3D characters in a dynamic simulation.
58
+
59
+ ### 3.1 Saliency
60
+
61
+ Saliency models attempt to represent what is important within a field of view, typically concerning human visual processing. The most common form of representing this is the saliency map, a 2D image which describes which regions within a field of view are "salient". In this sense, saliency is usually interpreted as the probability that a human observer will look in a particular location. Rule-based models such as [11]'s originative work construct saliency maps based on features like colour, contrast, location, etc. More recent deep learning models like [12] and [20] aim to emulate human saliency specifically by training on human scan-path data over sets of images. However, the main issue with these models is the inherent biases within their datasets, and, in the case of human simulation, they are prohibitively slow to use in real-time simulation. Virtual saliency models are aimed at implementing saliency specifically for the purposes of human or agent simulation. For example, it is possible to construct a model of saliency from a simulation scene database and assign scores to objects within an agent's view [26]. This is a simple and effective approach; however, it is limited in its uses outside of simulating visually believable gaze animations. To go beyond these limitations, it is possible to use a rule-based model for generating saliency maps (parametric saliency maps) in real time during a simulation, using information embedded in the scene graph, localized per observer [17]. The advantage of this approach is bringing saliency maps to real-time simulation, which means vision-based approaches to gaze control and scene understanding are possible. In follow-up work, the parameters of the parametric saliency map were learned by minimizing the output difference from state-of-the-art deep saliency models on a virtual dataset [18]. Work has shown that visual attention is guided by features depending on the task, and that pre-attentive features like colour, luminance, motion, orientation, depth, and size are all key elements of visual attention [39]. All of these can be compactly encoded into parametric saliency maps, which is why they are an efficient representation of pre-attentive processing for attentive tasks like fixations. Similarly, work has shown that bottom-up features (stimuli) guide attention under natural conditions, for example, simple undirected gaze with no intent or goal [27].
62
+
63
+ ### 3.2 Fixation Prediction
64
+
65
+ Many findings summarized in [40] conclude that saccadic selection avoids areas of little structure within an image. When compared with random fixation point selection on datasets of images, regions chosen by actual fixation locations have consistently higher signal variance than those chosen by random selection. [41] found the variance ratio of real vs. random fixations, $\sigma_{eye}^{2}/\sigma_{rand}^{2}$, to be around 1.35. Active fixation prediction from [37] aims to generate a temporal series of fixation locations in an image, which can be used to construct scan paths. They accomplish this through a tiered saliency approach, blending a coarse feature map in the periphery with a high-detail saliency map located at the point of fixation. Combining this with a temporal inhibition of return (IOR) mechanism, they are able to generate very plausible scan paths. Notable takeaways from this approach are the importance of selective suppression of attention or saliency in the periphery, combined with some mode of memory to implement inhibition of return, which, according to [16], is consistently found in studies of fixations and saccadic eye movements. The recent DeepGaze III model from [19] trained a deep neural network to predict and generate scan paths and fixations from fixation density maps (i.e., saliency) for free viewing of natural images. The model generally outperformed other similar models (such as the previously mentioned STAR-FC) in various statistical measures on state-of-the-art datasets. The model is particularly interesting due to its modular architecture, which allowed the authors to conduct ablation studies to quantify the effects and relevance of input data. It was found that scene content has a much greater influence on fixation prediction than previous scan-path history. As noted by Tsotsos, one key limitation is the static nature of images and the fact that shifting of gaze does not affect the image. Key challenges we address are how to implement inhibition of return given a dynamic environment, agent position and agent gaze, as well as how to select fixation points.
66
+
67
+ The general problem with all these approaches is their focus on free viewing of static images. That is useful for trying to predict how someone may look at an image; however, as noted above, humans do not see in 2D static images. Human visual systems contend with stimulus changes from dynamic environments as well as egocentric effects when gaze movement occurs (i.e., changing where you look completely changes the information available to your vision). The pursuit of fixation prediction in active-vision applications, such as simulation or robotics, must contend with temporally changing environments, changes in agent position, changes in agent gaze orientation, and spatial-temporal memory.
68
+
69
+ ### 3.3 Gaze Control
70
+
71
+ One of the closest implementations to our approach is [29], where the Itti-Koch-Niebur (IKN) model [11] is used to generate saliency maps from the perspective of a virtual agent. These were used to determine which objects within view would be 'salient' and to queue them as targets in the scene database. They also implemented a form of memory where agents would keep track of scene objects that they have observed. The spirit of their work was 'sensory honesty', in trying to use as little simulation knowledge as possible. In the same vein, our work shares this goal but attempts to take it further by including no information about the transforms of objects in the scene database in gaze, having it driven entirely by visual stimulus. The most significant limitation of their approach was the lack of a top-down attention component. This is addressed by our inclusion of parametric saliency maps from [17]. The benefit of using saliency maps is that the processing time is limited only by the cost of the attention model and the rendering pipeline. Another limitation is simply that humans don't have a scene database to draw information from. Approaching the problem of gaze and attention from a visual stimulus-driven standpoint opens the door for more grounded modelling of virtual humans. More complex, holistic models for automating gaze behaviour have been worked on for over two decades, in the form of cognitive models of attention and intent which form a high-level controller [3, 14, 22]. These models, rather interestingly, attempt to join ideas of task relevance and action to inform gaze movements. This is an often-overlooked factor, despite environmental conditions impacting visual understanding of the environment, which in turn impacts general locomotion and movement, such as the increase in foot clearance on steps under different lighting levels [10]. Our proposed approach poses a simpler parametric framework for authoring and generating gaze behaviours in a way which compartmentalizes attention and intent away from control. Our work fits in as a link between the vision-based approaches, like [29], and high-level control structures, such as [3, 14, 22]. Other pseudo-saliency-driven gaze approaches do not use visual stimulus as the input control for gaze, instead explicitly targeting transforms of objects within the scene [2, 26]. These approaches create reasonably believable procedural gaze animations; however, they are limited in scale: as a scene becomes increasingly complex, the computational cost of such gaze models will increase as well.
72
+
73
+ Gaze behaviour modelling is not only important in real-time applications but for a variety of purposes. For example, gaze behaviour can be inferred from motion capture data and automatically integrated into animation as done in [28].
74
+
75
+ A recent paper proposed a real-time method for driving gaze behaviour using a multi-layered saliency approach similar to ours [9], but it does not take into account 3D information from the scene, such as the velocity of agents, and its customization maps seem to be created for an entire viewpoint rather than for individual objects, so new customization maps would have to be made for each viewing direction of an object, whereas our method allows semantic masking that is attached to the object and works for any viewpoint. Additionally, the use of an ML saliency model limits the customizability of the saliency maps, whereas our method uses PSM [17], which provides great flexibility and authorability.
76
+
77
+ ![01963e0b-e5cc-7456-af21-037726a7e985_3_163_146_726_235_0.jpg](images/01963e0b-e5cc-7456-af21-037726a7e985_3_163_146_726_235_0.jpg)
78
+
79
+ Figure 2: Examples of generated saliency maps from the perspective of an agent walking through a simulated urban crowd, using PSM weights specified in [18].
80
+
81
+ ## 4 METHODS
82
+
83
+ Each method presented takes as input a square, grayscale image representing the saliency of an agent's view at that time, and then outputs a new orientation and the speed at which to interpolate to it from the current orientation. Once the new target orientation is reached, the process is repeated. Each method is designed to be simple, yet capable of plausibly generating different types of gaze movements. At the same time, they are agnostic to top-down attention which is instead encoded in saliency maps. Through the combination of saliency and the control parameters for the gaze-control methods, a wide range of intentional and passive gaze behaviours can be modelled.
84
+
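+ As a point of reference, the shared interface implied by this description can be sketched as follows. This is our own illustrative framing (the class and method names are not from the paper); it only pins down the inputs and outputs described above.
+
+ ```python
+ import numpy as np
+
+ class GazeController:
+     """Common shape of the gaze-control methods in this section: consume
+     the current view's saliency map, emit a target orientation and the
+     speed at which to interpolate toward it."""
+
+     def update(self, saliency: np.ndarray) -> tuple[np.ndarray, float]:
+         """saliency: square grayscale image of the agent's current view.
+         Returns (target_orientation, speed); the caller interpolates the
+         view toward the target, then calls update() again on arrival."""
+         raise NotImplementedError
+ ```
+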
85
+ We use the Predictive Avoidance Model (PAM) [13] for agent navigation, which senses obstacles and neighbours within some field of view and produces piece-wise predicted repulsive forces to avoid them. Our gaze behaviour models change the center of the field of view, which affects the neighbours and obstacles avoided. Further selecting avoidance targets based on saliency is planned future work.
86
+
87
+ ### 4.1 Saliency Map Generation
88
+
89
+ We utilize the parametric saliency maps (PSMs) method from [18]. This allows for saliency maps to be easily generated in real-time for virtual agents. PSM is a compact way of encoding pre-attentive and top-down factors. Parameters can be easily adjusted to suit different attentive loads. The saliency of an object from the perspective of an observer is computed from the combination of weighted parameters,
90
+
91
+ $$
+ S = W \cdot \left( w_d S_d + w_F S_F + w_v S_v + w_R S_R + w_I S_I \right) \cdot \left( W_M S_M \right) \cdot \left( W_A S_A \right) \tag{1}
+ $$
96
+
97
+ The values of the weights are set by the observer; the parameter values come from the objects in the scene. For example, the interestingness factor $S_I$ is an intrinsic value of an object/character. This is an effective way to generate saliency maps in a simulation and change the attentive factors as needed, either globally through the factor values, or on a per-agent basis through the factor weights. A Gaussian blur is applied afterwards to smooth out hard edges.
98
+
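+ As a minimal sketch of how Eq. 1 can be applied per object and rasterized into a map, consider the following. The factor key names, the dict-based interface, and the mask-based rasterization are our own assumptions (only the interestingness factor $S_I$ is described above); this is not the PSM implementation from [17, 18].
+
+ ```python
+ import numpy as np
+ from scipy.ndimage import gaussian_filter
+
+ def object_saliency(w, S):
+     """Eq. 1 for a single object: w holds the observer's weights, S the
+     object's factor values; the keys mirror the subscripts in Eq. 1."""
+     pre_attentive = sum(w[k] * S[k] for k in ("d", "F", "v", "R", "I"))
+     return w["W"] * pre_attentive * (w["M"] * S["M"]) * (w["A"] * S["A"])
+
+ def rasterize_saliency(masks, scores, sigma=2.0):
+     """Paint each object's score into its screen-space mask, then apply
+     the Gaussian blur mentioned above to smooth out hard edges."""
+     view = np.zeros_like(next(iter(masks.values())), dtype=float)
+     for obj_id, mask in masks.items():
+         view = np.maximum(view, mask * scores[obj_id])
+     return gaussian_filter(view, sigma=sigma)
+ ```
+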
99
+ ### 4.2 Particle Model
100
+
101
+ We now introduce the particle model for saliency-driven gaze control. This model treats the center of gaze as a particle which is acted on by driving forces. By imagining the center of gaze as a particle in a potential field we can use equations of motion to describe how it moves. The potential field comes from the saliency of what the agent is seeing.
102
+
103
+ #### 4.2.1 Particle Update
104
+
105
+ The point which lies in the center of view (from a virtual camera) can be imagined as a point on the 3D viewing sphere around an agent. Moving this point around the sphere is equivalent to changing the direction in which an agent is looking. Treating this point as a particle, gaze "forces" can be applied to it which change the direction of gaze.
106
+
107
+ For a given discrete time step $t$, an agent's gaze state can be described by $\mathbf{G}_t = (\theta, \phi)$, which represents a point in spherical space for a fixed radius, where $(0, 0)$ is the natural, forward-facing orientation. Given the current view's saliency map $S_t$, a potential field $V(\mathbf{G})$ is defined. By interpreting points of high saliency as potential wells in $V$, following the gradient will drive the gaze particle into highly salient regions. We can formulate the motion of the particle as,
108
+
109
+ $$
110
+ \ddot{\mathbf{G}} = -\overrightarrow{\nabla} V - k_d \dot{\mathbf{G}} \tag{2}
111
+ $$
112
+
113
+ where $-\overrightarrow{\nabla} V$ is the force applied by the potential to the gaze particle, based on what the agent is currently seeing in the current saliency map $S_t$, and the term $-k_d \dot{\mathbf{G}}_t$ represents damping with coefficient $k_d$. The algorithm to update the position of the particle for step size $\lambda$ is given by,
114
+
115
+ $$
+ \begin{aligned}
+ \ddot{\mathbf{G}}_t &= -\overrightarrow{\nabla} V(\mathbf{G}_t) - k_d \dot{\mathbf{G}}_t \\
+ \dot{\mathbf{G}}_{t+1} &= \dot{\mathbf{G}}_t + \lambda \cdot \ddot{\mathbf{G}}_t \\
+ \mathbf{G}_{t+1} &= \mathbf{G}_t + \lambda \cdot \dot{\mathbf{G}}_{t+1}
+ \end{aligned} \tag{3}
+ $$
126
+
127
+ Additionally, we can include a noise term $A \cdot \mathbf{z}_t$, where $\mathbf{z}_t \in [-1, 1]^2$ and $A$ is the amplitude, in the final position update, which gives added flexibility to model more complex gaze movements. The final update is then,
128
+
129
+ $$
130
+ \mathbf{G}_{t+1} = \mathbf{G}_t + \lambda \cdot \dot{\mathbf{G}}_{t+1} + A \cdot \mathbf{z}_t \tag{4}
131
+ $$
132
+
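+ For reference, a minimal sketch of this semi-implicit Euler update is shown below. The function name and signature are our own framing, under the assumption that $\mathbf{G}$ and $\dot{\mathbf{G}}$ are NumPy 2-vectors in $(\theta, \phi)$ and that the gradient has already been sampled at the center of view.
+
+ ```python
+ import numpy as np
+
+ def particle_step(G, G_dot, grad_V, k_d, lam, noise_amp=0.0, rng=None):
+     """One update of the gaze particle following Eqs. 2-4."""
+     G_ddot = -grad_V - k_d * G_dot                # Eq. 2: potential + damping
+     G_dot = G_dot + lam * G_ddot                  # Eq. 3: velocity update
+     z = (rng or np.random.default_rng()).uniform(-1.0, 1.0, size=2)
+     G = G + lam * G_dot + noise_amp * z           # Eq. 4: position + noise
+     return G, G_dot
+ ```
+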
133
+ An important consideration then is how to construct the potential fields from the saliency maps. Looking at examples in Fig. 2, one problem is that in most saliency maps there are large regions of little or constant saliency. This presents a problem because there would be no gradient in these regions. Another thing to consider is that highly salient stimuli should draw gaze towards them regardless of where they are in the visual field. Of course, the method should be computationally efficient in order to scale to large groups of agents. Calculating the potential field from an $n \times n$ image could be very costly, especially in scenarios with many agents. The solution we chose is to use a parametric surface to model the potential by sampling from the saliency map. Cubic B-splines have useful properties which make them very effective and efficient for this task. Assuming an appropriately chosen number of control points, a cubic B-spline surface will have a non-zero gradient in almost all regions of the space, as well as being very fast to compute. Sampling from the saliency map, the heights of control points on a spline surface can be set, giving a reasonable approximation of a potential field. $\overrightarrow{\nabla} V(\mathbf{G})$ is then the gradient of the surface with respect to the $x$-$y$ plane.
134
+
135
+ The max-pool and average-pool algorithms are commonly used in computer vision to downscale images into a lower-resolution space. We use average pooling to pool values from a saliency map into the control points of a spline surface. For a lattice of $m \times m$ control points, the saliency map is divided into $m \times m$ windows. The heights of the control points are set to the negated result of pooling each window, with the maximum depth being -1. At each time step $t$ the control points of the surface are set, giving the potential field. Fig. 3 shows a simple example for an $m = 7$ surface. Since the gaze particle is always at the center of the visual field, the gradient is always sampled at the center of the potential field as well. Following Eq. 3, the gaze particle's position on the viewing sphere is updated, changing the point of view.
136
+
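+ A small sketch of this pooling step, using SciPy's RectBivariateSpline as a stand-in surface, is shown below. Note the stand-in interpolates through the pooled heights rather than treating them as true B-spline control points, and the saliency values are assumed to lie in [0, 1], so this approximates the surface described above rather than reproducing the exact construction.
+
+ ```python
+ import numpy as np
+ from scipy.interpolate import RectBivariateSpline
+
+ def build_potential(saliency, m=7):
+     """Average-pool an n x n saliency map into an m x m height lattice
+     (salient windows become wells of depth up to -1) and fit a cubic
+     surface approximating the potential V(x, y) on the unit square."""
+     n = saliency.shape[0]
+     edges = np.linspace(0, n, m + 1).astype(int)
+     heights = np.array([[-saliency[edges[i]:edges[i + 1],
+                                    edges[j]:edges[j + 1]].mean()
+                          for j in range(m)] for i in range(m)])
+     grid = np.linspace(0.0, 1.0, m)
+     return RectBivariateSpline(grid, grid, heights, kx=3, ky=3)
+
+ def grad_at_center(surface):
+     """Gradient of the potential sampled at the gaze particle, which
+     always sits at the center of the view."""
+     return np.array([surface.ev(0.5, 0.5, dx=1), surface.ev(0.5, 0.5, dy=1)])
+ ```
+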
137
+ We chose to use a 3D spline surface over other traditional 2D methods because it allows us to model gaze control as a physics problem. By adjusting the parameters of the spline surface, we can get a continuous gradient without the issue of gradient dead zones.
138
+
139
+ ![01963e0b-e5cc-7456-af21-037726a7e985_4_173_349_625_521_0.jpg](images/01963e0b-e5cc-7456-af21-037726a7e985_4_173_349_625_521_0.jpg)
140
+
141
+ Figure 3: Spline surface representing the potential field. Values from the saliency map are pooled into control points corresponding to quadrants.
142
+
143
+ ![01963e0b-e5cc-7456-af21-037726a7e985_4_161_1348_689_434_0.jpg](images/01963e0b-e5cc-7456-af21-037726a7e985_4_161_1348_689_434_0.jpg)
144
+
145
+ Figure 4: Projection of the environment onto the agent's view. $\overrightarrow{\nabla} V$ is the gradient at the center of the potential field. For small values, $\overrightarrow{\nabla} V \simeq (\Delta\theta, \Delta\phi)$, where $(\Delta\theta, \Delta\phi)$ are the updates to the current camera orientation. This moves the view until the center is in a local minimum (typically corresponding to the center of an object of interest).
146
+
147
+ Algorithm 1 Particle Gaze Model
+
+ ```
+ STATE ← search
+ G ← (0.5, 0.5)                     ▷ center of viewport
+ G_dot ← (0, 0)
+ while true do
+     V ← SetPotential(S_t)
+     G_ddot ← −∇V(G) − k_d · G_dot
+     if STATE == search then
+         λ ← λ_search
+         if FixationDetected() then
+             STATE ← fixation
+         end if
+     else if STATE == fixation then
+         λ ← λ_fixation
+         if fixation_time > τ_fixation then
+             STATE ← search
+         end if
+     end if
+     G_dot ← G_dot + λ · G_ddot
+     G ← G + λ · G_dot
+ end while
+ ```
192
+
193
+ #### 4.2.2 Control
194
+
195
+ The primary parameters for control are the step size $\lambda$ and the damping coefficient $k_d$. A large step size will cause the view to move quickly through the visual field but will struggle to stay on target; a small step size will track targets excellently once fixated but will struggle to move to new targets. For this, we propose a two-state system for varying the behaviour of the particle's movement. In the search state, the step size is set to $\lambda_{\text{search}}$. The gaze is free to move around and will be drawn in by salient regions in the view. As the particle moves into a potential well, the gradient will get smaller. At this point, there needs to be some definition for detecting a fixation, which should work regardless of motion, whether egocentric or by the target object. We define a simple rule which measures the average gradient of the potential within some temporal window. If the average gradient has dropped below a threshold, then a fixation has occurred and the state is changed. In this state, the step size is set to $\lambda_{\text{fixation}}$, and the state lasts for $\tau_{\text{fixation}}$ seconds.
196
+
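+ The fixation-detection rule can be sketched as follows; the window length and threshold are illustrative values, not ones prescribed above.
+
+ ```python
+ from collections import deque
+ import numpy as np
+
+ class FixationDetector:
+     """Declare a fixation when the average gradient magnitude over a
+     temporal window drops below a threshold."""
+     def __init__(self, window: int = 10, threshold: float = 0.05):
+         self.history = deque(maxlen=window)  # recent |grad V| samples
+         self.threshold = threshold
+
+     def update(self, grad: np.ndarray) -> bool:
+         self.history.append(float(np.linalg.norm(grad)))
+         if len(self.history) < self.history.maxlen:
+             return False  # not enough samples in the window yet
+         return float(np.mean(self.history)) < self.threshold
+ ```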
197
+ After the fixation time, the saliency of the target would still affect the potential field, so it is important to implement an inhibition of return mechanism to prevent getting stuck on one target. With the parametric saliency maps we use, the saliency of targets under the particle can be decayed. This simple rule allows the particle to move on to new targets naturally, encoding object-based IOR. A good default is a decay time of 1-2 seconds for general searching/foraging gaze behaviour; however, to accurately replicate specific gaze behaviours this value will likely need to vary with context. There is also added complexity in how exactly saliency returns after it has decayed; we do not address this within the scope of this paper because, for general use, targets will be well out of view before IOR would wear off. If one imagines walking down a busy street, people, cars, signs, etc. will constantly come in and out of view, so we feel this rule is sufficient.
198
+
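+ As a sketch, the decay rule might look like the following; the text does not specify the decay curve, so a linear ramp over the 1-2 second default is assumed here.
+
+ ```python
+ def decay_saliency(s: float, dt: float, decay_time: float = 1.5) -> float:
+     """Linearly decay a fixated object's saliency to zero over
+     `decay_time` seconds (assumed curve; 1.5 s is within the 1-2 s default)."""
+     return max(0.0, s - dt / decay_time)
+
+ # Example: a fully salient target fades out over ~1.5 s of fixation at 60 fps.
+ s = 1.0
+ for _ in range(90):
+     s = decay_saliency(s, dt=1.0 / 60.0)
+ print(round(s, 3))  # 0.0
+ ```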
199
+ Several properties of the particle model lend themselves well to controlling head movements, as well as smooth-pursuit eye-movements. First is the naturally smooth motion which arises towards targets of high interest. Second, with a small number of control points (we recommend 7 for a degree-3 spline surface), this method naturally tends to align with general areas of high interest at low resolution. This often means looking at the "center of mass" of areas with highly salient targets, as opposed to specific individual elements, when there are many within view. If there are sparse, spaced-out objects of interest, the gaze instead aligns with the individual elements. Both behaviours arise without explicit programming. Setting the control surface to a higher resolution yields more spatial acuity, and thus the gaze will fall on narrower targets. Changing the step size $\lambda$ determines how fast the gaze moves towards targets, as well as how strongly those targets are tracked. Smooth-pursuit eye-movements can be elicited with a high-resolution control surface (we recommend 11 points for a degree-3 spline surface) and a larger ${\lambda }_{\text{fixation}}$ value. It is difficult to recommend any particular value for ${\lambda }_{\text{fixation}}$ because it scales with how the spline surface is defined (e.g., how steep its peaks are), as well as with how fast objects move across the field of view, which is limited by the frame rate of a given simulation. The length of smooth pursuits is contextual. For typical "search" behaviour, the length of fixations ${\tau }_{\text{fixation}}$ should average ${150} - {300}\mathrm{\;{ms}}$. For saccadic movements, a larger ${\lambda }_{\text{search}}$ value gives faster target acquisition. To emulate micro-saccades, we can perturb the final position using a noise term $A \cdot {z}_{t}$, where the amplitude corresponds to less than ${0.1}^{ \circ }$ of visual angle. This depends on camera projection parameters, but a small-angle approximation $A \simeq {0.1}^{ \circ }$ is acceptable. Additionally, to improve accuracy and avoid oscillations, multiple steps can be taken per simulation time step. Within the scope of this work we do not describe how to switch between saccades and smooth pursuits, largely because smooth pursuits are typically intentional actions and need to be specified by the author of the behaviour.
200
+
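+ The recommendations above can be collected into illustrative presets. The control-point counts, fixation length, noise amplitude, and decay time follow the text; the $\lambda$ values are placeholders that must be tuned per scene, since they scale with surface steepness and frame rate.
+
+ ```python
+ PARTICLE_PRESETS = {
+     "search_foraging": {
+         "control_points": 7,         # degree-3 spline; coarse "center of mass" gaze
+         "lambda_search": 0.5,        # placeholder: tune per scene
+         "lambda_fixation": 0.05,     # placeholder: tune per scene
+         "tau_fixation": 0.225,       # seconds; 150-300 ms average
+         "noise_amplitude_deg": 0.1,  # micro-saccade jitter, < 0.1 degrees
+         "ior_decay_time": 1.5,       # seconds; within the 1-2 s default
+     },
+     "smooth_pursuit": {
+         "control_points": 11,        # finer surface locks onto narrow targets
+         "lambda_search": 0.5,        # placeholder
+         "lambda_fixation": 0.2,      # placeholder: larger value strengthens tracking
+         "tau_fixation": 1.0,         # placeholder: pursuit length is contextual
+         "noise_amplitude_deg": 0.0,
+         "ior_decay_time": 2.0,
+     },
+ }
+ ```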
201
+ ### 4.3 Probabilistic Model
202
+
203
+ #### 4.3.1 Target Selection
204
+
205
+ In this section we introduce another method for saliency-driven gaze control, based largely on prior work in fixation prediction for static images. A saliency map can be thought of as a probability distribution over likely gaze targets. With this interpretation, fixation targets can be sampled from this distribution: for a probability distribution ${S}_{t}$, a random point $\mathbf{x} \sim {S}_{t}$ is drawn. Based on the projection parameters of the virtual camera, this point in the viewing image can be converted to an orientation, and the agent's view can then be rotated to match this orientation.
206
+
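+ A minimal sketch of this sampling step, treating the normalized saliency map as a discrete distribution over pixels (function names are ours):
+
+ ```python
+ import numpy as np
+
+ def sample_target(saliency: np.ndarray, rng=np.random.default_rng()):
+     """Draw a fixation target from the saliency map, returned as viewport
+     coordinates in [0, 1]^2 with (0.5, 0.5) at the view center.
+     Assumes the map is non-negative with at least one non-zero value."""
+     p = saliency.ravel().astype(np.float64)
+     p /= p.sum()                       # normalize to a probability distribution
+     idx = rng.choice(p.size, p=p)      # draw one pixel index
+     row, col = divmod(idx, saliency.shape[1])
+     h, w = saliency.shape
+     return ((col + 0.5) / w, (row + 0.5) / h)
+
+ x = sample_target(np.random.rand(64, 64))
+ ```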
207
+ #### 4.3.2 Control
208
+
209
+ Algorithm 2 Probabilistic Gaze Model
+
+ ---
+ Def: LookAt(point, time)
+ STATE ← search
+ G ← (0.5, 0.5)   ▷ Center of viewport
+ while true do
+   if STATE == search then
+     x ← SamplePoint(S_t)
+     LookAt(x, Δt_saccade)
+     STATE ← fixation   ▷ Wait until reached target
+   else if STATE == fixation then
+     S_W ← S_t.window(R_focus)
+     x ← SamplePoint(S_W)
+     if fixationTime > τ_fixation then
+       STATE ← search
+     else
+       LookAt(x, Δt_μsaccade)
+       Wait(τ_μfixation)   ▷ Hold for length of μ-fixation
+     end if
+   end if
+ end while
+ ---
252
+
253
+ Given a point $\mathbf{x} \sim {S}_{t}$ in viewport coordinates, a line can be drawn from the camera center through this point in world space. This vector represents an orientation ${G}^{\prime }$. The current camera orientation $G$ can then be interpolated to this new orientation over a desired time, so the speed of the rotation is determined by the interpolation time.
254
+
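+ A sketch of this step, assuming a simple pinhole projection with a square field of view (the FOV value is illustrative):
+
+ ```python
+ import math
+
+ def viewport_to_orientation(x: float, y: float, fov_deg: float = 60.0):
+     """Map viewport coordinates in [0, 1]^2 to (yaw, pitch) offsets in
+     radians, for a pinhole camera with a square field of view."""
+     half = math.tan(math.radians(fov_deg) / 2.0)
+     yaw = math.atan((2.0 * x - 1.0) * half)     # horizontal angular offset
+     pitch = math.atan((2.0 * y - 1.0) * half)   # vertical angular offset
+     return yaw, pitch
+
+ def look_at(current, target, t: float, duration: float):
+     """Interpolate (yaw, pitch) toward the target; t/duration sets the
+     fraction travelled, so the rotation speed follows from the duration."""
+     a = min(1.0, t / max(duration, 1e-9))       # guard zero-duration saccades
+     return tuple(c + a * (g - c) for c, g in zip(current, target))
+ ```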
255
+ We divide control into two primary states: search and fixation. In the search state, a point is sampled from the entire field of view, and the view is oriented to this target over $\Delta {t}_{\text{saccade}}$; the angular speed of the saccade is its (angular) amplitude divided by $\Delta {t}_{\text{saccade}}$. Once this target is picked, the state transitions to fixation control. Over a total time ${\tau }_{\text{fixation}}$, saliency outside a small foveated region of radius ${R}_{\text{focus}}$ is suppressed. Within this fixation, new points are drawn from the foveated region of interest as targets for micro-fixations. The view is interpolated to each such point over $\Delta {t}_{\mu \text{saccade}}$ and held there for time ${\tau }_{\mu \text{fixation}}$, at which point a new target is selected. This repeats over the entire fixation length. Once the fixation has concluded, the state returns to search. Each parameter can be set statically or dynamically depending on the desired behaviour.
256
+
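+ The windowing step ${S}_{W} \leftarrow {S}_{t}.\text{window}({R}_{\text{focus}})$ can be sketched as a circular mask in normalized viewport coordinates:
+
+ ```python
+ import numpy as np
+
+ def foveate(saliency: np.ndarray, center=(0.5, 0.5), r_focus: float = 0.1):
+     """Zero out saliency outside a disc of radius r_focus (viewport units)
+     around the current fixation, so micro-saccade targets stay local."""
+     h, w = saliency.shape
+     ys, xs = np.mgrid[0:h, 0:w]
+     # Distance of each pixel center from the fixation, in [0, 1]^2 coords.
+     dist = np.hypot((xs + 0.5) / w - center[0], (ys + 0.5) / h - center[1])
+     return np.where(dist <= r_focus, saliency, 0.0)
+
+ s_w = foveate(np.random.rand(64, 64), center=(0.4, 0.6), r_focus=0.1)
+ ```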
257
+ This method of control is designed to allow modeling of target-point selection for saccade and micro-saccade eye-movements. Depending on the level of detail desired, keeping $\Delta {t}_{\text{saccade}}$ and $\Delta {t}_{\mu \text{saccade}}$ constant will achieve the linear eye velocities expected for angular distances less than ${20}^{ \circ }$, which typically reach up to ${300}^{ \circ }/s$. However, for most applications it suffices to have a very small or zero travel time (i.e., instantaneous saccades). Changing the ${\tau }_{\text{fixation}}$ parameter affects how much searching is done in the visual field: veering from typically reported values of around ${100} - {200}\mathrm{\;{ms}}$ results in either rapid eye-darting for smaller values or more focused eye-movements for larger values. Shrinking or growing the focus region ${R}_{\text{focus}}$ will either restrict the space of micro-saccade movements (thus decreasing their amplitude) or allow more outside stimuli to draw micro-saccades, respectively; depending on the desired behaviour, either can be appropriate. For example, a character reading a book would have very infrequent saccades (large or infinite ${\tau }_{\text{fixation}}$), frequent micro-saccades (small ${\tau }_{\mu \text{fixation}}$), and a small radius of focus ${R}_{\text{focus}}$. Similarly to the particle method, we implement inhibition of return as a decay in object saliency.
258
+
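+ For instance, the reading example above might be expressed as the following preset; all numeric values are assumptions for illustration:
+
+ ```python
+ READING_PRESET = {
+     "tau_fixation": float("inf"),  # never saccade away from the page
+     "tau_micro_fixation": 0.05,    # seconds; assumed: frequent micro-saccades
+     "r_focus": 0.05,               # assumed: tight foveated radius (viewport units)
+     "dt_saccade": 0.0,             # instantaneous travel suffices here
+     "dt_micro_saccade": 0.0,
+ }
+ ```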
259
+ ## 5 RESULTS AND EVALUATION
260
+
261
+ Here we present evaluations of our models. First, it should be noted that the PSM saliency maps our models are predicated on have previously been evaluated against SALICON, a state-of-the-art machine learning saliency model, with high correspondence [18].
262
+
263
+ We compare our particle model fixations against pyStar-FC [38], a notable multi-saccade generator. The pyStar-FC model generates saccades for static images, so we constructed scenarios in our virtual environment where neither the viewing agent nor pedestrian agents are moving, in order to create static images for comparison. The gaze movement of the viewing agent can then be projected onto such a static image to show the scanpath of the agent using our model, and this scanpath is compared to the output of pyStar-FC on the same RGB image. We used mostly default parameters for pyStar-FC, with Deepgaze II with ICF as the saliency model [21]. The input viewing size was modified to match the field of view of our agents. Changing the IOR (inhibition of return) decay rate parameter in pyStar-FC did not produce significantly different results, so it was left at its default.
264
+
265
+ The results emphasize the authorability of our method: by adjusting our model parameters, our particle model can be tuned to match the output of pyStar-FC, or other models, with a high degree of similarity. Ten pairs of images were compared, five of which are shown in Fig. 5. It is worth noting that our use case was not the intended purpose of either Deepgaze II or pyStar-FC, so there may be biases in their output on our virtual images. The tendency of pyStar-FC to fixate on the neon signage is likely a result of bias in the datasets used to create these models, which presumably consisted of well-lit, non-virtual environments. Thus the output given by pyStar-FC may not be representative of what humans would look at while navigating this environment. Regardless, our aim in this comparison is simply to illustrate the authorability of our model.
266
+
267
+ Table 1: K nearest neighbour similarity scores for five trials comparing our method’s fixation points with pyStar-FC’s, where k=2.
268
+
269
+ | Trial | KNN Similarity |
+ | --- | --- |
+ | 1 | 0.965 |
+ | 2 | 0.976 |
+ | 3 | 0.989 |
+ | 4 | 0.989 |
+ | 5 | 0.988 |
270
+
271
+ We therefore compared fixations from our particle model to pyStar-FC fixations using k-nearest-neighbour similarity for the same five trials. The resulting KNN similarity scores, summarized in Table 1, were all above 0.95, indicating a high degree of similarity and showing that we are able to match other models closely. Matching real human gaze data should therefore be possible and is important planned future work. However, it should be emphasized that our goal is not to match human gaze data but to present a flexible and customizable system for authoring gaze behaviour in virtual agents, which we have shown.
272
+
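+ The exact KNN similarity formulation is not spelled out here; the sketch below shows one plausible variant for k=2, mapping mean nearest-neighbour distances between the two fixation sets (in normalized image coordinates) to a [0, 1] score. It is purely illustrative, not the evaluation code used above.
+
+ ```python
+ import numpy as np
+ from sklearn.neighbors import NearestNeighbors
+
+ def knn_similarity(ours: np.ndarray, theirs: np.ndarray, k: int = 2) -> float:
+     """For each of our fixations, find its k nearest fixations in the other
+     set and convert the mean distance into a similarity score (assumed map)."""
+     nn = NearestNeighbors(n_neighbors=k).fit(theirs)
+     dists, _ = nn.kneighbors(ours)          # shape: (len(ours), k)
+     return float(np.mean(1.0 - np.clip(dists, 0.0, 1.0)))
+
+ # Example with two nearby fixation sets in [0, 1]^2 coordinates.
+ a = np.array([[0.2, 0.3], [0.5, 0.5], [0.8, 0.7]])
+ b = np.array([[0.21, 0.31], [0.52, 0.48], [0.79, 0.72]])
+ print(knn_similarity(a, b))  # close to 1.0
+ ```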
273
+ We can also make some comparisons between the two models. Figure 6 shows target selection for both. The particle model drives gaze in the direction of the potential field gradient. The probabilistic model identifies potential gaze targets, highlighted with red circles, and chooses one probabilistically based on saliency at that location. Once the probabilistic model chooses a target, gaze is snapped to that location, similar to a human saccade. Additionally, the ability to perform microsaccades is one of the defining features of the probabilistic model: reducing the field of view once a target is selected produces a zoomed result from which microsaccade targets can then be selected. Figure 7 shows this zoomed effect in comparison with the particle model.
274
+
275
+ Our models also account for saliency decay. While an agent fixates on something in the scene, we perform a raycast in the fixation direction; when the ray hits the target, the saliency of that object for that viewer begins to decay, and it continues to decay over time while the target is fixated on. An example is shown in Figure 9, where two agents view the same man with different saliencies due to saliency decay. The decay rates for the two models differ in these examples, but they can easily be parameterized to produce different gaze behaviours, such as nervous eye movements versus a watchful gaze. While the decay rate for the probabilistic model is fast here, to encourage quick saccades, the particle model was set to a slower decay rate. More research is needed to determine optimal decay rates, which is important planned future work; we hypothesize that these values relate to the context and stylization of the behaviour. An in-depth statistical evaluation of our gaze models is also planned future work.
276
+
277
+ We also note that our method provides for multi-agent saliency evaluation, as shown in Figure 8. This affords complex scenes with a multiplicity of independent gaze controllers automatically driven by diverse scenes; that is, crowds respond naturally to the makeup of a scene, from signage to fellow pedestrians.
278
+
279
+ ## 6 DISCUSSION
280
+
281
+ The strength of this approach is that the user does not need to explicitly define gaze patterns, but instead only needs to define an agent's visual task or intent. One of the main principles of this work is creating control which adheres to the idea of sensory honesty. Prior works in the area of simulated gaze control have been able to create reasonably believable gaze movements for characters by utilizing information from the simulation itself, such as the scene database, to locate gaze targets and track their positions. The hope is that we can instead start to think of autonomous virtual humans, and how they actively view their environment, in terms such as their intentions, goals and knowledge. We could describe what they are attending to and what their visual task is without having to write explicit patterns for how they should then generate gaze movements. Perhaps the most obvious addition to our work is the definition of high-level control for generating saliency maps and appropriately selecting the correct control parameters. Saliency-Driven Gaze Control (SDGC) provides only one part of a full solution for generating plausible gaze movements. Ultimately, this requires thinking about how saliency (attention) should be defined and its interplay with an agent's intent. It fundamentally changes how we view and approach the gaze of virtual agents, from asking "what is this character looking at?" to instead asking "what is this character interested in, and what are they trying to do?". In practical terms this means deciding how to define saliency and deciding what kinds of gaze-movements to use. An obvious criticism is that, as a consequence, no general solution is offered which covers all or even a large number of gaze behaviours. Even so, our framework expands the capabilities of similar works like [29] by including a top-down pre-attentive component in the form of the parametric saliency maps from [17], which allows encoding things like novelty or task relevance directly into saliency. High-level controllers for automating attending behaviour, such as the extensive work of $\left\lbrack {3,{14},{22}}\right\rbrack$, could be combined with our SDGC approach to create a totalistic saliency-driven model which takes into account agent action and intent, and subsequently delegates to saliency generation and SDGC methods to generate the final gaze behaviours. This would also allow us to improve our implementation of inhibition of return, which currently does not address how this effect is modulated by the intentions of the viewer.
282
+
283
+ Our methods are sensitive to their model parameters, and to the parameters of PSM, which control the saliency map generation. Parameter sensitivity and tuning for PSM was described in [17, 18]. For the particle model, it is important to choose appropriate sampling points and gradient step sizes for the best results; large step sizes result in targets being missed, which can produce oscillations. The saliency decay rate, fixation duration, and fixation conditions should be chosen appropriately for the desired behaviour in both models. For example, larger decay rates and fixation durations and lower thresholds for triggering fixations produce quicker, darting gaze behaviours.
284
+
285
+ A limitation of both of our methods is that only the current view of the agent is considered; objects outside the agent's current field of view do not impact gaze behaviour. The saliency decay mechanism models some aspects of memory, since the decayed saliency is remembered even if the object leaves an agent's field of view and then re-enters it. A complete model, however, would include a model of memory that tracks recently seen objects and their relative positions, so that agents could look back at them directly even when they are outside the field of view. Matching our model parameters to real human gaze data remains important planned future work. However, we have illustrated that our model is highly flexible and customizable, and can be used to author a variety of virtual gaze behaviours.
286
+
287
+ ## 7 CONCLUSION
288
+
289
+ We presented two Saliency-Driven Gaze Control (SDGC) methods, the particle model and the probabilistic model, which, when combined with appropriately defined saliency (attention), are able to cover a wide range of well-studied and well-understood human gaze-movements. SDGC takes as input a real-time map of attention in an autonomous agent's visual field and generates gaze-movements. The two methods are able to elicit physiologically based head movements, smooth pursuits, saccades and microsaccades. For a defined visual task, we show that through the combination of parameterized visual attention and gaze-movements, appropriate gaze behaviours arise.
290
+
291
+ ![01963e0b-e5cc-7456-af21-037726a7e985_7_385_154_1024_1206_0.jpg](images/01963e0b-e5cc-7456-af21-037726a7e985_7_385_154_1024_1206_0.jpg)
292
+
293
+ Figure 5: For (a) an RGB image: (b) gaze heatmaps for our particle method overlaid on the RGB image, and a comparison of scanpath traces between (c) our method and (d) pyStar-FC.
294
+
295
+ ![01963e0b-e5cc-7456-af21-037726a7e985_7_159_1478_1542_541_0.jpg](images/01963e0b-e5cc-7456-af21-037726a7e985_7_159_1478_1542_541_0.jpg)
296
+
297
+ Figure 6: Comparison of target selection between the two models. Left: Particle model, which drives gaze in the direction of the potential field gradient. Right: Probabilistic model target selection. Red circles indicate potential target locations, which once selected will trigger a saccade.
298
+
299
+ ![01963e0b-e5cc-7456-af21-037726a7e985_8_173_319_742_357_0.jpg](images/01963e0b-e5cc-7456-af21-037726a7e985_8_173_319_742_357_0.jpg)
300
+
301
+ Figure 7: Left: Particle model saliency map. Right: Probabilistic model view of the same subjects. The Probabilistic model uses a reduced field of view to produce a zoomed effect for the purpose of facilitating microsaccades.
302
+
303
+ ![01963e0b-e5cc-7456-af21-037726a7e985_8_164_1172_692_582_0.jpg](images/01963e0b-e5cc-7456-af21-037726a7e985_8_164_1172_692_582_0.jpg)
304
+
305
+ Figure 8: Two agents walking while using the particle gaze model simultaneously. Top: RGB view from behind the agent. Bottom: Saliency map from the agent's POV. The small red line in the center of the saliency map indicates the current direction of the particle gradient.
306
+
307
+ ![01963e0b-e5cc-7456-af21-037726a7e985_8_941_145_693_583_0.jpg](images/01963e0b-e5cc-7456-af21-037726a7e985_8_941_145_693_583_0.jpg)
308
+
309
+ Figure 9: One agent walks behind another and both see the same man sitting on a bench. On the right the man's saliency is lower due to saliency decay during fixation. Top: RGB view from behind the agent so that the head orientation is visible. Bottom: Saliency map from the agent's POV using the particle model.
310
+
311
+ ## REFERENCES
312
+
313
+ [1] R. A. Abrams and R. S. Dobkin. Inhibition of return: effects of attentional cuing on eye movement latencies. Journal of Experimental Psychology: Human Perception and Performance, 20(3):467, 1994.
314
+
315
+ [2] U. Ağil and U. Güdükbay. A group-based approach for gaze behavior of virtual crowds incorporating personalities. Computer Animation and Virtual Worlds, 29(5):e1806, 2018.
316
+
317
+ [3] N. I. Badler, D. M. Chi, and S. Chopra. Virtual human animation based on movement observation and cognitive behavior models. In Proceedings Computer Animation 1999, pp. 128-137. IEEE, 1999.
318
+
319
+ [4] P. N. Caruana. Pseudo-saliency for human gaze simulation, 2022-09-29.
320
+
321
+ [5] Y. Fang, R. Nakashima, K. Matsumiya, I. Kuriki, and S. Shioiri. Eye-head coordination for visual cognitive processing. PLoS ONE, 10(3):e0121035, 2015.
322
+
323
+ [6] E. G. Freedman. Coordination of the eyes and head during visual orienting. Experimental brain research, 190(4):369-387, 2008.
324
+
325
+ [7] A. Fuchs. Saccadic and smooth pursuit eye movements in the monkey. The Journal of Physiology, 191(3):609, 1967.
326
+
327
+ [8] B. S. Gibson and H. Egeth. Inhibition of return to object-based and environment-based locations. Perception & Psychophysics, 55(3):323-339, 1994.
328
+
329
+ [9] I. Goudé, A. Bruckert, A.-H. Olivier, J. Pettré, R. Cozot, K. Bouatouch, M. Christie, and L. Hoyet. Real-time multi-map saliency-driven gaze behavior for non-conversational characters. IEEE Transactions on Visualization and Computer Graphics, 2023.
330
+
331
+ [10] K. A. Hamel, N. Okita, J. S. Higginson, and P. R. Cavanagh. Foot clearance during stair descent: effects of age and illumination. Gait & posture, 21(2):135-140, 2005.
332
+
333
+ [11] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on pattern analysis and machine intelligence, 20(11):1254-1259, 1998.
334
+
335
+ [12] M. Jiang, S. Huang, J. Duan, and Q. Zhao. Salicon: Saliency in context. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1072-1080, 2015.
336
+
337
+ [13] I. Karamouzas, P. Heil, P. Van Beek, and M. H. Overmars. A predictive collision avoidance model for pedestrian simulation. In International workshop on motion in games, pp. 41-52. Springer, 2009.
338
+
339
+ [14] S. C. Khullar and N. I. Badler. Where to look? automating attending behaviors of virtual human characters. Autonomous Agents and Multi-Agent Systems, 4(1):9-23, 2001.
342
+
343
+ [15] R. M. Klein. Inhibition of return. Trends in Cognitive Sciences, 4(4):138-147, 2000.
344
+
345
+ [16] R. M. Klein and J. Ivanoff. Inhibition of return. In Neurobiology of Attention, chap. 16, pp. 96-100. Elsevier Academic Press, 2005.
346
+
347
+ [17] M. Kremer, P. Caruana, B. Haworth, M. Kapadia, and P. Faloutsos. Psm: Parametric saliency maps for autonomous pedestrians. In Motion, Interaction and Games, pp. 1-7. 2021.
348
+
349
+ [18] M. Kremer, P. Caruana, B. Haworth, M. Kapadia, and P. Faloutsos. Automatic estimation of parametric saliency maps (psms) for autonomous pedestrians. Computers & Graphics, 2022.
350
+
351
+ [19] M. Kümmerer, M. Bethge, and T. S. Wallis. Deepgaze iii: Modeling free-viewing human scanpaths with deep learning. Journal of Vision, 22(5):7-7, 2022.
352
+
353
+ [20] M. Kümmerer, T. S. Wallis, and M. Bethge. Deepgaze ii: Reading fixations from deep features trained on object recognition. arXiv preprint arXiv:1610.01563, 2016.
354
+
355
+ [21] M. Kümmerer, T. S. Wallis, L. A. Gatys, and M. Bethge. Understanding low- and high-level contributions to fixation prediction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4789-4798, 2017.
356
+
357
+ [22] S. P. Lee. Facial animation system with realistic eye movement based on a cognitive model for virtual agents. University of Pennsylvania, 2002.
358
+
359
+ [23] J. J. McDonald and L. M. Ward. Spatial relevance determines facilitatory and inhibitory effects of auditory covert spatial orienting. Journal of Experimental Psychology: Human Perception and Performance, 25(5):1234, 1999.
360
+
361
+ [24] T. A. Mondor, L. M. Breau, and B. Milliken. Inhibitory processes in auditory selective attention: Evidence of location-based and frequency-based inhibition of return. Perception & Psychophysics, 60(2):296-302, 1998.
362
+
363
+ [25] R. Nakashima and S. Shioiri. Why do we move our head to look at an object in our peripheral region? lateral viewing interferes with attentive search. PLoS ONE, 9(3):e92284, 2014.
364
+
365
+ [26] O. Oyekoya, W. Steptoe, and A. Steed. A saliency-based method of simulating visual attention in virtual scenes. In Proceedings of the 16th ACM symposium on virtual reality software and technology, pp. 199-206, 2009.
366
+
367
+ [27] D. J. Parkhurst and E. Niebur. Stimulus-driven guidance of visual attention in natural scenes. In Neurobiology of Attention, chap. 39, pp. 240-245. Elsevier Academic Press, 2005.
368
+
369
+ [28] T. Pejsa, D. Rakita, B. Mutlu, and M. Gleicher. Authoring directed gaze for full-body motion capture. ACM Transactions on Graphics (TOG), 35(6):1-11, 2016.
370
+
371
+ [29] C. Peters and C. O'Sullivan. Bottom-up visual attention for virtual human animation. In Proceedings 11th IEEE International Workshop on Program Comprehension, pp. 111-117. IEEE, 2003.
372
+
373
+ [30] M. I. Posner, Y. Cohen, et al. Components of visual orienting. Attention and Performance X: Control of Language Processes, 32:531-556, 1984.
374
+
375
+ [31] M. I. Posner, R. D. Rafal, L. S. Choate, and J. Vaughan. Inhibition of return: Neural basis and function. Cognitive neuropsychology, 2(3):211-228, 1985.
376
+
377
+ [32] K. Ruhland, C. E. Peters, S. Andrist, J. B. Badler, N. I. Badler, M. Gleicher, B. Mutlu, and R. McDonnell. A review of eye gaze in virtual agents, social robotics and hci: Behaviour generation, user interaction and perception. In Computer Graphics Forum, vol. 34, pp. 299-326. Wiley Online Library, 2015.
378
+
379
+ [33] G. Tassinari and D. Campara. Consequences of covert orienting to non-informative stimuli of different modalities: A unitary mechanism? Neuropsychologia, 34(3):235-245, 1996.
380
+
381
+ [34] S. P. Tipper, J. Driver, and B. Weaver. Object-centred inhibition of return of visual attention. The Quarterly Journal of Experimental Psychology, 43(2):289-298, 1991.
382
+
383
+ [35] S. P. Tipper, B. Weaver, L. M. Jerreat, and A. L. Burak. Object-based and environment-based inhibition of return of visual attention. Journal of Experimental Psychology: Human Perception and Performance, 20(3):478, 1994.
386
+
387
+ [36] J. Tsotsos, I. Kotseruba, and C. Wloka. A focus on selection for fixation. Journal of Eye Movement Research, 9(5), 2016.
388
+
389
+ [37] C. Wloka, I. Kotseruba, and J. K. Tsotsos. Active fixation control to predict saccade sequences. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3184-3193, 2018.
390
+
391
+ [38] C. Wloka, I. Kotseruba, and J. K. Tsotsos. Saccade sequence prediction: Beyond static saliency maps. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
392
+
393
+ [39] J. M. Wolfe. Guidance of visual search by preattentive information. In Neurobiology of Attention, chap. 17, pp. 101-107. Elsevier Academic Press, 2005.
394
+
395
+ [40] C. Zetsche. Natural scene statistics and salient visual features. In Neurobiology of Attention, chap. 37, pp. 226-231. Elsevier Academic Press, 2005.
396
+
397
+ [41] C. Zetzsche, K. Schill, H. Deubel, G. Krieger, E. Umkehrer, and S. Beinlich. Investigation of a sensorimotor system for saccadic scene analysis: an integrated approach. In Proc. 5th Int. Conf. Simulation of Adaptive Behavior, vol. 5, pp. 120-126, 1998.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/x_MfBxtP2Y/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,320 @@
1
+ Saliency Driven Gaze Control for Autonomous Pedestrians
2
+
3
+ Category: Research
4
+
5
+ < g r a p h i c s >
6
+
7
+ Figure 1: Components of the proposed Particle Gaze method. A saliency map is generated in real time from the current agent view (Left). The values from this saliency map are used to set the Z values of a spline surface, representing a potential field $V\left( {x,y}\right)$ (Middle). The gradient of this potential field $- \overrightarrow{\nabla V}$ is used to move the center of gaze towards a minimum in the field.
8
+
9
+ § ABSTRACT
10
+
11
+ How and why an agent looks at its environment can inform its navigation, behaviour and interaction with the environment. A human agent's visual-motor system is complex and requires both an understanding of visual stimulus as well as adaptive methods to control and aim its gaze in accordance with goal-driven behaviour or intent. Drawing from observations and techniques in psychology, computer vision and human physiology, we present techniques to procedurally generate various types of gaze movements (head movements, saccades, microsaccades, and smooth pursuits) driven entirely by visual input in the form of saliency maps which represent pre-attentive processing of visual stimuli in order to replicate human gaze behaviour. Each method is designed to be agnostic to attention and cognitive processing, able to cover the nuances for each type of gaze movement, and desired intentional or passive behaviours. In combination with parametric saliency map generation, they serve as a foundation for modelling completely visually driven, procedural gaze in simulated human agents.
12
+
13
+ Index Terms: Computing methodologies-Agent / discrete model; Computing methodologies-Procedural animation
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ Modelling and simulating human gaze is a complicated endeavour. There are many approaches for estimating and approximating how a human agent may observe the world, most of which aim to replicate it to create believable-looking humans. These methods are effective and often take advantage of scene information from the simulation to calculate believable gaze patterns. However, if one's goal is to replicate gaze by starting not from the desired end result but from the basis of vision, then it is important to follow a guideline of 'sensory honesty': generating gaze movements in an agent-encapsulated manner, that is, without using information or data that a real human agent would not have. This work presents two methods for controlling gaze using only visual information as input. Specifically, saliency maps are used as input, representing the pre-attentive processing of visual stimuli by the human psycho-visual system.
18
+
19
+ There are many factors and complications to consider when presenting a model of human gaze. As such, we construct a framework for authoring a variety of gaze behaviours. We use two modes of controlling gaze with saliency maps as input to develop a novel method which can cover a wide range of human gaze behaviours.
20
+
21
+ Thus far, gaze behaviours in crowd simulations have been largely absent or homogeneous. A robust gaze model requires more than just saliency maps. Once a saliency map has been generated, how do we determine which targets to look at, the order in which they should be gazed at, and the duration of the fixation, in such a way that the resulting behaviour and animation is robust and convincing, and also in a way that can be adjusted depending on the situation? This work proposes two models that provide an authorable framework for designing a diverse set of gaze behaviours that can be adjusted on a per-agent basis, promoting heterogeneity in crowd simulation gaze behaviours.
22
+
23
+ Our contributions are as follows. First, we present a particle gaze model that uses a potential gradient field to drive gaze towards salient regions of the saliency map. Second, we present a probabilistic saccade model that chooses targets from the saliency map probabilistically and executes quick saccades, and that can make microsaccades by choosing subsequent fixation points based on saliency in the localized region of the target. Third, we evaluate our particle model against pyStar-FC, a notable multi-saccade generator, and demonstrate that our model can be tuned to a high degree of similarity with other models. Finally, we compare our two models against each other qualitatively.
24
+
25
+ § 2 HUMAN GAZE
26
+
27
+ Human gaze is a complex topic, weaving between physiology and psychology. Consequently, a model of gaze that neglects either aspect will be woefully incomplete. One of the core points of this work is to accurately replicate the suite of human gaze movements along with their subtle nuances, which involves understanding the mechanics, limitations and strategies of how people look at things. This is discussed in more detail in [4].
28
+
29
+ § 2.1 GAZE MOVEMENTS
30
+
31
+ Human eye movements have been the subject of intense study for many decades. Over this time eye movements have been classified into 7 categories. The standard set of eye movements are (from [36]):
32
+
33
+ * Saccade: voluntary jump-like movements that move the retina from one point in the visual field to another;
34
+
35
+ * Microsaccades: small, jerk-like, eye movements, similar to miniature versions of voluntary saccades, with amplitudes from 2 to 120 arcminutes;
36
+
37
+ * Vestibular-Ocular Reflex: these stabilize the visual image on the retina by causing compensatory changes in eye position as the head moves;
38
+
39
+ * Optokinetic Nystagmus: this stabilizes gaze during sustained, low frequency image rotations at constant velocity;
40
+
41
+ * Smooth Pursuit: these are voluntary eye movements that track moving stimuli;
42
+
43
+ * Vergence: these are coordinated movements of both eyes, converging for objects moving towards and diverging for objects moving away from the eyes;
44
+
45
+ * Torsion: coordinated rotation of the eyes around the optical axis, dependent on head tilt and eye elevation.
46
+
47
+ There are quite a few different types of eye movements, each with its complexities and implications on human gaze. For the sake of this work, we focus on how to model saccades, microsaccades and smooth pursuits. Vestibular-ocular reflex is responsible for stabilizing the visual image as the head moves. In addition to eye movements, head movements are also a crucial part of gaze. We label the set of eye and head movements as gaze movements. Due to the slower nature of head movements, these tend to be less categorized. As a general rule, humans tend to align their heads with what they are looking at. According to [25], this is because a discrepancy between the head and eye directions causes interference in visual processing, as well as a degradation in accuracy for localizing attentional focus and hand-eye coordination. Head movements are less erratic than saccades, as they would otherwise cause strain on the human neck. One study [5] found that head movement duration can range between ${200} - {800}\mathrm{\;{ms}}$ when a series of saccades make up a gaze shift, with larger head rotation speeds for larger gaze shifts. In contrast, a single saccade-fixation action requires about ${200}\mathrm{\;{ms}}$ . In [6], a saccade took just under ${200}\mathrm{\;{ms}}$ while a head movement took just under ${450}\mathrm{\;{ms}}$ to complete in a single trial, suggesting that head movements are generally slower than saccades. Most natural gaze shifts utilize a combination of saccades and head movements, with head rotations typically following the eyes with a delay of ${20} - {50}\mathrm{\;{ms}}$ [32]. A model which aims to emulate human gaze should be able to parameterize and replicate these types of gaze movements, or at least a sufficient subset of them. The main problem with generalizing a control structure, however, is that human gaze behaviour tends to be very diverse and idiosyncratic [32]. The selection of gaze targets is drawn from attention and deliberate intent, which then informs the gaze. For example, a slow-moving object of interest in view will elicit a smooth pursuit. However, if this target is moving too fast, smooth pursuit is no longer possible, and the human visual system will resort to "catch-up" saccades to keep track of the object. A model of gaze should be able to generate a range of plausible eye movements given knowledge or a map of how the given agent is attending to their world.
48
+
49
+ § 2.2 MEMORY AND INHIBITION OF RETURN (IOR)
50
+
51
+ Inhibition of Return (IOR) is described by [15] as a delayed response to stimuli in a peripheral location which was previously attended to or looked at. Originally found in [30], and followed up on and defined in [31], IOR's function appears to be orienting gaze towards novel locations, which facilitates foraging and other search behaviours. This is fairly intuitive, e.g. if you were searching your office for a specific item it would make sense to avoid searching where you have already looked. Alternatively, if you were just trying to gather information about your environment, the same mechanism aids in information gathering. IOR typically appears in the literature when the stimulus event is not task-relevant or there is no task given to the observer [15]. When test subjects were tasked with making saccadic movements which seemed most comfortable after viewing a brief stimulus, they most often would look away from the location of the stimulus. $\left\lbrack {{15},{16}}\right\rbrack$ found that across multiple studies it appeared that IOR is often encoded in environmental coordinates rather than retinal coordinates. This effect appears in the early IOR literature [30,31]. Further studies have also shown that in some instances IOR appears to be encoded on an object basis $\left\lbrack {1,8,{34},{35}}\right\rbrack$ . Both environmental location and object attachment as IOR encodings have strong experimental evidence to support them. The question becomes, in what cases does either occur? [15] suggests that this change in encoding occurs depending on contextual factors such as whether the observer is moving, whether objects in the view are moving, and what the intent or task of the observer is. In early studies, IOR appeared as related to a reluctance of motor response to focus on particular locations, not inhibiting or suppressing attention. However, studies have found IOR to occur in spatial tasks as well, not just stimulus-response. These findings have shifted the general consensus that IOR does indeed occur on an attentional level as well as in oculomotor response. The reasoning again appears to be contextual. For example, the type of stimuli, as well as the difficulty in discriminating stimuli within an observer's view, affects the introduction of IOR on the attentional level. The presence of IOR on attention is further supported by findings that IOR also appears in auditory $\left\lbrack {{23},{24},{33}}\right\rbrack$ and tactile $\left\lbrack {34}\right\rbrack$ modes. Results are consistent in demonstrating that IOR's effect is to inhibit responses typically associated with stimuli. Narrowing down how IOR mechanisms will function is a difficult task affected by many factors. Studies have generally found that IOR typically takes between ${100}\mathrm{\;{ms}}$ and ${200}\mathrm{\;{ms}}$ of cued saccade fixation to take effect, which aligns with the interval between saccades, which typically have latencies of 200-250 ms [7]. The effects can last several seconds; however, this can easily be affected by changes in the scene or task of the observer.
52
+
53
+ The multitude of open questions as well as contextual changes makes it difficult to define an inhibition of return mechanism which can be used as a part of a gaze control system. Factors like environment, agent factors, intent, and task all need to be taken into account to decide for example what kind of encoding to use. That's not to mention open questions on specific mechanisms within IOR. There are many valid ways to implement IOR, in this paper we try to focus on one subset of factors and contexts and suggest an IOR mechanism contingent on that based on the previously mentioned literature. It is the hope as well that implementing gaze control systems based on attention and vision literature opens up possibilities to explore many of the open challenges yet unexplained.
54
+
55
+ § 3 RELATED WORKS
56
+
57
+ To create a full pipeline modelling the gaze of an agent requires first defining what to look at and then how to look at it. Our work uses saliency to capture a visual representation of what/where an agent may look at. Deciding how to use saliency information to generate fixations or eye movements is an area of ongoing study. Though the field is mostly saturated with models for predicting fixations within $2\mathrm{D}$ images, we draw inspiration from other works in this area and apply concepts to our problems for $3\mathrm{D}$ characters in a dynamic simulation.
58
+
59
+ § 3.1 SALIENCY
60
+
61
+ Saliency models attempt to represent what is important within a field of view, typically concerning human visual processing. The most common representation is a saliency map, a 2D image which describes which regions within a field of view are "salient". In this sense, saliency is usually interpreted as the probability that a human observer will look in a particular location. Rule-based models such as the originative work of [11] construct saliency maps based on things like colours, contrast, location etc. More recent deep learning models like [12] and [20] aim to emulate human saliency specifically by training off of human scan-path data on sets of images. However, the main issue with these models is inherent biases within datasets, and in the case of human simulation, they are prohibitively slow to use in real-time simulation. Virtual saliency models are aimed at implementing saliency specifically for the purposes of human or agent simulation. For example, it is possible to construct a model of saliency from a simulation scene database and assign scores to objects within an agent's view [26]. This is a simple and effective approach; however, it is limited in its uses outside of simulating visually believable gaze animations. To go beyond these limitations it is possible to use a rule-based model for generating saliency maps (parametric saliency maps) in real-time during a simulation, using information embedded in the scene graph, localized per observer [17]. The advantage of this approach is bringing saliency maps to real-time simulation, which means vision-based approaches to gaze control and scene understanding are possible. In follow-up work, the parameters of the parametric saliency map were learned by minimizing the output difference from state-of-the-art deep saliency models on a virtual dataset [18]. Work has shown that visual attention is guided by features depending on the task, and that pre-attentive features like colour, luminance, motion, orientation, depth, and size are all key elements of visual attention [39]. All of these can be compactly encoded into parametric saliency maps, which is why they are an efficient representation of pre-attentive processing for attentive tasks like fixations. Similarly, work has shown that bottom-up features (stimuli) guide attention under natural conditions, for example, simple undirected gaze with no intent or goal [27].
62
+
63
+ § 3.2 FIXATION PREDICTION
64
+
65
+ Many findings summarized in [40] conclude that saccadic selection avoids areas of little or less structure within an image. When compared with random fixation point selection on datasets of images, regions chosen by actual fixation locations have consistently higher signal variance than random selection. [41] found the mean variance ratio ${\sigma }_{eye}^{2}/{\sigma }_{\text{rand}}^{2}$ of real to random fixations to be around 1.35. Active fixation prediction from [37] aims to generate a temporal series of fixation locations in an image which can be used to construct scan paths. They accomplish this through a tiered saliency approach, blending a coarse feature map on the periphery with a high-detail saliency map located at the point of fixation. Combining this with a temporal inhibition of return (IOR) mechanism, they are able to generate very plausible scan paths. Notable takeaways from this approach are the importance of selective suppression of attention or saliency in the periphery, combined with some mode of memory to implement inhibition of return, which [16] says is consistently found in studies of fixations and saccadic eye movements. The recent Deepgaze III model from [19] trained a deep neural network to predict and generate scan paths and fixations from fixation density maps (i.e. saliency) for free-viewing of natural images. The model generally outperformed other similar models (such as the previously mentioned STAR-FC) in various statistical measures on state-of-the-art datasets. The model is particularly interesting due to its modular architecture, allowing the authors to conduct ablation studies to quantify the effects and relevance of input data. It was found that scene content has much higher importance for fixation prediction than previous scan-path history. As noted by Tsotsos et al., one key limitation is the static nature of images and how shifting of gaze does not affect the image. Key challenges we address are how to implement inhibition of return given a dynamic environment, agent position and agent gaze, as well as how to select fixation points.
66
+
67
+ The problem generally with all these approaches is the focus on free viewing of static images. That is useful for trying to predict how someone may look at an image, however, as noted above, humans do not see in 2D static images. Human visual systems contend with stimuli changes from dynamic environments as well as egocentric effects when gaze movement occurs (i.e., changing where you look completely changes the information available to your vision). The pursuit of fixation prediction in active-vision applications; such as simulation or robotics, must contend with temporally changing environments, changes in agent position, changes in agent gaze orientation, and spatial-temporal memory.
68
+
69
+ § 3.3 GAZE CONTROL
70
+
71
+ One of the closest implementations to our approach is [29], where the Itti-Koch-Niebur (IKN) model [11] is used to generate saliency maps from the perspective of a virtual agent. This was used to determine which objects within view would be 'salient' and to queue them as targets in the scene database. They also implemented a form of memory where agents would keep track of scene objects that they have observed. The spirit of their work was 'sensory honesty', in trying to use as little simulation knowledge as possible. Our work shares this goal but attempts to take it further: gaze uses no information about the transforms of objects in the scene database and is driven entirely by visual stimulus. The most significant limitation of that work was the lack of a top-down attention component. This is addressed by our inclusion of parametric saliency maps from [17]. The benefit of using saliency maps is that the processing time is limited only by the cost of the attention model and the rendering pipeline. Another limitation is simply that humans don't have a scene database to draw information from. Approaching the problem of gaze and attention from a visual stimulus-driven standpoint opens the door for more grounded modelling of virtual humans. More complex totalistic models for automating gaze behaviour have been worked on for over two decades, in the form of cognitive models of attention and intent which form a high-level controller $\left\lbrack {3,{14},{22}}\right\rbrack$ . These models rather interestingly attempt to join ideas of task relevance and action to inform gaze movements. This is an often-overlooked factor, despite environmental conditions impacting visual understanding of the environment, which also impacts general locomotion and movements, such as the increase in foot clearance on steps under different lighting levels [10]. Our proposed approach poses a simpler parametric framework for authoring and generating gaze behaviours in a way which compartmentalizes attention and intent away from control. Our work fits in as a link between the vision-based approaches, like [29], and high-level control structures, such as [3, 14, 22]. Other pseudo-saliency driven gaze approaches do not use visual stimulus as input control for gaze, instead explicitly targeting transforms of objects within the scene $\left\lbrack {2,{26}}\right\rbrack$ . These approaches create reasonably believable procedural gaze animations; however, they are limited in scalability: as a scene becomes increasingly complex, the computational costs of such gaze models increase as well.
72
+
73
+ Gaze behaviour modelling is not only important in real-time applications but for a variety of purposes. For example, gaze behaviour can be inferred from motion capture data and automatically integrated into animation as done in [28].
74
+
75
+ A recent paper proposed a real-time method for driving gaze behaviour using a multi-layered saliency approach similar to ours [9], but it does not take into account 3D information from the scene such as velocity of agents, and the customization maps seem to be created for an entire viewpoint rather than for individual objects, so new customization maps would have to be made for each viewing direction of an object whereas our method allows semantic masking that is attached to the object and works for any viewpoint. Additionally, the use of a ML saliency model limits the customizability of the saliency maps, whereas our method uses PSM [17] which provides great flexibility and authorability.
76
+
77
+ < g r a p h i c s >
78
+
79
+ Figure 2: Examples of generated saliency maps from the perspective of an agent walking through a simulated urban crowd, using PSM weights specified in [18].
80
+
81
+ § 4 METHODS
82
+
83
+ Each method presented takes as input a square, grayscale image representing the saliency of an agent's view at that time, and then outputs a new orientation and the speed at which to interpolate to it from the current orientation. Once the new target orientation is reached, the process is repeated. Each method is designed to be simple, yet capable of plausibly generating different types of gaze movements. At the same time, they are agnostic to top-down attention which is instead encoded in saliency maps. Through the combination of saliency and the control parameters for the gaze-control methods, a wide range of intentional and passive gaze behaviours can be modelled.
84
+
85
+ We use the Predictive Avoidance Model (PAM) [13] for agent navigation, which senses obstacles and neighbours within some field of view and produces piece-wise predicted repulsive forces to avoid them. Our gaze behaviour models change the center of the field of view, which affects the neighbours and obstacles avoided. Further selecting avoidance targets based on saliency is planned future work.
86
+
87
+ § 4.1 SALIENCY MAP GENERATION
88
+
89
+ We utilize the parametric saliency maps (PSMs) method from [18]. This allows for saliency maps to be easily generated in real-time for virtual agents. PSM is a compact way of encoding pre-attentive and top-down factors. Parameters can be easily adjusted to suit different attentive loads. The saliency of an object from the perspective of an observer is computed from the combination of weighted parameters,
90
+
91
+ $$
+ S = W \cdot \left( {{w}_{d}{S}_{d} + {w}_{F}{S}_{F} + {w}_{v}{S}_{v} + {w}_{R}{S}_{R} + {w}_{I}{S}_{I}}\right) \cdot \left( {{W}_{M}{S}_{M}}\right) \cdot \left( {{W}_{A}{S}_{A}}\right) \tag{1}
+ $$
96
+
97
+ The values of weights are set by the observer. The parameter values come from the objects in the scene. For example, the interestingness factor ${S}_{I}$ is an intrinsic value from an object/character. It is an effective way to generate saliency maps in a simulation and change the attentive factors as needed, either globally through factor values, or on a per-agent basis through the factor weights. A Gaussian blur is applied afterwards to smooth out hard edges.
98
+
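+ As a sketch, Eq. 1 amounts to the following per-object computation; the weight and factor values below are placeholders set by the observer and scene, not values from the paper:
+
+ ```python
+ def psm_saliency(w: dict, s: dict) -> float:
+     """S = W*(w_d*S_d + w_F*S_F + w_v*S_v + w_R*S_R + w_I*S_I)
+            * (W_M*S_M) * (W_A*S_A), per Eq. 1."""
+     additive = (w["w_d"] * s["S_d"] + w["w_F"] * s["S_F"] + w["w_v"] * s["S_v"]
+                 + w["w_R"] * s["S_R"] + w["w_I"] * s["S_I"])
+     return w["W"] * additive * (w["W_M"] * s["S_M"]) * (w["W_A"] * s["S_A"])
+
+ # Placeholder observer weights and per-object factors (e.g. interestingness S_I).
+ weights = {"W": 1.0, "w_d": 0.3, "w_F": 0.2, "w_v": 0.2, "w_R": 0.2,
+            "w_I": 0.1, "W_M": 1.0, "W_A": 1.0}
+ factors = {"S_d": 0.5, "S_F": 0.1, "S_v": 0.4, "S_R": 0.0, "S_I": 0.8,
+            "S_M": 1.0, "S_A": 1.0}
+ print(psm_saliency(weights, factors))
+ ```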
99
+ § 4.2 PARTICLE MODEL
100
+
101
+ We now introduce the particle model for saliency-driven gaze control. This model treats the center of gaze as a particle which is acted on by driving forces. By imagining the center of gaze as a particle in a potential field we can use equations of motion to describe how it moves. The potential field comes from the saliency of what the agent is seeing.
102
+
103
+ § 4.2.1 PARTICLE UPDATE
104
+
105
+ The point which lies in the center of view (from a virtual camera) can be imagined as a point on the $3\mathrm{D}$ viewing sphere around an agent. Moving this point around the sphere is equivalent to changing the direction in which an agent is looking. Treating this point as a particle, gaze "forces" can be applied to it which change the direction of gaze.
106
+
107
+ For a given discrete time step $t$ , an agent's gaze state can be described by ${\mathbf{G}}_{t} = \left( {\theta ,\phi }\right)$ , which represents a point in spherical space for a fixed radius, where $(0,0)$ is the natural or forward-facing orientation. For a saliency map ${S}_{t}$ , which represents the current view's saliency, a potential field is defined as $V\left( \mathbf{G}\right)$ . By interpreting points of high saliency as potential wells in $V$ , following the gradient will drive the gaze-particle into highly salient regions. We can formulate the motion of the particle as,
108
+
109
+ $$
110
+ \ddot{\mathbf{G}} = - \overrightarrow{\nabla }V - {k}_{d}\dot{\mathbf{G}} \tag{2}
111
+ $$
112
+
113
+ where $- \overrightarrow{\nabla }V$ is the force applied by the potential to the gaze particle, based on what the agent is currently seeing in the current saliency map ${S}_{t}$ . The term $- {k}_{d}{\dot{\mathbf{G}}}_{t}$ represents damping with coefficient ${k}_{d}$ . The algorithm to update the position of the particle for step size $\lambda$ is given by,
114
+
115
+ $$
116
+ {\ddot{\mathbf{G}}}_{t} = - \overrightarrow{\nabla }V\left( {\mathbf{G}}_{t}\right) - {k}_{d}{\dot{\mathbf{G}}}_{t}
117
+ $$
118
+
119
+ $$
120
+ {\dot{\mathbf{G}}}_{t + 1} = {\dot{\mathbf{G}}}_{t} + \lambda \cdot {\ddot{\mathbf{G}}}_{t} \tag{3}
121
+ $$
122
+
123
+ $$
124
+ {\mathbf{G}}_{t + 1} = {\mathbf{G}}_{t} + \lambda \cdot {\dot{\mathbf{G}}}_{t + 1}
125
+ $$
126
+
127
+ Additionally, we can include a noise term $A \cdot {z}_{t}$ , where ${z}_{t} \in {\left\lbrack -1,1\right\rbrack }^{2}$ with amplitude $A$ , in the final position update, which gives added flexibility to model more complex gaze movements. The final update is then,
128
+
129
+ $$
130
+ {\mathbf{G}}_{t + 1} = {\mathbf{G}}_{t} + \lambda \cdot {\dot{\mathbf{G}}}_{t + 1} + A \cdot {\mathbf{z}}_{t} \tag{4}
131
+ $$
132
+
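+ A compact sketch of this update (Eqs. 2-4) as semi-implicit Euler integration; the potential gradient is abstracted as a callable, and the quadratic well in the example is illustrative:
+
+ ```python
+ import numpy as np
+
+ def step_gaze(G, G_dot, grad_V, lam=0.1, k_d=0.5, A=0.0,
+               rng=np.random.default_rng()):
+     """One particle update: Eq. 2 for acceleration, Eq. 3 for velocity,
+     Eq. 4 for position with the optional noise term A * z_t."""
+     G_ddot = -grad_V(G) - k_d * G_dot        # Eq. 2
+     G_dot = G_dot + lam * G_ddot             # Eq. 3 (velocity)
+     z = rng.uniform(-1.0, 1.0, size=2)       # z_t in [-1, 1]^2
+     G = G + lam * G_dot + A * z              # Eq. 4 (position)
+     return G, G_dot
+
+ # Example: the particle settles into a quadratic potential well at the origin.
+ G, G_dot = np.array([0.3, -0.2]), np.zeros(2)
+ for _ in range(100):
+     G, G_dot = step_gaze(G, G_dot, grad_V=lambda g: 2.0 * g)
+ print(G)  # near (0, 0)
+ ```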
133
+ An important consideration then is how to construct the potential fields from the saliency maps. Looking at examples in Fig. 2, one problem is that in most saliency maps there are large regions of little or constant saliency. This presents a problem because there would be no gradient in these regions. Another thing to consider is that highly salient stimuli should draw gaze towards it regardless of where it is in the visual field. Of course, the method should be computationally efficient in order to scale for large groups of agents. Calculating the potential field from an $n \times n$ image could be very costly, especially scaled to scenarios with many agents. The solution we chose is to use a parametric surface to model the potential by sampling from the saliency map. Cubic B-splines have useful properties which make them very effective and efficient for this task. Assuming an appropriately chosen number of control points, a cubic B-spline surface will have a non-zero gradient in almost all regions of the space, as well as being very fast to compute. Sampling from the saliency map, the heights of control points on a spline surface can be set giving a reasonable approximation of a potential field. $\overrightarrow{\nabla }V\left( \mathbf{G}\right)$ is then the gradient of the surface with respect to the $x - y$ plane.
134
+
135
+ The max-pool and average-pool algorithms are commonly used in computer vision to downscale images to a lower resolution. We use average pooling to pool values from a saliency map into the control points of a spline surface. For a lattice of $m \times m$ control points, the saliency map is divided into $m \times m$ windows. The height of each control point is set to the negated pooled value of its window, with a maximum depth of -1. At each time step $t$ the control points of the surface are set, giving the potential field. Fig. 3 shows a simple example for an $m = 7$ surface. Since the gaze particle is always at the center of the visual field, the gradient is always sampled at the center of the potential field as well. Following Eq. 3, the gaze particle's position on the viewing sphere is updated, changing the point of view.
136
+
137
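+ A minimal sketch of this construction, using SciPy's interpolating cubic spline surface as a stand-in for the paper's B-spline (names and the normalization are our own assumptions):
+
+ ```python
+ import numpy as np
+ from scipy.interpolate import RectBivariateSpline
+
+ def potential_from_saliency(S, m=7):
+     """Pool an n x n saliency map into m x m control heights and fit a
+     cubic spline surface approximating the potential field V."""
+     n = S.shape[0]
+     w = n // m                       # pooling window; assumes m divides n
+     pooled = S[:m * w, :m * w].reshape(m, w, m, w).mean(axis=(1, 3))
+     # Negate so salient regions become potential wells, max depth -1.
+     heights = -pooled / max(pooled.max(), 1e-9)
+     grid = np.linspace(0.0, 1.0, m)  # control lattice over the unit square
+     return RectBivariateSpline(grid, grid, heights, kx=3, ky=3)
+
+ def grad_at_center(V):
+     """Gradient sampled at the viewport center, where the particle sits."""
+     return np.array([V.ev(0.5, 0.5, dx=1), V.ev(0.5, 0.5, dy=1)])
+ ```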
+ We chose a 3D spline surface over other traditional $2\mathrm{D}$ methods because it allows us to treat gaze control as a physics problem. By adjusting the parameters of the spline surface, we get a continuous gradient without the issue of gradient dead zones.
138
+
139
+ <graphics>
140
+
141
+ Figure 3: Spline surface representing the potential field. Values from the saliency map are pooled into control points corresponding to quadrants.
142
+
143
+ <graphics>
144
+
145
+ Figure 4: Projection of the environment onto the agent's view. $\overrightarrow{\nabla }V$ is the gradient at the center of the potential field. For small values, $\overrightarrow{\nabla }V \simeq \left( {{\Delta \theta },{\Delta \phi }}\right)$, where $\left( {{\Delta \theta },{\Delta \phi }}\right)$ are the updates to the current camera orientation. This moves the view until the center is in a local minimum (typically corresponding to the center of an object of interest).
146
+
147
+ Algorithm 1 Particle Gaze Model
+
+ STATE $\leftarrow$ search
+ $G \leftarrow \left( {{0.5},{0.5}}\right)$ $\vartriangleright$ Center of viewport
+ $\dot{G} \leftarrow \left( {0,0}\right)$
+ while true do
+   $V \leftarrow \operatorname{SetPotential}\left( {S}_{t}\right)$
+   $\ddot{G} = - \overrightarrow{\nabla }V\left( G\right) - {k}_{d} \cdot \dot{G}$
+   if STATE == search then
+     $\lambda \leftarrow {\lambda }_{\text{search}}$
+     if FixationDetected( ) then
+       STATE $\leftarrow$ fixation
+     end if
+   else if STATE == fixation then
+     $\lambda \leftarrow {\lambda }_{\text{fixation}}$
+     if fixationTime $> {\tau }_{\text{fixation}}$ then
+       STATE $\leftarrow$ search
+     end if
+   end if
+   $\dot{G} \leftarrow \dot{G} + \lambda \cdot \ddot{G}$
+   $G \leftarrow G + \lambda \cdot \dot{G}$
+ end while
188
+
189
+ § 4.2.2 CONTROL
190
+
191
+ The primary parameters for control are the step size $\lambda$ and the damping coefficient ${k}_{d}$. A large step size causes the view to move quickly through the visual field but struggles to stay on target; a small step size tracks targets well once fixated but struggles to move to new targets. For this reason, we propose a two-state system for varying the behaviour of the particle's movement. In the search state, the step size is set to ${\lambda }_{\text{search}}$. The gaze is free to move around and will be drawn in by salient regions in the view. As the particle moves into a potential well, the gradient gets smaller. At this point, there needs to be some rule for detecting a fixation which works regardless of motion, whether egocentric or of the target object. We define a simple rule which measures the average gradient of the potential within some temporal window: if the average gradient drops below a threshold, a fixation has occurred and the state changes. In the fixation state, the step size is set to ${\lambda }_{\text{fixation}}$, and the state lasts for ${\tau }_{\text{fixation}}$ seconds.
192
+
193
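+ A minimal sketch of such a fixation rule (the window length and threshold below are illustrative values of our own, not the paper's):
+
+ ```python
+ from collections import deque
+ import numpy as np
+
+ class FixationDetector:
+     """Flags a fixation when the mean gradient magnitude over a sliding
+     temporal window drops below a threshold."""
+
+     def __init__(self, window=15, threshold=0.02):
+         self.history = deque(maxlen=window)
+         self.threshold = threshold
+
+     def update(self, grad_V):
+         self.history.append(np.linalg.norm(grad_V))
+         return (len(self.history) == self.history.maxlen
+                 and np.mean(self.history) < self.threshold)
+ ```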
+ After the fixation time, the saliency of the target would still affect the potential field, so it is important to implement an inhibition-of-return (IOR) mechanism to prevent getting stuck on one target. For the parametric saliency maps we utilize, the saliency of targets under the particle can be decayed. This simple rule allows the particle to move on to new targets naturally, encoding object-based IOR. A good default is a decay time of 1-2 seconds for general searching/foraging gaze behaviour; however, accurately replicating specific gaze behaviours would likely require different values depending on need. There is also added complexity in how exactly saliency returns after it has decayed; within the scope of this paper we do not discuss how this might be done because, in general use, targets will be well out of view before IOR wears off. If one imagines walking down a busy street, people, cars, signs, etc. will constantly be coming in and out of view, so we feel this rule is sufficient.
194
+
195
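+ The decay itself can be as simple as an exponential falloff while the target remains under the particle (a sketch with our own naming, using the 1-2 s default from above):
+
+ ```python
+ import math
+
+ def decay_saliency(s, dt, decay_time=1.5):
+     """Exponentially decay a fixated target's saliency (object-based IOR)."""
+     return s * math.exp(-dt / decay_time)
+ ```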
+ There are properties of the particle model which lend themselves well to controlling head movements as well as smooth-pursuit eye movements. First is the naturally smooth motion which arises towards targets of high interest. Second, for a small number of control points (we recommend 7 for a degree-3 spline surface), the method has a natural tendency to align with general areas of high interest at low resolution. This often means looking at the "center of mass" of areas with many highly salient targets, as opposed to specific individual elements. If objects of interest are sparse and spaced out, the gaze will instead align with the individual elements. Both behaviours arise without explicit programming. Setting the control surface to a higher resolution yields more spatial acuity, so the gaze will fall on narrower targets. Changing the step size $\lambda$ determines how fast the gaze moves towards targets, as well as how strongly those targets are tracked. Smooth-pursuit eye movements can be elicited with a high-resolution control surface (we recommend 11 for a degree-3 spline surface) and a larger ${\lambda }_{\text{fixation}}$ value. It is difficult to recommend a particular value for ${\lambda }_{\text{fixation}}$ because it scales with how the spline surface is defined, how steep its peaks are, and how fast objects move across the field of view, which is limited by the frame rate of a given simulation. The length of smooth pursuits is contextual. For a typical "search" behaviour, the length of fixations ${\tau }_{\text{fixation}}$ should average ${150} - {300}\mathrm{\;{ms}}$. For saccadic movements, a larger ${\lambda }_{\text{search}}$ value gives faster target acquisition. To emulate micro-saccades, we can perturb the final position using a noise term $A \cdot {\mathbf{z}}_{t}$, where the amplitude corresponds to less than ${0.1}^{ \circ }$ of visual angle. This depends on camera projection parameters, but a small-angle approximation $A \simeq {0.1}^{ \circ }$ is acceptable. Additionally, to improve accuracy and avoid oscillations, multiple steps can be taken per simulation time step. Within the scope of this work we do not describe how to switch between saccades and smooth pursuits, largely because smooth pursuits are typically intentional actions and need to be specified by the author of the behaviour.
196
+
197
+ § 4.3 PROBABILISTIC MODEL
198
+
199
+ § 4.3.1 TARGET SELECTION
200
+
201
+ In this section we introduce another method for saliency-driven gaze control, based largely on prior work in fixation prediction for static images. A saliency map can be thought of as a probability distribution over likely gaze targets. With this interpretation, fixation targets can be sampled from the distribution. For a probability distribution ${S}_{t}$, a random point $\mathbf{x} \sim {S}_{t}$ is drawn. Based on the projection parameters of the virtual camera, this point in the viewing image can be converted to an orientation. The agent's view can then be rotated to match this orientation.
202
+
203
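+ A minimal sketch of drawing $\mathbf{x} \sim {S}_{t}$ and converting it to an orientation (assuming a simple pinhole camera; all names are our own):
+
+ ```python
+ import numpy as np
+
+ def sample_gaze_target(S, rng=None):
+     """Draw a fixation target from a saliency map treated as a
+     probability distribution over viewport pixels."""
+     rng = rng or np.random.default_rng()
+     p = S.ravel() / S.sum()
+     row, col = np.unravel_index(rng.choice(S.size, p=p), S.shape)
+     # normalized viewport coordinates in [0, 1]^2
+     return np.array([(col + 0.5) / S.shape[1], (row + 0.5) / S.shape[0]])
+
+ def viewport_to_angles(x, fov_h, fov_v):
+     """Convert a viewport point to yaw/pitch offsets for a camera with
+     the given horizontal/vertical fields of view (radians)."""
+     yaw = np.arctan((2.0 * x[0] - 1.0) * np.tan(fov_h / 2.0))
+     pitch = np.arctan((2.0 * x[1] - 1.0) * np.tan(fov_v / 2.0))
+     return yaw, pitch
+ ```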
+ § 4.3.2 CONTROL
204
+
205
+ Algorithm 2 Probabilistic Gaze Model
+
+ Def: LookAt(point, time)
+
+ STATE $\leftarrow$ search
+ $G \leftarrow \left( {{0.5},{0.5}}\right)$ $\vartriangleright$ Center of viewport
+ while true do
+   if STATE == search then
+     $\mathbf{x} \leftarrow$ SamplePoint$\left( {S}_{t}\right)$
+     LookAt$\left( {\mathbf{x},\Delta {t}_{\text{saccade}}}\right)$
+     STATE $\leftarrow$ fixation $\vartriangleright$ Wait until reached target
+   else if STATE == fixation then
+     ${S}_{W} \leftarrow {S}_{t}$.window$\left( {R}_{\text{focus}}\right)$
+     $\mathbf{x} \leftarrow$ SamplePoint$\left( {S}_{W}\right)$
+     if fixationTime $> {\tau }_{\text{fixation}}$ then
+       STATE $\leftarrow$ search
+     else
+       LookAt$\left( {\mathbf{x},\Delta {t}_{\mu \text{saccade}}}\right)$
+       Wait$\left( {\tau }_{\mu \text{fixation}}\right)$ $\vartriangleright$ Hold for length of $\mu$-fixation
+     end if
+   end if
+ end while
244
+
245
+ Given a point $\mathbf{x} \sim {S}_{t}$ in viewport coordinates, a line can be drawn from the camera center through this point in world space. This vector represents an orientation ${G}^{\prime }$. The current camera orientation $G$ can then be interpolated to this new orientation over a desired time; the speed of the rotation is determined by the interpolation time.
246
+
247
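+ A minimal sketch of this interpolation using spherical linear interpolation (the helper name and frame rate are our own assumptions):
+
+ ```python
+ import numpy as np
+ from scipy.spatial.transform import Rotation, Slerp
+
+ def look_at(G_from, G_to, dt_saccade, fps=60):
+     """Orientations rotating the camera from G_from to G_to over
+     dt_saccade seconds (dt_saccade > 0); inputs are Rotation objects."""
+     slerp = Slerp([0.0, dt_saccade], Rotation.concatenate([G_from, G_to]))
+     steps = max(int(round(dt_saccade * fps)), 1)
+     return slerp(np.linspace(0.0, dt_saccade, steps + 1))
+ ```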
+ Control is divided into two primary states: search and fixation. In the search state, a point is sampled from the entire field of view, and the view is oriented to this target over $\Delta {t}_{\text{saccade}}$. The angular speed of the saccade is its angular amplitude divided by $\Delta {t}_{\text{saccade}}$. Once this target is picked, the state transitions to fixation control. Over a total time ${\tau }_{\text{fixation}}$, saliency outside a small foveated region of radius ${R}_{\text{focus}}$ is suppressed. Within this fixation, new points are drawn from the foveated region of interest as targets for micro-fixations. The view is interpolated to each such point over $\Delta {t}_{\mu \text{saccade}}$ and held for ${\tau }_{\mu \text{fixation}}$, at which point a new target is selected. This repeats over the entire fixation length. Once the fixation has concluded, the state returns to search. Each parameter can be set statically or dynamically depending on the desired behaviour.
248
+
249
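+ The foveated window can be implemented by zeroing saliency outside a disc of radius ${R}_{\text{focus}}$ around the current target (a sketch with our own naming):
+
+ ```python
+ import numpy as np
+
+ def foveate(S, center, R_focus):
+     """Suppress saliency outside a circular foveated region; center and
+     R_focus are in normalized viewport units."""
+     h, w = S.shape
+     yy, xx = np.mgrid[0:h, 0:w]
+     dist = np.hypot(xx / w - center[0], yy / h - center[1])
+     return np.where(dist <= R_focus, S, 0.0)
+ ```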
+ This method of control is designed to allow modeling of target-point selection for saccade and micro-saccade eye movements. Depending on the level of detail desired, keeping $\Delta {t}_{\text{saccade}}$ and $\Delta {t}_{\mu \text{saccade}}$ constant will achieve the linear eye velocities expected for angular distances less than ${20}^{ \circ }$, which typically reach up to ${300}^{ \circ }/s$. However, for most applications it suffices to have a very small or zero travel time (i.e., instantaneous saccades). Changing the ${\tau }_{\text{fixation}}$ parameter affects how much searching is done in the visual field: veering from typical reported values of around ${100} - {200}\mathrm{\;{ms}}$ results in rapid eye-darting for smaller values, or more focused eye movements for larger values. Tightening or enlarging the focus region ${R}_{\text{focus}}$ will, respectively, either restrict the space of micro-saccade movements (thus decreasing their amplitude) or allow more outside stimuli to draw micro-saccades. Depending on the desired behaviour, either can be appropriate. For example, a character reading a book would have very infrequent saccades (large or infinite ${\tau }_{\text{fixation}}$), frequent micro-saccades (small ${\tau }_{\mu \text{fixation}}$), and a small radius of focus ${R}_{\text{focus}}$. Similarly to the particle method, we implement inhibition of return as a decay in object saliency.
250
+
251
+ § 5 RESULTS AND EVALUATION
252
+
253
+ Here we present evaluations of our models. First, it should be noted that the PSM saliency maps our models are predicated on have previously been evaluated against SALICON, a state-of-the-art machine-learning saliency model, with high correspondence [18].
254
+
255
+ We compare our particle model fixations against pyStar-FC [38], a notable multi-saccade generator. The pyStar-FC model generates saccades for static images, so we construct scenarios in our virtual environment where neither the viewing agent nor the pedestrian agents are moving, in order to create static images for comparison. The gaze movement of the viewing agent can then be projected onto this static image to show the scanpath of the agent using our model. We then compare this scanpath to the output of pyStar-FC on the same RGB image. We used mostly default parameters for pyStar-FC, using DeepGaze II with ICF as the saliency model [21]. The input viewing size was modified to match the field of view of our agents. Changing the inhibition-of-return decay-rate parameter in pyStar-FC did not produce significantly different results, so it was left at its default.
256
+
257
+ The results emphasize the authorability of our method: by adjusting its parameters, our particle model can be tuned to match the output of pyStar-FC or of other models. Ten pairs of images were compared, five of which are shown in Fig. 5. It is worth noting that our use case was not the intended purpose of either DeepGaze II or pyStar-FC, so there may be biases in their output on our virtual images. The tendency of pyStar-FC to fixate on the neon signage is likely a result of bias in the datasets used to create these models, which presumably consisted of well-lit, non-virtual environments. Thus the output given by pyStar-FC may not be representative of what humans would look at while navigating this environment. Regardless, our aim in this comparison is simply to illustrate the authorability of our model and to show that, by adjusting the parameters of our particle model, we can match the output of pyStar-FC or other models with a high degree of similarity.
258
+
259
+ Table 1: K nearest neighbour similarity scores for five trials comparing our method’s fixation points with pyStar-FC’s, where k=2.
260
+
261
+ | Trial | KNN Similarity |
+ | --- | --- |
+ | 1 | 0.965 |
+ | 2 | 0.976 |
+ | 3 | 0.989 |
+ | 4 | 0.989 |
+ | 5 | 0.988 |
281
+
282
+ We therefore compared fixations from our particle model to pyStar-FC fixations using $k$-nearest-neighbour similarity for the same five trials. The resulting KNN similarity scores were all over 0.95, indicating a high degree of similarity; the results are summarized in Table 1. Thus we show that we are able to match other models with a high degree of similarity. Matching real human gaze data should therefore be possible and is important planned future work. However, it should be emphasized that our goal is not to match human gaze data but to present a flexible and customizable system for authoring gaze behaviour in virtual agents, which we have shown.
283
+
284
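+ The exact KNN similarity formula is not spelled out above; one plausible form (entirely our own construction, not the paper's) scores each fixation by its distance to its $k$ nearest neighbours in the other scanpath:
+
+ ```python
+ import numpy as np
+ from scipy.spatial import cKDTree
+
+ def knn_similarity(A, B, k=2, scale=1.0):
+     """Mean exp(-d/scale) over distances from each point in A to its
+     k nearest neighbours in B; A and B are (n, 2) fixation arrays."""
+     d, _ = cKDTree(B).query(A, k=k)
+     return float(np.exp(-d / scale).mean())
+ ```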
+ We can make some comparisons between the two models. Figure 6 shows target selection for both. The particle model drives gaze in the direction of the potential field gradient. The probabilistic model identifies potential gaze targets, highlighted with red circles, and chooses one probabilistically based on the saliency at each location. Once the probabilistic model chooses a target, gaze snaps to that location, similar to a human saccade. Additionally, the ability to perform microsaccades is one of the defining features of the probabilistic model. Reducing the field of view once a target is selected produces a zoomed result from which microsaccade targets can then be selected. Figure 7 shows this zoomed effect in comparison with the particle model.
285
+
286
+ Our models also account for saliency decay. While an agent fixates on something in the scene, we perform a raycast in the fixation direction; when it hits the target, the saliency of that object for that viewer begins to decay, and the decay continues over time while the target is fixated on. An example is shown in Figure 9, where two agents view the same man with different saliencies due to saliency decay. The decay rates for the two models differ in these examples, but they can easily be parameterized to produce different gaze behaviours, such as nervous eye movements versus a watchful gaze. While the decay rate for the probabilistic model is fast here in order to encourage quick saccades, the particle model was set to a slower decay rate. More research is needed to determine optimal decay rates, which is important planned future work; we hypothesize that these values relate to the context and stylization of the behaviour. An in-depth statistical evaluation of our gaze models is also planned future work.
287
+
288
+ We also note that our method provides for multi-agent saliency evaluation, as shown in Figure 8. This affords complex scenes with a multiplicity of independent gaze controllers automatically driven by diverse scenes; that is, crowds respond naturally to the makeup of a scene, from signage to fellow pedestrians.
289
+
290
+ § 6 DISCUSSION
291
+
292
+ The strength of this approach is that the user does not need to explicitly define gaze patterns, but instead only needs to define an agent's visual task or intent. One of the main principles of this work is creating control which adheres to the idea of sensory honesty. Prior works in simulated gaze control have been able to create reasonably believable gaze movements for characters by utilizing information from the simulation itself, such as the scene database, to locate gaze targets and track their positions. The hope is that we can start to think of autonomous virtual humans, and how they actively view their environment, in terms of their intentions, goals and knowledge. We could describe what they are attending to and what their visual task is without having to write explicit patterns for how they should then generate gaze movements. Perhaps the most obvious extension to our work is the definition of high-level control for generating saliency maps and appropriately selecting the correct control parameters. SDGC provides only one part of a full solution for generating plausible gaze movements. Ultimately, this requires thinking about how saliency (attention) should be defined and its interplay with an agent's intent. It fundamentally changes how we view and approach gaze for virtual agents, from asking "what is this character looking at?" to instead asking "what is this character interested in, and what are they trying to do?". In practical terms, this means deciding how to define saliency and what kinds of gaze movements to use. Of course, an obvious criticism is that, as a result, no general solution is offered which covers all or a large number of gaze behaviours. Even so, our framework expands the capabilities of similar works like [29] by including a top-down pre-attentive component in the form of parametric saliency maps from [17], which allows encoding things like novelty or task relevance directly into saliency. High-level controllers for automating attending behaviour, such as the extensive work from [3, 14, 22], could be combined with our Saliency-Driven Gaze Control (SDGC) approach to create a holistic saliency-driven model which takes into account agent action and intent, and subsequently delegates saliency generation and SDGC methods to generate the final gaze behaviours. This would also allow us to improve our implementation of inhibition of return, which currently does not address how the effect is modulated by the intentions of the viewer.
293
+
294
+ Our methods are sensitive to the model parameters, and to the parameters of PSM, which control saliency map generation. Parameter sensitivity and tuning for PSM were described in [17, 18]. For the particle model, it is important to choose appropriate sampling points and gradient step sizes for the best results; large step sizes cause targets to be missed, which can produce oscillations. The saliency decay rate, fixation duration, and fixation conditions should be chosen appropriately for the desired behaviour in both models. For example, larger decay rates and fixation durations, and lower thresholds for triggering fixations, produce quicker, darting gaze behaviours.
295
+
296
+ A limitation of both methods is that only the agent's current view is considered; objects outside the current field of view do not impact gaze behaviour. The saliency decay mechanism models some aspects of memory, since the decayed saliency amount is remembered even if a target leaves an agent's field of view and later re-enters it. However, a complete model would include a model of memory that keeps track of recently seen objects and their relative positions, so that agents could look back at them directly even when they are outside the field of view. Matching our model parameters to real human gaze data remains important planned future work. However, we have illustrated that our model is highly flexible and customizable, and can be used to author a variety of virtual gaze behaviours.
297
+
298
+ § 7 CONCLUSION
299
+
300
+ We presented two Saliency-Driven Gaze Control (SDGC) methods, the particle model and the probabilistic model, which, when combined with appropriately defined saliency (attention), are able to cover a wide range of well-studied and well-understood human gaze movements. SDGC takes as input a real-time map of attention in an autonomous agent's visual field and generates gaze movements. The two methods are able to elicit physiologically based head movements, smooth pursuits, saccades and microsaccades. For a defined visual task, we show that appropriate gaze behaviours arise through the combination of parameterized visual attention and gaze movements.
301
+
302
+ <graphics>
303
+
304
+ Figure 5: For (a) an RGB image, (b) gaze heatmaps for our particle method overlaid on the RGB image, and a comparison of scanpath traces between (c) our method and (d) pyStar-FC.
305
+
306
+ <graphics>
307
+
308
+ Figure 6: Comparison of target selection between the two models. Left: Particle model, which drives gaze in the direction of the potential field gradient. Right: Probabilistic model target selection. Red circles indicate potential target locations, which once selected will trigger a saccade.
309
+
310
+ <graphics>
311
+
312
+ Figure 7: Left: Particle model saliency map. Right: Probabilistic model view of the same subjects. The Probabilistic model uses a reduced field of view to produce a zoomed effect for the purpose of facilitating microsaccades.
313
+
314
+ <graphics>
315
+
316
+ Figure 8: Two agents walking while using the particle gaze model simultaneously. Top: RGB view from behind the agent. Bottom: Saliency map from the agent's POV. The small red line in the center of the saliency map indicates the current direction of the particle gradient.
317
+
318
+ <graphics>
319
+
320
+ Figure 9: One agent walks behind another and both see the same man sitting on a bench. On the right the man's saliency is lower due to saliency decay during fixation. Top: RGB view from behind the agent so that the head orientation is visible. Bottom: Saliency map from the agent's POV using the particle model.
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/yWIplfQfx8/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,341 @@
1
+ # EnchantedBrush: Animating in Mixed Reality for Storytelling and Communication
2
+
3
+ ![01963e03-f9f4-771e-b91d-b05a07ee8cc9_0_215_266_1363_946_0.jpg](images/01963e03-f9f4-771e-b91d-b05a07ee8cc9_0_215_266_1363_946_0.jpg)
4
+
5
+ Figure 1: EnchantedBrush is a mixed-reality sketching interface for creating animated storyboards that can interact with the physical surroundings. It supports easy storytelling as if using a magical paintbrush to A) draw a vehicle and B) give it motion lines so that C) the vehicle comes to life with automatic motion and sound effects (the black represents elements in the real world, and the blue represents elements in the virtual world). This animation can be achieved using D) a set of interactive motion and sound brushes designed in a mixed-initiative interaction paradigm.
6
+
7
+ ## Abstract
8
+
9
+ Recent progress in head-worn 3D displays has made mixed-reality storytelling, which allows digital art to interact with the physical surroundings, a new and promising medium to visualize ideas and bring sketches to life. While previous works have introduced dynamic sketching and animation in 3D spaces, visual and audio effects typically need to be manually specified. We present EnchantedBrush, a novel mixed-reality approach for creating animated storyboards with automatic motion and sound effects in real-world environments. People can create animations that interact with their physical surroundings using a set of interactive motion and sound brushes. We evaluated our approach with 12 participants, including professional artists. The results suggest that EnchantedBrush facilitates storytelling and communication, and utilizing the physical environment eases animation authoring and simplifies story creation.
10
+
11
+ Index Terms: Human-centered computing-Human computer interaction (HCI)—Interaction paradigms—Mixed / augmented reality; Human-centered computing-Interaction design-Interaction design theory, concepts and paradigms
12
+
13
+ ## 1 INTRODUCTION
14
+
15
+ Sketching, a ubiquitous communication medium for visual thinking, plays a critical role in storytelling, idea expression, and creativity $\left\lbrack {{26},{29},{36},{38},{44},{45}}\right\rbrack$. For more accessible animation authoring and greater expressive power, human-computer interaction researchers have explored dynamic sketching and interactive animation over the decades in broad-ranging fields such as arts and storyboards [20-22], prototyping [14,34], data visualization [28,46], and education $\left\lbrack {{40},{50}}\right\rbrack$. However, these works do not consider interaction with the physical surroundings.
16
+
17
+ Mixed Reality (MR) technologies further introduced new possibilities into dynamic sketching and interactive animation by expanding the domain of interactions. The painting canvas is no longer constrained to an electronic screen but becomes a broader canvas in the form of a freeform real-world environment. This freedom has inspired numerous Virtual and Augmented Reality (VR/AR) works and commercial products with sketch-based interaction $\left\lbrack {2,8,{19},{25},{27},{31},{35},{42}}\right\rbrack$. However, these works either lack support for animation and sound effects or require manual effort to specify such effects, which can be time-consuming and redundant when creating a story. Moreover, previous works do not exploit interaction with the physical surroundings as part of sketching or storytelling, which limits the use of physical objects in communication.
18
+
19
+ In this work, we present EnchantedBrush, a novel MR interface for dynamic sketching and interactive storyboards (Figure 1). Our approach utilizes a mixed-initiative interaction paradigm - the automatic mode and the customized mode - to assist users in designing visual and sound effects. Under the automatic mode, sketched elements are animated automatically with lively sounds based on their semantic nature. Under the customized mode, users can manually define animation and add sounds. This lets users quickly create visualizations and storyboards without spending intensive time on manual effect specifications. Under both modes, virtual sketched elements can interact with the physical surroundings and generate proper sound effects. This saves users' time by releasing them from preparing object models and setting up backgrounds.
20
+
21
+ Our approach works as follows: 1) element sketching: users design their storyboards based on the physical surroundings and then sketch elements in mid-air. 2) sketch recognition: our system adopts a sketch recognition model to recognize the sketched elements and provides the top two recognition results for users to choose from. 3) motion and sound designing: our system will automatically add the motion and sound effects to the sketched elements according to their semantic nature. In addition to the automatic behavior, users could also customize the animation by defining motion paths and sound effects by themselves. 4) animation: users animate the sketched storyboard and make it interactive with the real-world environment.
22
+
23
+ We evaluated our approach with 12 participants, including three professional artists. The results show the usability of EnchantedBrush for easy storyboard creation and effective communication. We also discuss potential application scenarios based on the interviews with the participants.
24
+
25
+ To summarize, our contributions are:
26
+
27
+ - EnchantedBrush, a sketch-based interface in mixed reality for creating animated storyboards that can interact with the physical surroundings
28
+
29
+ - A mixed-initiative interaction paradigm for visual and audio design, including the automatic mode and the customized mode
30
+
31
+ - User evaluation of EnchantedBrush and a set of its potential application scenarios such as for the animation industry and children teaching
32
+
33
+ ## 2 RELATED WORK
34
+
35
+ Our work relates to prior research in 2D dynamic sketching interfaces, 3D animation authoring tools, and sketch-based interactions in VR/AR.
36
+
37
+ ### 2.1 Dynamic Sketching Interfaces in 2D
38
+
39
+ Traditional 2D sketching applications have been well investigated and developed. Various dynamic sketching tools cover different features for animation authoring. K-Sketch [7] introduces a pen-based system to create simple prototypes of animations. Kazi et al. build Draco [21] and Kitty [20], adding kinetic textures to a collection of objects for continuous animation effects and allowing users to customize different functional relationships between object entities for interactive illustrations. Energy-Brushes [48] investigates the role of flow particles in stylized animations to create fundamental dynamics. Motion Amplifiers [22] turns the principles of 2D animation into a set of amplifiers that users can apply to animate illustrations. Sketching can also allow for natural interactions during whiteboard presentations and visualization, as demonstrated by SketchStory [28] and in fluid systems such as the illustration project by Zhu et al. [50]. Moreover, Motion Doodles [43] shows how sketching can animate the motion of characters, while Vignette [23] focuses on texture creation using interactive sketch gestures. These 2D works and interfaces allow people to utilize the power and nature of sketching to create animation and texture, and they establish the fundamental applications of sketching. Our work keeps the core design idea of a sketching tool but expands it further, implementing the system in mixed reality to allow sketches to interact with physical environments.
40
+
41
+ ![01963e03-f9f4-771e-b91d-b05a07ee8cc9_1_922_148_712_150_0.jpg](images/01963e03-f9f4-771e-b91d-b05a07ee8cc9_1_922_148_712_150_0.jpg)
42
+
43
+ Figure 2: Motion and sound examples in storyboards, illustrations, and icons: (A) The motion lines behind the car indicate the car is running. (B) The path lines of the ball illustrate its movement trajectory. (C) The sound lines for the person show they are speaking.
44
+
45
+ ### 2.2 Animation Authoring Tools in 3D
46
+
47
+ Animation tools in 3D add one more dimension of possibility and contribute to more expressive and lively graphics. For instance, human bodies can be used to animate static 3D meshes and craft interactions with graphical elements [6,39,49]. 3D puppetry [15] provides a real-time tool for 3D model animation using physical objects as input. As for character motion, Gambaretto and Piña [9] investigate facial expressions as a way to author character animation, while Glauser et al. [12] explore a tangible interface enabling fluid manipulation. Guay et al. [13] study a stroke method using a space-time curve to match 3D character motion based on $2\mathrm{D}$ lines. Recently, Ma et al. [32] propose stylized 3D animations in a layered authoring interface. However, these applications are still traditional screen-based interfaces. HCI researchers have further studied the potential of the interactive experience in VR. Hwang et al. [16] propose a performance-based animation system for virtual object manipulation. Another example is MagicalHands [4], which achieves animation authoring in VR using mid-air hand gestures. While these projects take the generation of animation to a new level, they rely on body-motion interaction instead of sketching interaction. We focus on how 3D sketching in mixed reality can enhance users' ability for storytelling and idea visualization.
48
+
49
+ ### 2.3 Sketch-based User Interfaces in VR and AR
50
+
51
+ Sketching in 3D is becoming an increasingly popular medium, not only in pure screen-based interaction but also in MR applications. There are a number of commercial sketching and animation products in VR/AR such as Tilt Brush [18], Medium [17], AnimVR [35], Quill [31], MasterpieceVR [41] and Tvori [47]. These systems allow people to create virtual sketches in a free manner or provide an interface for animating drawings. However, the interfaces of these consumer tools are not friendly to non-experts and require a learning curve. HCI researchers have been working on various system designs in order to find better solutions for sketch-based interfaces. For example, HoloARt [1] investigates how people could paint in the air via finger gestures. VRSketchIn [8] uses interchangeable $2\mathrm{D}$ and $3\mathrm{D}$ sketching techniques to create artifacts in VR. Other sketch-based 3D painting and modeling tools such as CavePainting [25], SweepCanvas [30], systems for model retrieval [11], Mobi3DSketch [27], and ChalkTalk [37] all provide users with a designed system to work in a virtual 3D scene. Immersive sketching tools including SymbiosisSketch [2] and PintAR [10] can create sketched elements that spatially integrate with the real world. RealitySketch [42] provides an AR interface for embedding dynamic and responsive visualizations within the physical environment. A bi-directional sketching interaction is introduced in Sketched Reality [19], in which virtual objects and physical robots affect each other through AR sketches and tangible interfaces.
52
+
53
+ While these works address interactive sketching and design using different approaches, users still need to manually specify the visual and audio effects at each step of the animation. The existing works do not take advantage of the semantic properties of sketched elements in the authoring process, and most of them do not support real-time interaction with the physical environment. In contrast, EnchantedBrush investigates a mixed-initiative interaction paradigm and uses the recognition of sketched elements to fill this unexplored area. With our approach, users can create interactive animation in a real-world setting with expressive sound effects. Our contribution is a novel interaction approach in mixed reality using interactive motion and sound brushes for animating a storyboard.
54
+
55
+ ![01963e03-f9f4-771e-b91d-b05a07ee8cc9_2_148_147_1502_502_0.jpg](images/01963e03-f9f4-771e-b91d-b05a07ee8cc9_2_148_147_1502_502_0.jpg)
56
+
57
+ Figure 3: Interaction techniques. A) Drawing an element. B) Drawing a path. C) Drawing motion lines. D) Drawing sounds. E) Selecting and creating a dependency relationship. F) Interacting with physical surroundings.
58
+
59
+ ## 3 ENCHANTEDBRUSH
60
+
61
+ The goal of EnchantedBrush is to allow people to visualize their ideas quickly and easily in an MR environment. To this end, we studied the visual grammar commonly used in storyboards, comics, illustrations, and icons. We found that motion and sound effects are two key components that make a scene dynamic, and creators and storytellers rely heavily on visual iconography to present these two effects, as introduced by Scott McCloud in the book Understanding Comics [33]. Figure 2 displays three motion and sound examples. Therefore, we focus on visual and sound design for animated storyboards. We propose a mixed-initiative interaction paradigm for authoring motion and sound effects. More precisely, our system automatically associates a default motion effect and a default sound effect with each sketched object by leveraging its semantic nature. For example, the default motion effect for an airplane is flying around, and its default sound effect is an engine sound. Furthermore, to enhance the expressiveness of our system, we allow users to customize the motion and sound effects based on their own design ideas.
62
+
63
+ The remainder of this section is organized as follows. We first introduce the design concepts of EnchantedBrush. Next, we demonstrate using EnchantedBrush for animation by a storytelling example.
64
+
65
+ ### 3.1 Concepts
66
+
67
+ Based on Understanding Comics [33], we observed that storyboards are usually built upon the following concepts: Drawing Elements, Drawing Motion, Drawing Sound, Selecting and Creating Relationship. Accordingly, we design Sketch Brush, Path Brush and Motion Brush, Sound Brush, and Selection to achieve each concept, respectively.
68
+
69
+ #### 3.1.1 Drawing Elements
70
+
71
+ Sketch Brush allows users to draw freely in the air (Figure 3A). Once a sketched element is finished, the back-end sketch recognition model will take the sketch as input and send back the sketch recognition results to the user.
72
+
73
+ #### 3.1.2 Drawing Motion
74
+
75
+ Path Brush and Motion Brush are used to animate the sketches. There are two behavior modes in EnchantedBrush: automatic mode and customized mode. In the automatic mode, the system animates the sketch automatically based on sketch recognition and gives it a common behavior, releasing users from manually prescribing how an element should be animated. The common behavior is decided by a motion verb commonly associated with the given element. For instance, if the drawn element is a basketball, it will fall and bounce on the ground, while an airplane will fly around by default. Path Brush is designed for use in the customized mode, when the user wants a sketched element to follow a specific trajectory (Figure 3B). When a customized path is provided, the system switches to the customized mode and animates the element along the provided path. In either mode, the animation is started with motion lines drawn by Motion Brush (Figure 3C). The length of the motion lines is parameterized and proportional to the element's movement speed: longer motion lines cause the element to move faster, so users can control the element's speed and adapt it to the unique needs of their story.
76
+
77
+ #### 3.1.3 Drawing Sound
78
+
79
+ EnchantedBrush supports sound effects by incorporating lively audio. This enhances the expressiveness of storytelling and the sense of immersion. Upon the recognition of a sketched element, each element is automatically assigned three sound properties aligned with their semantic nature:
80
+
81
+ 1. Self sound is the unique sound commonly produced by an element. For example, the self-sound of an ambulance is a siren, while the self-sound of a dog is barking.
82
+
83
+ 2. Movement sound refers to the sound made by an element while moving. For example, the movement sound of a car is an engine sound, and for a human, the movement sound is a walking sound.
84
+
85
+ 3. Collision sound is the sound made when an element collides with another element. For example, the collision sound of a car is a crash sound, and it will be triggered when the car collides with either a physical object or a virtual element.
86
+
87
+ Sound Brush helps users add the self sound of a drawn element (Figure 3D). Users can customize and specify when the self sound should be played. For simplicity, the sounds of movement and collision are played automatically once the element starts moving or a collision is detected.
88
+
89
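+ The three sound properties map naturally to a small per-category record; a minimal sketch (the type and category names are illustrative, not the system's actual ones):
+
+ ```python
+ from dataclasses import dataclass
+
+ @dataclass
+ class SoundProfile:
+     self_sound: str       # unique sound, e.g. a siren for an ambulance
+     movement_sound: str   # played while the element moves, e.g. an engine
+     collision_sound: str  # played on contact, e.g. a crash
+
+ SOUNDS = {"ambulance": SoundProfile("siren", "engine", "crash")}
+ ```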
+ ![01963e03-f9f4-771e-b91d-b05a07ee8cc9_3_165_145_703_406_0.jpg](images/01963e03-f9f4-771e-b91d-b05a07ee8cc9_3_165_145_703_406_0.jpg)
90
+
91
+ Figure 4: Example storyboard: aliens escape from a UFO before it collides with a space station (real-world objects replace beige elements).
92
+
93
+ #### 3.1.4 Selecting and Creating Relationships
94
+
95
+ Selection enables users to select the element they want to edit and animate among multiple elements. It is implicitly activated based on the proximity of the brush to an element, and the selected element is highlighted with a yellow outline, as shown in Figure 3E. Once an element is selected, users can animate it using the designed brushes. In addition, to enhance the expressivity of the system, EnchantedBrush also supports inter-object animation by creating dependency relationships. For example, in Figure 3E, the UFO element is first selected, and a customized trajectory is provided for it. Users can then use the trajectory as a timeline and draw another element (i.e., the cow) along the line, as if inserting a keyframe in the timeline. The cow is treated as a dependency of the UFO, and its appearance depends on the movement of the UFO. Moreover, as illustrated in Figure 3F, a sketched virtual element (i.e., the UFO) can interact with the physical surroundings (i.e., the table). Users can use physical objects as object models and backgrounds in their story, which simplifies story creation.
96
+
97
+ ### 3.2 Composing Animation
98
+
99
+ In this section, we demonstrate how EnchantedBrush visualizes the story in Figure 4 using the interaction concepts introduced above. Figure 5 illustrates the design process.
100
+
101
+ We start with sketching a UFO element (Figure 5A) with Sketch Brush. Since we want the element to move in a specified behavior, we switch to Path Brush and draw a trajectory to make it collide with the shelf (the purple line in Figure 5B). The alien escapes by parachute before the accident happens. The position of the appearance of the parachute depends on the movement of the UFO, so we compose a relationship between the UFO and the parachute by selecting the UFO (Figure 5B). When the escape happens, the UFO makes an alarm sound, and we can draw this by using Sound Brush (Figure 5C). We then switch back to Sketch Brush and draw the parachute element where the escape occurs (Figure 5D). Now the story is ready to be animated, so we switch to Motion Brush and animate the story using motion lines (Figure 5E). The movement sound is played automatically when the UFO starts moving, and the collision sound is played automatically when it crashes into the space station, i.e., the shelf (Figure 5F).
102
+
103
+ ## 4 Prototype Implementation
104
+
105
+ We developed a prototype for the proposed concepts. In this section, we describe the implementation overview and detail the important components of our system including spatial mapping, sketch recognition, and object automatic behaviors.
106
+
107
+ ![01963e03-f9f4-771e-b91d-b05a07ee8cc9_3_923_147_726_767_0.jpg](images/01963e03-f9f4-771e-b91d-b05a07ee8cc9_3_923_147_726_767_0.jpg)
108
+
109
+ Figure 5: Steps to visualize a storyboard using EnchantedBrush: A) Sketch the main element of the story. B) Provide a customized trajectory (purple line) and select the animated element. C) Add the sound effect of an alarm. D) Sketch the dependency element (the parachute below the UFO). E) Animate the storyboard. F) Auto-play the movement sound and the collision sound.
110
+
111
+ ### 4.1 System Overview and Setup
112
+
113
+ Our system requires two hardware components: Oculus Quest 2 as the Head-Mounted Display (HMD) and ZED Mini as the mixed-reality camera. The ZED camera is mounted on top of the Oculus HMD so that the virtual world resides in the real world. We use Oculus controllers as the input device.
114
+
115
+ Similar to SymbiosisSketch [2], we configure the setup of EnchantedBrush based on the bimanual practice of painters in real life: painters use their dominant hand to hold the paintbrush and their non-dominant hand to hold the palette. In our case, the controller held in the dominant hand acts as the main paintbrush, while the controller in the non-dominant hand acts as the palette, which is a set of switchable brushes (Section 3.1). The system is implemented in the Unity engine in $\mathrm{C}\#$.
116
+
117
+ ![01963e03-f9f4-771e-b91d-b05a07ee8cc9_3_926_1552_719_404_0.jpg](images/01963e03-f9f4-771e-b91d-b05a07ee8cc9_3_926_1552_719_404_0.jpg)
118
+
119
+ Figure 6: Sketch recognition process: the 3D strokes are first projected onto a 2D best-fitting plane to get a flattened image. Flattened images are then converted to normalized bitmaps. The recognition model offers a list of predictions with confidence based on the bitmaps. Sketch A is an airplane, and Sketch B is a basketball.
120
+
121
+ ![01963e03-f9f4-771e-b91d-b05a07ee8cc9_4_141_143_1509_714_0.jpg](images/01963e03-f9f4-771e-b91d-b05a07ee8cc9_4_141_143_1509_714_0.jpg)
122
+
123
+ Figure 7: Storyboards used in the evaluation sessions (real-world objects replace beige elements): A) A ball bounces on the table and makes a bouncing sound. B) An airplane flies around with some engine sound. C) A police car moves forward and collides with a wall. An engine, a siren, and a collision sound are playing in the process. D) A poor cow is abandoned on an island by a UFO. An ambulance then takes the injured cow to the hospital. The sounds of the cow, the UFO, and the ambulance are playing in the process.
124
+
125
+ ### 4.2 Spatial Mapping
126
+
127
+ We mark the locations of physical objects in order to achieve interaction with the physical world. Using the built-in spatial mapping function of the ZED Mini, we scan the real-world environment and model it as a 3D triangle mesh. After the system stores the mesh of the real-world environment, we make it invisible to the users.
128
+
129
+ ### 4.3 Sketch Recognition
130
+
131
+ In order to achieve the "enchantment" of EnchantedBrush, we implement automatic sketch recognition for our system through a sketch classification neural network. Due to the lack of publicly available datasets of multi-category $3\mathrm{D}$ sketches, we train the neural network on a 2D sketch dataset, the Quick Draw Dataset from Google ${}^{1}$. We then convert each 3D sketch into a 2D sketch before passing it to the sketch recognition network. To achieve this, we first project the $3\mathrm{D}$ points onto a $2\mathrm{D}$ best-fitting plane, then render the projected points into a normalized image. The neural network returns two candidate predictions according to its confidence in the accuracy of recognition. Figure 6 demonstrates this sketch recognition process. Once we obtain the recognition results from the neural network, we display the two results to the users and ask them to select the desired one.
132
+
133
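+ A minimal sketch of this projection and rasterization step (PCA via SVD for the best-fitting plane; the names and bitmap size are our own assumptions):
+
+ ```python
+ import numpy as np
+
+ def flatten_strokes(points):
+     """Project 3D stroke points (n, 3) onto their best-fitting plane and
+     return normalized 2D coordinates in [0, 1]^2."""
+     centered = points - points.mean(axis=0)
+     _, _, vt = np.linalg.svd(centered, full_matrices=False)
+     xy = centered @ vt[:2].T   # top two principal axes span the plane
+     xy -= xy.min(axis=0)
+     return xy / max(xy.max(), 1e-9)
+
+ def rasterize(xy, size=28):
+     """Render normalized 2D points into a size x size bitmap for the
+     sketch classifier."""
+     img = np.zeros((size, size), dtype=np.float32)
+     px = np.clip((xy * (size - 1)).astype(int), 0, size - 1)
+     img[px[:, 1], px[:, 0]] = 1.0
+     return img
+ ```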
+ ### 4.4 Automatic Object Behaviors
134
+
135
+ If users do not provide a customized trajectory, the sketched element is animated following its automatic behavior. A question arises here: how do we decide what the automatic behavior of a given object should be? Before answering, we need to define what "automatic behavior" is. Within the scope of EnchantedBrush, we define the automatic behavior of an object to be the common motion widely associated with that object. For example, the automatic behavior of a ball would be bouncing, while for a car, the automatic behavior would be running forward. To enhance scalability, we experimented with the language model Generative Pre-trained Transformer 3 (GPT-3), developed by OpenAI [5], to generate a motion verb for an arbitrary object and use the verb as its automatic behavior. Once we obtain a common verb associated with the given object, we transform the verb into an animation effect. The translation from a verb to a motion path is implemented by changing the position vector of the object in Unity. Taking an airplane as an example, we first get the verb fly from GPT-3, then transform it into a flying behavior: we define the flying motion as a circular path and a rotation around the y-axis in degrees per unit time.
136
+
137
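+ A minimal sketch of the verb-to-motion translation for the airplane example (the lookup table and parameter values are illustrative, not the system's actual ones):
+
+ ```python
+ import numpy as np
+
+ def circular_fly_path(center, radius=0.5, deg_per_sec=45.0):
+     """Translate the verb 'fly' into a circular path around a center
+     point, parameterized by a rotation rate about the y-axis."""
+     def position(t):
+         a = np.radians(deg_per_sec) * t
+         return center + radius * np.array([np.cos(a), 0.0, np.sin(a)])
+     return position
+
+ # one entry per motion verb returned for a recognized category
+ BEHAVIOURS = {"fly": circular_fly_path}
+ ```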
+ Upon recognition of a sketch, the corresponding sound effects are automatically assigned to the sketched element. We use locally pre-downloaded audio resources in the prototype. In detail, for each supported object category, we downloaded self, movement, and collision sounds. When the system is notified of the object identity, it retrieves the sound effects locally and assigns them to the sketched element.
138
+
139
+ ## 5 EVALUATION
140
+
141
+ We conducted an exploratory user study with both general users and professional artists. The first goal was to evaluate the usability and interaction techniques of our prototype; the second was to identify limitations and potential applications of our interface.
142
+
143
+ ### 5.1 Participants
144
+
145
+ We recruited 12 participants (eight females) aged 23 to 34 to evaluate our system. Three participants are professional artists (P7, P11, and P12). Participant P7 has eight years of professional experience in illustration and sketching, P11 has 11 years of experience in architecture and urban design, and P12 has six years of experience in animation and design. The other participants have little to good experience with sketching and storytelling and minor to moderate experience with animation.
146
+
147
+ ---
148
+
149
+ ${}^{1}$ The Quick Draw Dataset: https://github.com/ googlecreativelab/quickdraw-dataset
150
+
151
+ ---
152
+
153
+ Table 1: Results for the questionnaires on user preferences (Median and Interquartile Range).
154
+
155
+ <table><tr><td>Statements</td><td/><td>Median (IQR)</td></tr><tr><td colspan="3">I felt...</td></tr><tr><td>1. it was easy to create a story.</td><td/><td>9 (3)</td></tr><tr><td>2. it was easy to sketch an object.</td><td/><td>7.5 (2.25)</td></tr><tr><td>3. it was easy to trigger a sound.</td><td/><td>10 (1.25)</td></tr><tr><td>4. it was easy to create a trajectory.</td><td/><td>10 (1)</td></tr><tr><td>5. it was easy to start a story using motion lines.</td><td/><td>9 (1.25)</td></tr><tr><td>6. the interface was simple and easy to use.</td><td/><td>10 (1.25)</td></tr><tr><td>7. the real-world environment supported the story.</td><td/><td>9 (1.25)</td></tr><tr><td>8. the EnchantedBrush made storytelling easy.</td><td/><td>9 (1)</td></tr><tr><td>9. the EnchantedBrush made animation authoring easy.</td><td/><td>9 (2.25)</td></tr></table>
156
+
157
+ ### 5.2 Tasks
158
+
159
+ Participants were given four tasks to work on using EnchantedBrush. The first three tasks were simple animated scenes which aimed to familiarize participants with the system's main features. The last task was to recreate a short story using EnchantedBrush once participants felt confident about using the system. The four tasks are the following:
160
+
161
+ Task 1. (Figure 7A): participants were required to draw a bouncing basketball on the top of a physical table. The basketball was animated using its automatic behavior. Participants would hear the auto-played bouncing sound. Sketch Brush and Motion Brush were used in this task.
162
+
163
+ Task 2. (Figure 7B): participants were required to draw an airplane that flies around. This task used the automatic behavior to animate the airplane element. Participants would hear the jet engine sound while the airplane was flying. Similar to Task 1, Sketch Brush and Motion Brush were used in this task.
164
+
165
+ Task 3. (Figure 7C): participants were asked to create a car crash story. A physical curtain replaced the wall that the car collides with. Sketch Brush, Motion Brush, Path Brush, and Sound Brush were all used in this task.
166
+
167
+ Task 4. (Figure 7D): participants were asked to create a more complicated short story. One day, a UFO flew over an island, and as it passed over the island, it abandoned a poor cow that had been kidnapped. Fortunately, there was an animal hospital on this island. An ambulance arrived in time for the injured cow and took him to the hospital. In addition to the four brushes, Sketch Brush, Motion Brush, Path Brush, and Sound Brush, Selection was also used to select the UFO so that the user could sketch the cow as a dependency of the UFO. Existing physical objects, such as a table and flowers, were used as the island and the animal hospital, respectively. Thus, participants could focus on the main story plot, saving their time making models of islands or hospitals.
168
+
169
+ ### 5.3 Procedure
170
+
171
+ The study was conducted in our lab. Pieces of furniture and objects, including a table, a shelf, and a flower, were used to set up the study space, so that participants could use them while creating a storyboard. The example storyboard for each task (Figure 7A-D) was displayed on a separate monitor, and participants could refer to them whenever needed.
172
+
173
+ Our evaluation session consisted of three steps. First, participants filled out a background questionnaire. The facilitator then introduced the user interface and core concepts and demonstrated the functionalities of each brush by going through Task 1 to Task 3. Participants then took time to familiarize themselves with the interface and interaction methods. In the second step, participants were asked to perform the four tasks independently. The facilitator provided light guidance if the participant had trouble using the tool. There was no time limit on task completion, so participants could pay full attention to the tool's usability without pressure on completing a task on time. Finally, participants were asked to fill out a usability questionnaire and participate in an interview discussion to collect more in-depth insights about our system. Sessions lasted approximately 60 minutes, and participants were compensated with 20 CAD.
174
+
175
+ ## 6 Results and Discussion
176
+
177
+ Our evaluation suggested that participants enjoyed using EnchantedBrush to create storyboards and animate their ideas. Participants appreciated the simplicity of the interface and the ease of animation authoring. Beyond system usability, we also analyzed the potential applications of EnchantedBrush according to our discussions with the participants. Figure 8 shows photos taken during our user study, and Figure 9 shows four sample results produced by our participants.
178
+
179
+ ### 6.1 Quantitative Metrics
180
+
181
+ The results of the usability metrics are summarized in Table 1. We used a set of 1-10 Likert-scale questions for the measurement (1 $=$ strongly disagree to ${10} =$ strongly agree). Overall, participants found it easy to use EnchantedBrush for creating a story (Q1). They were satisfied with the interaction techniques, including triggering a sound effect (Q3), making an object follow a given path (Q4), and using motion lines to animate an object (Q5). At the same time, we found the ease of sketching an object was rated lower than the other features (Q2). In discussions, participants suggested they were not used to the precision and control over sketch strokes required when drawing in 3D space. This finding echoes the discussions in previous works $\left\lbrack {2,3,{24}}\right\rbrack$ that drawing accurately in the air is a general challenge for humans due to ergonomic limitations. In terms of the user interface, participants all agreed on the simplicity of the interface and suggested that it was easy and straightforward to use (Q6). Participants felt that interacting with the real-world environment made it easy to tell a story (Q7). All participants were confident that EnchantedBrush made storytelling and animation authoring easier (Q8, Q9).
182
+
183
+ ![01963e03-f9f4-771e-b91d-b05a07ee8cc9_6_150_148_719_271_0.jpg](images/01963e03-f9f4-771e-b91d-b05a07ee8cc9_6_150_148_719_271_0.jpg)
184
+
185
+ Figure 8: Participants working on the tasks.
186
+
187
+ The questionnaire results demonstrated the usability of our tool and showed that EnchantedBrush provides users with an easy sketch-based interface to create storyboards and communicate their ideas. Sound components, motion lines, customized trajectories, and interactive mixed-reality environments were appreciated for their power and effectiveness.
188
+
189
+ ### 6.2 Qualitative Feedback
190
+
191
+ We conducted guided interviews and open-ended discussions with participants and collected qualitative feedback and insights.
192
+
193
+ Overall, participants showed great excitement about our tool and how they could create stories in the real world. For example, P5, an amateur artist who studied animation for two years and has about ten years of sketching experience, appreciated being able to use physical objects for animation, which freed them from modeling objects. P12, a professional animator, also liked the mixed-reality environment and pointed out that seeing the physical world contributed to the magical feel of the tool.
194
+
195
+ P5: "Normally, when I animate things, it's all digital. If I want a wall for the car to crash into, I have to make the wall. I have to put it in the right spot and figure out how to make it part of the story. But since the wall is already there, I think using it as part of the story is clever. Same with the table. I felt like if you're animating, you need to figure out the contact and the rigid body if you're making a ball down, so it's cool that it already knew how to use the table."
196
+
197
+ Professional artists (P7, P11, P12) commented that EnchantedBrush provided a simple and interactive interface and believed that it made dynamic planning, presenting ideas to a team, and communication more effective.
198
+
199
+ P7 (pro): "I had a lot of fun using it [...] It'll be so easy to draw a thing that's interactive, and your whole team can see it and brainstorm a lot easier [...] You are being able to do a dynamic plan and brainstorming with your whole story-boarding team. I think it'll make the connection between the authors (people making up the story) and people making it come to reality... And it'll be easier, quicker, more organic."
200
+
201
+ P12 (pro): "I think that it was quite amusing and inspiring to see things that you draw come up to life [...] You can test certain scenarios that you're trying to build for [such as] animation, like just straight out very quickly, and see how it's actually like, without any dialogue or any other function. Just by the movement, you can tell a story."
202
+
203
+ They also expressed strong interest in having such a tool in their career work.
204
+
205
+ P11 (pro): "It was very easy and fun to use. I sort of wish I had [this when] I worked in After Effects as an architectural designer. It would be amazing if I had something similar to this for the purpose of presenting initial animation ideas [...] Having a tool like this would really help the concept formation, or the storyboard formation phase [and] will save everyone a lot of time on work. [An example is like] I want you to show the buildings in relationship to the public space [...] I would spend a week or two weeks working with other people who were doing the modeling, trying to assemble the thing, but then when we showed it to our supervisor, it was not what he wanted, so we had to do it all over again [...] What to show, what elements to show, and what components to show at what angle is really important, so I see work like this is a really good future in saving people's lives about not doing repetitive work."
206
+
207
+ ![01963e03-f9f4-771e-b91d-b05a07ee8cc9_6_926_149_719_454_0.jpg](images/01963e03-f9f4-771e-b91d-b05a07ee8cc9_6_926_149_719_454_0.jpg)
208
+
209
+ Figure 9: Sample results created by participants in the task sessions. From left to right and from top to bottom are: a bouncing basketball (Task 1), an airplane (Task 2), a car with a siren sound (Task 3), and an abandoned cow (Task 4).
210
+
211
+ ### 6.3 Potential Applications
212
+
213
+ All the participants believed in the potential benefits of EnchantedBrush for different applications and use scenarios. The animation industry is one field our participants mentioned. "In the animation industry, you need to actually draw every frame. But in this case, even if you don't draw every frame, it sort of copies it (i.e. frames) over and has a moving effect, so I think I can see some potential." (P1). Seven (out of 12) participants stated that such a tool would be appealing to children, so it could be useful in educational settings such as student engagement, concept demonstration, and classroom teaching. P3 pointed out that EnchantedBrush's quick storytelling could help teachers reproduce a story scenario for children, so that children could not only comprehend the story from oral descriptions but also experience it themselves, enhancing their understanding and creativity through visualization. P3 also suggested that children could learn about different physical properties (such as gravity, materials, and sounds) of an object using EnchantedBrush, based on the interactions and visual effects between virtual objects and real-world environments. In addition, P11 commented that EnchantedBrush would be useful in social media and video making. "When people go on Instagram live or TikTok live, and they tell a story, and then people get bored about just listening to them talking about the story and showing their face without anything going on. I see this thing as like when you listen to profs giving lectures, sometimes you get bored. That's why they draw stuff on the blackboard and why they used animation slides [...] But if they could use your tool and actually draw out the story, I'm pretty sure you will attract so many more listeners." (P11).
214
+
215
+ ### 6.4 Sketch Recognition Performance
216
+
217
+ The sketch recognition model was trained on 2D drawings, so the accuracy of recognizing 3D sketches is not perfect. The overall accuracy rate of 3D sketch recognition in our study is 68% on average. We also analyzed the performance per task: Task 1 has an accuracy rate of 100%, Task 2 and Task 3 each have 58.3%, and Task 4 has 63.89%. The model worked perfectly at recognizing a basketball, but the accuracy for other objects varies significantly from person to person due to differing sketching skills and styles and the distortion introduced when converting 3D sketches to 2D. Since sketch recognition itself is not one of our main focuses, and to let users better evaluate our interaction approach, false recognition results were corrected manually during the sessions.
218
+
219
+ ## 7 Limitations and Future Work
220
+
221
+ Although we demonstrated that EnchantedBrush enables users to create storyboards and animate ideas easily, there are a few limitations along with opportunities for future work and research.
222
+
223
+ In our current prototype, the sketch recognition neural network is trained on a 2D sketch dataset, which limits the sketch recognition accuracy of our system. With the emergence of large-scale datasets of multi-category 3D sketches in the future, we believe recognition accuracy could be improved. Meanwhile, the current sketched elements are relatively flat, as sketching 3D-shaped objects in mid-air is generally challenging for users. The MR design space could be augmented so users could sketch 3D-shaped elements more easily. Moreover, the number of object categories our current prototype supports is small, since we focused on validating our concepts and assessing the usability of the implemented features. This restricts users to limited storytelling examples without freeform exploration. Expanding the set of object categories could support users' creativity and further leverage the interaction techniques of EnchantedBrush. Lastly, the retrieval of sound effects could be expanded to a text-based audio retrieval method from the internet rather than pre-downloaded audio files.
224
+
225
+ In our current design, we focused on helping users easily control the animation of the sketched elements. It would also be interesting to support the deformation of sketched elements according to their unique physical properties or materials in reaction to contacts and collisions. One possible direction is to explore how to assist users in creating this casual physics-based deformation in MR, which could allow users to achieve more realistic scenes.
226
+
227
+ Another research direction is to further automate our proposed interaction techniques by automatically inferring users' intended animation effects. In our approach, we designed various brushes for motion and audio effects. Future work could leverage human knowledge and more of the visual language of comics design and storytelling to help machines understand users' design intent. This could save users from the explicit use of brushes and thus lead to a more powerful, free-form tool for animation authoring.
228
+
229
+ ## 8 Conclusion
230
+
231
+ We present EnchantedBrush, a novel mixed-reality sketching interface for animating storyboards in real-world environments with automatic sound effects. We propose a mixed-initiative interaction paradigm for motion and sound effects based on the semantic nature of sketched elements, which fills a gap in existing work. The proposed paradigm allows users to quickly create visualizations and storyboards without spending intensive time on manual effect specification. EnchantedBrush also allows storyboards to interact with the physical surroundings, which simplifies the creation process. A user study demonstrates the usability and effectiveness of our system, and user feedback suggests a variety of potential applications of our approach.
232
+
233
+ ## References
234
+
235
+ [1] J. Amores and J. Lanier. Holoart: Painting with holograms in mixed reality. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA '17, p. 421-424. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3027063.3050435
236
+
237
+ [2] R. Arora, R. H. Kazi, T. Grossman, G. Fitzmaurice, and K. Singh. SymbiosisSketch: Combining 2D & 3D sketching for designing detailed 3D objects in situ. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, p. 1-15. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3173574.3173759
240
+
241
+ [3] R. Arora, R. H. Kazi, F. Anderson, T. Grossman, K. Singh, and G. Fitzmaurice. Experimental evaluation of sketching on surfaces in VR. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, p. 5643-5654. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3025453.3025474
244
+
245
+ [4] R. Arora, R. H. Kazi, D. M. Kaufman, W. Li, and K. Singh. Magical-hands: Mid-air hand gestures for animating in vr. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, UIST '19, p. 463-477. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3332165.3347942
246
+
247
+ [5] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners. In Advances in Neural Information Processing Systems, 2020.
248
+
249
+ [6] J. Chen, S. Izadi, and A. Fitzgibbon. KinÊtre: Animating the World with the Human Body, p. 435-444. Association for Computing Machinery, New York, NY, USA, 2012.
250
+
251
+ [7] R. C. Davis, B. Colwell, and J. A. Landay. K-sketch: A 'kinetic' sketch pad for novice animators. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '08, p. 413-422. Association for Computing Machinery, New York, NY, USA, 2008. doi: 10.1145/1357054.1357122
252
+
253
+ [8] T. Drey, J. Gugenheimer, J. Karlbauer, M. Milo, and E. Rukzio. VRSketchIn: Exploring the Design Space of Pen and Tablet Interaction for 3D Sketching in Virtual Reality, p. 1-14. Association for Computing Machinery, New York, NY, USA, 2020.
254
+
255
+ [9] E. Gambaretto and C. Piña. Real-time animation of cartoon character faces. In ACM SIGGRAPH 2014 Computer Animation Festival, SIGGRAPH '14, p. 1. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2633956.2658830
256
+
257
+ [10] D. Gasques, J. G. Johnson, T. Sharkey, and N. Weibel. What you sketch is what you get: Quick and easy augmented reality prototyping with pintar. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, CHI EA '19, p. 1-6. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290607.3312847
258
+
259
+ [11] D. Giunchi, S. James, and A. Steed. 3d sketching for interactive model retrieval in virtual reality. In Proceedings of the Joint Symposium on Computational Aesthetics and Sketch-Based Interfaces and Modeling and Non-Photorealistic Animation and Rendering, Expressive '18. Association for Computing Machinery, New York, NY, USA, 2018. doi: 10.1145/3229147.3229166
260
+
261
+ [12] O. Glauser, W.-C. Ma, D. Panozzo, A. Jacobson, O. Hilliges, and O. Sorkine-Hornung. Rig animation with a tangible and modular input device. ACM Trans. Graph., 35(4), jul 2016. doi: 10.1145/2897824.2925909
262
+
263
+ [13] M. Guay, R. Ronfard, M. Gleicher, and M.-P. Cani. Space-time sketching of character animation. ACM Trans. Graph., 34(4), jul 2015. doi: 10.1145/2766893
264
+
265
+ [14] B. Hartmann, S. Doorley, S. Kim, and P. Vora. Wizard of oz sketch animation for experience prototyping. Adjunct Proceedings of UbiComp, 2006.
266
+
267
+ [15] R. Held, A. Gupta, B. Curless, and M. Agrawala. 3D Puppetry: A Kinect-Based Interface for 3D Animation, p. 423-434. Association for Computing Machinery, New York, NY, USA, 2012.
268
+
269
+ [16] J. Hwang, K. Kim, I. H. Suh, and T. Kwon. Performance-based animation using constraints for virtual object manipulation. IEEE Computer Graphics and Applications, 37(4):95-102, 2017. doi: 10.1109/MCG. 2017.3271455
270
+
271
+ [17] Adobe Inc. Medium. https://www.adobe.com/ca/products/medium.html, 2016.
272
+
273
+ [18] Google Inc. Tilt Brush, 2016.
274
+
275
+ [19] H. Kaimoto, K. Monteiro, M. Faridan, J. Li, S. Farajian, Y. Kakehi, K. Nakagaki, and R. Suzuki. Sketched reality: Sketching bi-directional interactions between virtual and physical worlds with AR and actuated tangible UI. In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, UIST '22. Association for Computing Machinery, New York, NY, USA, 2022. doi: 10.1145/3526113.3545626
278
+
279
+ [20] R. H. Kazi, F. Chevalier, T. Grossman, and G. Fitzmaurice. Kitty: Sketching dynamic and interactive illustrations. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, UIST '14, p. 395-405. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2642918.2647375
280
+
281
+ [21] R. H. Kazi, F. Chevalier, T. Grossman, S. Zhao, and G. Fitzmaurice. Draco: Bringing life to illustrations with kinetic textures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '14, p. 351-360. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10.1145/2556288.2556987
282
+
283
+ [22] R. H. Kazi, T. Grossman, N. Umetani, and G. Fitzmaurice. Motion amplifiers: Sketching dynamic illustrations using the principles of 2D animation. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, p. 4599-4609. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2858036.2858386
284
+
285
+ [23] R. H. Kazi, T. Igarashi, S. Zhao, and R. Davis. Vignette: Interactive texture design and manipulation with freeform gestures for pen-and-ink illustration. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12, p. 1727-1736. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10.1145/2207676.2208302
286
+
287
+ [24] D. Keefe, R. Zeleznik, and D. Laidlaw. Drawing on air: Input techniques for controlled 3d line illustration. IEEE Transactions on Visualization and Computer Graphics, 13(5):1067-1081, 2007. doi: 10.1109/TVCG.2007.1060
288
+
289
+ [25] D. F. Keefe, D. A. Feliz, T. Moscovich, D. H. Laidlaw, and J. J. LaViola. CavePainting: A fully immersive 3D artistic medium and interactive experience. In Proceedings of the 2001 Symposium on Interactive 3D Graphics, I3D '01, p. 85-93. Association for Computing Machinery, New York, NY, USA, 2001. doi: 10.1145/364338.364370
290
+
291
+ [26] T. R. Kelley. Design sketching: A lost skill: It is not enough to have students know how to create design sketches but to also know the purpose of sketching in design. Technology and Engineering Teacher, 76(8), 2017.
292
+
293
+ [27] K. C. Kwan and H. Fu. Mobi3dsketch: 3d sketching in mobile ar. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, p. 1-11. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290605.3300406
294
+
295
+ [28] B. Lee, R. H. Kazi, and G. Smith. Sketchstory: Telling more engaging stories with data through freeform sketching. IEEE Transactions on Visualization and Computer Graphics, 19(12):2416-2425, 2013. doi: 10.1109/TVCG.2013.191
296
+
297
+ [29] M. Lewis. Sketching from the imagination: Storytelling. 2021.
298
+
299
+ [30] Y. Li, X. Luo, Y. Zheng, P. Xu, and H. Fu. Sweepcanvas: Sketch-based 3d prototyping on an rgb-d image. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, UIST '17, p. 387-399. Association for Computing Machinery, New York, NY, USA, 2017. doi: 10.1145/3126594.3126611
300
+
301
+ [31] Smoothstep LLC. Quill. https://quill.art/, 2021.
302
+
303
+ [32] J. Ma, L.-Y. Wei, and R. H. Kazi. A layered authoring tool for stylized 3d animations. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI '22. Association for Computing Machinery, New York, NY, USA, 2022. doi: 10.1145/3491102.3501894
304
+
305
+ [33] S. McCloud. Understanding Comics: The Invisible Art. William Morrow, 1993.
306
+
307
+ [34] G. Nishida, I. Garcia-Dorado, D. G. Aliaga, B. Benes, and A. Bousseau. Interactive sketching of urban procedural models. ACM Transactions on Graphics (TOG), 35(4):1-11, 2016.
308
+
309
+ [35] NVRMIND. Animvr. https://nvrmind.io, 2018.
310
+
311
+ [36] S. Özker and E. S. Makakli. Importance of sketching in the design process and education. The Online Journal of Science and Technology, 7(2):73, 2017.
312
+
313
+ [37] K. Perlin, Z. He, and F. Zhu. Chalktalk VR/AR. In K. Zhu, A. Lugmayr, and X. Ma, eds., 10th International Workshop on Semantic Ambient Media Experiences (SAME 2017), pp. 30-31. International Ambient Media Association (iAMEA), 2017.
316
+
317
+ [38] P. Åkerman, A. Puikkonen, P. Huuskonen, A. Virolainen, and J. Häkkilä. Sketching with strangers: In the wild study of ad hoc social communication by drawing. In Proceedings of the 12th ACM International Conference on Ubiquitous Computing, UbiComp '10, p. 193-202. Association for Computing Machinery, New York, NY, USA, 2010. doi: 10.1145/1864349.1864390
318
+
319
+ [39] N. Saquib, R. H. Kazi, L.-Y. Wei, and W. Li. Interactive body-driven graphics for augmented video performance. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, p. 1-12. Association for Computing Machinery, New York, NY, USA, 2019. doi: 10.1145/3290605.3300852
320
+
321
+ [40] J. Scott and R. Davis. Physink: Sketching physical behavior. In Proceedings of the Adjunct Publication of the 26th Annual ACM Symposium on User Interface Software and Technology, UIST '13 Adjunct, p. 9-10. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2508468.2514930
322
+
323
+ [41] Masterpiece Studio. MasterpieceVR. https://masterpiecestudio.com/, 2017.
324
+
325
+ [42] R. Suzuki, R. H. Kazi, L.-Y. Wei, S. DiVerdi, W. Li, and D. Leithinger. Realitysketch: Embedding responsive graphics and visualizations in ar with dynamic sketching. In Adjunct Publication of the 33rd Annual ACM Symposium on User Interface Software and Technology, UIST '20 Adjunct, p. 135-138. Association for Computing Machinery, New York, NY, USA, 2020. doi: 10.1145/3379350.3416155
326
+
327
+ [43] M. Thorne, D. Burke, and M. van de Panne. Motion doodles: An interface for sketching character motion. ACM Trans. Graph., 23(3):424-431, aug 2004. doi: 10.1145/1015706.1015740
328
+
329
+ [44] R. van der Lugt. How sketching can affect the idea generation process in design group meetings. Design Studies, 26(2):101-122, 2005. doi: 10.1016/j.destud.2004.08.003
330
+
331
+ [45] I. Verstijnen, C. van Leeuwen, G. Goldschmidt, R. Hamel, and J. Hennessey. Sketching and creative discovery. Design Studies, 19(4):519-546, 1998. doi: 10.1016/S0142-694X(98)00017-9
332
+
333
+ [46] B. Victor. Drawing dynamic visualizations, 2013.
334
+
335
+ [47] T. VR. Tvori. http://tvori.co, 2016.
336
+
337
+ [48] J. Xing, R. H. Kazi, T. Grossman, L.-Y. Wei, J. Stam, and G. Fitzmaurice. Energy-brushes: Interactive tools for illustrating stylized elemental dynamics. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, UIST '16, p. 755-766. Association for Computing Machinery, New York, NY, USA, 2016. doi: 10.1145/2984511.2984585
338
+
339
+ [49] Y. Zhang, T. Han, Z. Ren, N. Umetani, X. Tong, Y. Liu, T. Shiratori, and X. Cao. Bodyavatar: Creating freeform 3d avatars using first-person body gestures. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, UIST '13, p. 387-396. Association for Computing Machinery, New York, NY, USA, 2013. doi: 10.1145/2501988.2502015
340
+
341
+ [50] B. Zhu, M. Iwata, R. Haraguchi, T. Ashihara, N. Umetani, T. Igarashi, and K. Nakazawa. Sketch-based dynamic illustration of fluid systems. ACM Trans. Graph., 30(6):1-8, dec 2011. doi: 10.1145/2070781.2024168
papers/Graphics_Interface/Graphics_Interface 2023/Graphics_Interface 2023 Conference_SD/yWIplfQfx8/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,261 @@
1
+ § ENCHANTEDBRUSH: ANIMATING IN MIXED REALITY FOR STORYTELLING AND COMMUNICATION
2
+
3
4
+
5
+ Figure 1: EnchantedBrush is a mixed-reality sketching interface for creating animated storyboards that can interact with the physical surroundings. It supports easy storytelling as if using a magical paintbrush to A) draw a vehicle and B) give it motion lines so that C) the vehicle comes to life with automatic motion and sound effects (the black represents elements in the real world, and the blue represents elements in the virtual world). This animation can be achieved using D) a set of interactive motion and sound brushes designed in a mixed-initiative interaction paradigm.
6
+
7
+ § ABSTRACT
8
+
9
+ Recent progress in head-worn 3D displays has made mixed-reality storytelling, which allows digital art to interact with the physical surroundings, a new and promising medium to visualize ideas and bring sketches to life. While previous works have introduced dynamic sketching and animation in 3D spaces, visual and audio effects typically need to be manually specified. We present EnchantedBrush, a novel mixed-reality approach for creating animated storyboards with automatic motion and sound effects in real-world environments. People can create animations that interact with their physical surroundings using a set of interactive motion and sound brushes. We evaluated our approach with 12 participants, including professional artists. The results suggest that EnchantedBrush facilitates storytelling and communication, and utilizing the physical environment eases animation authoring and simplifies story creation.
10
+
11
+ Index Terms: Human-centered computing-Human computer interaction (HCI)—Interaction paradigms—Mixed / augmented reality; Human-centered computing-Interaction design-Interaction design theory, concepts and paradigms
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Sketching, a ubiquitous communication medium for visual thinking, plays a critical role in storytelling, idea expression, and creativity [26, 29, 36, 38, 44, 45]. For more accessible animation authoring and expressive power, human-computer interaction researchers have explored dynamic sketching and interactive animation over the decades in broad-ranging fields such as arts and storyboards [20-22], prototyping [14, 34], data visualization [28, 46], and education [40, 50]. However, these works do not consider interaction with the physical surroundings.
16
+
17
+ Mixed Reality (MR) technologies further introduced new possibilities into dynamic sketching and interactive animation by expanding the domain of interactions. The painting canvas is no longer constrained to an electronic screen but becomes a broader canvas in the form of a freeform real-world environment. This freedom has inspired numerous Virtual and Augmented Reality (VR/AR) works and commercial products with sketch-based interaction [2, 8, 19, 25, 27, 31, 35, 42]. However, these works either lack support for animation and sound effects or require manual effort to specify such effects, which can be time-consuming and redundant when creating a story. Moreover, previous works do not exploit interaction with the physical surroundings as part of sketching or storytelling, which limits the use of physical objects in communication.
18
+
19
+ In this work, we present EnchantedBrush, a novel MR interface for dynamic sketching and interactive storyboards (Figure 1). Our approach uses a mixed-initiative interaction paradigm - an automatic mode and a customized mode - to assist users in designing visual and sound effects. Under the automatic mode, sketched elements are animated automatically with lively sounds based on their semantic nature. Under the customized mode, users manually define animation and add sounds. This lets users quickly create visualizations and storyboards without spending intensive time on manual effect specification. Under both modes, virtual sketched elements can interact with the physical surroundings and generate appropriate sound effects, saving users time by freeing them from preparing object models and setting up backgrounds.
20
+
21
+ Our approach works as follows: 1) element sketching: users design their storyboards based on the physical surroundings and then sketch elements in mid-air; 2) sketch recognition: our system adopts a sketch recognition model to recognize the sketched elements and provides the top two recognition results for users to choose from; 3) motion and sound design: our system automatically adds motion and sound effects to the sketched elements according to their semantic nature, and, beyond the automatic behavior, users can customize the animation by defining motion paths and sound effects themselves; 4) animation: users animate the sketched storyboard and make it interact with the real-world environment.
22
+
23
+ We evaluated our approach with 12 participants, including three professional artists. The results show the usability of EnchantedBrush for easy storyboard creation and effective communication. We also discuss potential application scenarios based on the interviews with the participants.
24
+
25
+ To summarize, our contributions are:
26
+
27
+ * EnchantedBrush, a sketch-based interface in mixed reality for creating animated storyboards that can interact with the physical surroundings
28
+
29
+ * A mixed-initiative interaction paradigm for visual and audio design, including the automatic mode and the customized mode
30
+
31
+ * User evaluation of EnchantedBrush and a set of its potential application scenarios, such as the animation industry and teaching children
32
+
33
+ § 2 RELATED WORK
34
+
35
+ Our work relates to prior research in 2D dynamic sketching interfaces, 3D animation authoring tools, and sketch-based interactions in VR/AR.
36
+
37
+ § 2.1 DYNAMIC SKETCHING INTERFACES IN 2D
38
+
39
+ Traditional 2D sketching applications have been well investigated and developed. Various dynamic sketching tools cover different features for animation authoring. K-Sketch [7] introduces a pen-based system to create simple prototypes of animations. Kazi et al. build Draco [21] and Kitty [20], adding kinetic textures to a collection of objects for continuous animation effects and allowing users to customize different functional relationships between object entities for interactive illustrations. Energy-Brushes [48] investigates the role of flow particles in stylized animations to create fundamental dynamics. Motion Amplifiers [22] turns the principles of 2D animation into a set of amplifiers that users can apply to animate illustrations. Sketching can also allow for natural interactions during whiteboard presentations and visualization, as demonstrated by SketchStory [28], and in fluid systems, as in the illustration work by Zhu et al. [50]. Moreover, Motion Doodles [43] shows how sketching can animate the motion of characters, while Vignette [23] focuses on texture creation using interactive sketch gestures. These 2D works and interfaces allow people to exploit the power and nature of sketching to create animation and texture, and they establish the foundational applications of sketching. Our work keeps the core design idea of a sketching tool but extends it further, implementing the system in mixed reality so that sketches can interact with physical environments.
40
+
41
42
+
43
+ Figure 2: Motion and sound examples in storyboards, illustrations, and icons: (A) The motion lines behind the car indicate the car is running. (B) The path lines of the ball illustrate its movement trajectory. (C) The sound lines for the person show they are speaking.
44
+
45
+ § 2.2 ANIMATION AUTHORING TOOLS IN 3D
46
+
47
+ Animation tools in 3D add one more dimension of possibility and contribute to more expressive and lively graphics. For instance, human bodies can be used to animate static 3D meshes and craft interactions with graphical elements [6, 39, 49]. 3D Puppetry [15] provides a real-time tool for 3D model animation using physical objects as input. As for character motion, Gambaretto and Piña [9] investigate facial expressions as a way to author character animation, while Glauser et al. [12] explore a tangible and modular input device for fluid rig manipulation. Guay et al. [13] study a stroke method using a space-time curve to match 3D character motion based on 2D lines. Recently, Ma et al. [32] proposed stylized 3D animations in a layered authoring interface. However, these applications are still traditional screen-based interfaces. HCI researchers have further studied the potential of interactive experiences in VR. Hwang et al. [16] propose a performance-based animation system for virtual object manipulation. Another example is MagicalHands [4], which achieves animation authoring in VR using mid-air hand gestures. While these projects take the generation of animation to a new level, they rely on body-motion interaction rather than sketching interaction. We focus on how 3D sketching in mixed reality can enhance users' ability for storytelling and idea visualization.
48
+
49
+ § 2.3 SKETCH-BASED USER INTERFACES IN VR AND AR
50
+
51
+ Sketching in 3D is becoming an increasingly popular medium, not only in pure screen-based interaction but also in MR applications. There are a number of commercial sketching and animation products in VR/AR, such as Tilt Brush [18], Medium [17], AnimVR [35], Quill [31], MasterpieceVR [41], and Tvori [47]. These systems allow people to create virtual sketches in a free manner or provide an interface for animating drawings. However, the interfaces of these consumer tools are not friendly to non-experts and require a learning curve. HCI researchers have been working on various system designs to find better solutions for sketch-based interfaces. For example, HoloARt [1] investigates how people can paint in the air via finger gestures. VRSketchIn [8] uses interchangeable 2D and 3D sketching techniques to create artifacts in VR. Other sketch-based 3D painting and modeling tools, such as CavePainting [25], SweepCanvas [30], systems for model retrieval [11], Mobi3DSketch [27], and ChalkTalk [37], all provide users with a designed system to work in a virtual 3D scene. Immersive sketching tools including SymbiosisSketch [2] and PintAR [10] can create sketched elements that spatially integrate with the real world. RealitySketch [42] provides an AR interface for embedding dynamic and responsive visualizations within the physical environment. Sketched Reality [19] introduces a bi-directional sketching interaction in which virtual objects and physical robots affect each other through AR sketches and tangible interfaces.
52
+
53
+ While these works address interactive sketching and design using different approaches, users need to manually specify the visual and audio effects at each step of the animation. Existing works do not take advantage of the semantic properties of sketched elements in the authoring process, and most do not support real-time interaction with the physical environment. In contrast, EnchantedBrush investigates a mixed-initiative interaction paradigm and uses the recognition of sketched elements to fill this unexplored area. With our approach, users can create interactive animation in a real-world setting with expressive sound effects. Our contribution is a novel interaction approach in mixed reality using interactive motion and sound brushes for animating a storyboard.
54
+
55
56
+
57
+ Figure 3: Interaction techniques. A) Drawing an element. B) Drawing a path. C) Drawing motion lines. D) Drawing sounds. E) Selecting and creating a dependency relationship. F) Interacting with physical surroundings.
58
+
59
+ § 3 ENCHANTEDBRUSH
60
+
61
+ The goal of EnchantedBrush is to allow people to visualize their ideas quickly and easily in an MR environment. To this end, we studied the visual grammar commonly used in storyboards, comics, illustrations, and icons. We found that motion and sound effects are two key components that make a scene dynamic, and creators and storytellers rely heavily on visual iconography to present these two effects, as introduced by Scott McCloud in the book Understanding Comics [33]. Figure 2 displays three motion and sound examples. Therefore, we focus on the visual and sound design of animated storyboards. We propose a mixed-initiative interaction paradigm for authoring motion and sound effects. More precisely, our system automatically associates a default motion effect and a default sound effect with each sketched object by leveraging its semantic nature. For example, the default motion effect for an airplane is flying around, and its default sound effect is an engine sound. Furthermore, to enhance the expressiveness of our system, we allow users to customize the motion and sound effects based on their own design ideas.
62
+
63
+ The remainder of this section is organized as follows. We first introduce the design concepts of EnchantedBrush, and then demonstrate how to use it for animation through a storytelling example.
64
+
65
+ § 3.1 CONCEPTS
66
+
67
+ Based on Understanding Comics [33], we observed that storyboards are usually built upon the following concepts: Drawing Elements, Drawing Motion, Drawing Sound, Selecting and Creating Relationship. Accordingly, we design Sketch Brush, Path Brush and Motion Brush, Sound Brush, and Selection to achieve each concept, respectively.
68
+
69
+ § 3.1.1 DRAWING ELEMENTS
70
+
71
+ Sketch Brush allows users to draw freely in the air (Figure 3A). Once a sketched element is finished, the back-end sketch recognition model takes the sketch as input and returns the recognition results to the user.
72
+
73
+ § 3.1.2 DRAWING MOTION
74
+
75
+ Path Brush and Motion Brush are used to animate the sketches. EnchantedBrush has two behavior modes: automatic mode and customized mode. In the automatic mode, the system animates the sketch automatically based on sketch recognition and gives it a common behavior, which releases users from manually prescribing how an element should be animated. The common behavior is determined by a motion verb commonly associated with the given element. For instance, if the drawn element is a basketball, it will fall and bounce on the ground, while an airplane will fly around by default. Path Brush is designed for the customized mode, when the user wants a sketched element to follow a specific trajectory (Figure 3B). When a customized path is provided, the system switches to the customized mode and animates the element along the provided path. In either mode, the animation is started with motion lines drawn by Motion Brush (Figure 3C). The length of the motion lines is parameterized and proportional to the element's movement speed: longer motion lines make the element move faster, so users can control the element's speed to suit the unique needs of their story.
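+
+ Since the prototype is implemented in Unity with C#, this mapping could look like the minimal sketch below. The class name and both constants are illustrative assumptions; the paper only states that speed is proportional to motion-line length.
+
+ ```csharp
+ using UnityEngine;
+
+ // Hypothetical sketch of the motion-line-to-speed mapping (Section 3.1.2).
+ // BaseSpeed and SpeedPerMeter are assumed values for illustration only.
+ public static class MotionLineSpeed
+ {
+     const float BaseSpeed = 0.2f;    // m/s for a vanishingly short line
+     const float SpeedPerMeter = 2f;  // extra m/s per meter of line length
+
+     // Assumes a motion line with at least one point.
+     public static float SpeedFor(Vector3[] linePoints)
+     {
+         // Total polyline length of the drawn motion line.
+         float length = 0f;
+         for (int i = 1; i < linePoints.Length; i++)
+             length += Vector3.Distance(linePoints[i - 1], linePoints[i]);
+         return BaseSpeed + SpeedPerMeter * length;  // proportional mapping
+     }
+ }
+ ```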
76
+
77
+ § 3.1.3 DRAWING SOUND
78
+
79
+ EnchantedBrush supports sound effects by incorporating lively audio, which enhances the expressiveness of storytelling and the sense of immersion. Upon recognition of a sketched element, each element is automatically assigned three sound properties aligned with its semantic nature:
80
+
81
+ 1. Self sound is the unique sound commonly produced by an element. For example, the self sound of an ambulance is a siren, while the self sound of a dog is barking.
82
+
83
+ 2. Movement sound refers to the sound made by an element while moving. For example, the movement sound of a car is an engine sound, and for a human, the movement sound is a walking sound.
84
+
85
+ 3. Collision sound is the sound made when an element collides with another element. For example, the collision sound of a car is a crash sound, and it will be triggered when the car collides with either a physical object or a virtual element.
86
+
87
+ Sound Brush helps users add the self sound of a drawn element (Figure 3D). Users can customize and specify when the self sound should be played. For simplicity, the movement and collision sounds are played automatically once the element starts moving or a collision is detected.
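+
+ In Unity terms, this could be a small component attached to each sketched element, as in the hypothetical sketch below: the three clips mirror the three sound properties, the movement sound loops while the element moves, and the collision sound fires from Unity's collision callback. Class and field names are our assumptions, not names from the paper.
+
+ ```csharp
+ using UnityEngine;
+
+ // Hypothetical sketch of the three sound properties (Section 3.1.3).
+ [RequireComponent(typeof(AudioSource))]
+ public class ElementSounds : MonoBehaviour
+ {
+     public AudioClip selfSound;      // e.g., an ambulance siren
+     public AudioClip movementSound;  // e.g., an engine loop
+     public AudioClip collisionSound; // e.g., a crash
+     public bool isMoving;            // set by the animation controller
+
+     AudioSource source;
+
+     void Start() { source = GetComponent<AudioSource>(); }
+
+     // Called explicitly, e.g., when the user triggers it via Sound Brush.
+     public void PlaySelfSound() { source.PlayOneShot(selfSound); }
+
+     void Update()
+     {
+         // Loop the movement sound while the element is moving.
+         if (isMoving && !source.isPlaying)
+         {
+             source.clip = movementSound;
+             source.loop = true;
+             source.Play();
+         }
+         if (!isMoving && source.isPlaying && source.clip == movementSound)
+             source.Stop();
+     }
+
+     // Fires for collisions with virtual elements and with the scanned
+     // physical environment mesh, since that mesh carries colliders too.
+     void OnCollisionEnter(Collision other) { source.PlayOneShot(collisionSound); }
+ }
+ ```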
88
+
89
90
+
91
+ Figure 4: Example storyboard: aliens escape from a UFO before it collides with a space station (real-world objects replace beige elements).
92
+
93
+ § 3.1.4 SELECTING AND CREATING RELATIONSHIPS
94
+
95
+ Selection enables users to choose which element to edit and animate among multiple elements. It is implicitly activated based on the proximity of the brush to an element, and the selected element is highlighted with a yellow outline, as shown in Figure 3E. Once an element is selected, users can animate it using the designed brushes. In addition, to enhance the expressivity of the system, EnchantedBrush supports inter-object animation by creating dependency relationships. For example, in Figure 3E, the UFO element is first selected, and a customized trajectory is provided for it. Users can then use the trajectory as a timeline and draw another element (i.e., the cow) along the line, as if inserting a keyframe into the timeline. The cow is treated as a dependency of the UFO, and its appearance depends on the movement of the UFO. Moreover, as illustrated in Figure 3F, the sketched virtual element (i.e., the UFO) can interact with the physical surroundings (i.e., the table). Users can use physical objects as object models and backgrounds in their story, which simplifies story creation.
96
+
97
+ § 3.2 COMPOSING ANIMATION
98
+
99
+ In this section, we demonstrate how EnchantedBrush visualizes the story in Figure 4 using the interaction concepts introduced above. Figure 5 illustrates the design process.
100
+
101
+ We start by sketching a UFO element (Figure 5A) with Sketch Brush. Since we want the element to move in a specified way, we switch to Path Brush and draw a trajectory that makes it collide with the shelf (the purple line in Figure 5B). The alien escapes by parachute before the accident happens. Where the parachute appears depends on the movement of the UFO, so we create a relationship between the UFO and the parachute by selecting the UFO (Figure 5B). When the escape happens, the UFO makes an alarm sound, which we draw using Sound Brush (Figure 5C). We then switch back to Sketch Brush and draw the parachute element where the escape occurs (Figure 5D). Now the story is ready to be animated, so we switch to Motion Brush and start the animation with motion lines (Figure 5E). The movement sound plays automatically when the UFO starts moving, and the collision sound plays automatically when it crashes into the space station, i.e., the shelf (Figure 5F).
102
+
103
+ § 4 PROTOTYPE IMPLEMENTATION
104
+
105
+ We developed a prototype of the proposed concepts. In this section, we give an overview of the implementation and detail the important components of our system, including spatial mapping, sketch recognition, and automatic object behaviors.
106
+
107
108
+
109
+ Figure 5: Steps to visualize a storyboard using EnchantedBrush: A) Sketch the main element of the story. B) Provide a customized trajectory (purple line) and select the animated element. C) Add the sound effect of an alarm. D) Sketch the dependency element (the parachute below the UFO). E) Animate the storyboard. F) Auto-play the movement sound and the collision sound.
110
+
111
+ § 4.1 SYSTEM OVERVIEW AND SETUP
112
+
113
+ Our system requires two hardware components: an Oculus Quest 2 as the Head-Mounted Display (HMD) and a ZED Mini as the mixed-reality camera. The ZED camera is mounted on top of the Oculus HMD so that virtual content is overlaid on the real world. We use the Oculus controllers as the input device.
114
+
115
+ Similar to SymbiosisSketch [2], we configure the setup of EnchantedBrush based on the bimanual practice of painters in real life - painters use their dominant hand to hold the paintbrush and their non-dominant hand to hold the palette. In our case, the controller held in the dominant hand acts as the main paintbrush, while the controller in the non-dominant hand acts as the palette, i.e., a set of switchable brushes (Section 3.1). The system is implemented in the Unity engine in C#.
116
+
117
118
+
119
+ Figure 6: Sketch recognition process: the 3D strokes are first projected onto a 2D best-fitting plane to get a flattened image. Flattened images are then converted to normalized bitmaps. The recognition model offers a list of predictions with confidence based on the bitmaps. Sketch A is an airplane, and Sketch B is a basketball.
120
+
121
122
+
123
+ Figure 7: Storyboards used in the evaluation sessions (real-world objects replace beige elements): A) A ball bounces on the table and makes a bouncing sound. B) An airplane flies around with some engine sound. C) A police car moves forward and collides with a wall. An engine, a siren, and a collision sound are playing in the process. D) A poor cow is abandoned on an island by a UFO. An ambulance then takes the injured cow to the hospital. The sounds of the cow, the UFO, and the ambulance are playing in the process.
124
+
125
+ § 4.2 SPATIAL MAPPING
126
+
127
+ We mark the locations of physical objects in order to achieve interaction with the physical world. Using the built-in spatial mapping function of the ZED Mini, we scan the real-world environment and model it as a 3D triangle mesh. After the system stores the mesh of the real-world environment, we make it invisible to the users.
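+
+ One simple way to realize this in Unity is to disable the renderers of the scanned mesh while keeping (or adding) colliders, so sketched elements can still collide with the now-invisible environment. The component below is a hypothetical sketch; `envRoot` and the class name are our assumptions, and nothing ZED-specific is used beyond the scan producing standard Unity meshes.
+
+ ```csharp
+ using UnityEngine;
+
+ // Hypothetical post-scan step (Section 4.2): keep the scanned environment
+ // mesh for collisions but hide it from the user.
+ public class HideEnvironmentMesh : MonoBehaviour
+ {
+     public Transform envRoot; // root of the scanned environment mesh
+
+     public void HideButKeepCollisions()
+     {
+         // Invisible to the user.
+         foreach (var r in envRoot.GetComponentsInChildren<MeshRenderer>())
+             r.enabled = false;
+
+         // Ensure every mesh piece can still be collided with.
+         foreach (var f in envRoot.GetComponentsInChildren<MeshFilter>())
+             if (f.GetComponent<MeshCollider>() == null)
+                 f.gameObject.AddComponent<MeshCollider>().sharedMesh = f.sharedMesh;
+     }
+ }
+ ```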
128
+
129
+ § 4.3 SKETCH RECOGNITION
130
+
131
+ In order to achieve the "enchantment" of EnchantedBrush, we implement automatic sketch recognition for our system through a sketch classification neural network. Due to the lack of publicly available datasets of multi-category 3D sketches, we train the neural network on a 2D sketch dataset, the Quick Draw Dataset from Google^1. We then convert each 3D sketch into a 2D sketch before passing it to the recognition network. To achieve this, we first project the 3D points onto a 2D best-fitting plane, then render the projected points into a normalized image. The neural network returns two prediction candidates according to its recognition confidence. Figure 6 demonstrates this sketch recognition process. After obtaining the recognition results from the neural network, we display the two results to the users and ask them to select the desired one.
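+
+ The projection step could be sketched as follows. This is a minimal approximation under our own assumptions, not necessarily the paper's exact fitting method: the plane normal is estimated from the stroke's summed cross products around its centroid (a cheap stand-in for a least-squares best fit), and `SketchFlattener` is a name we invent here.
+
+ ```csharp
+ using System.Collections.Generic;
+ using UnityEngine;
+
+ // Hypothetical sketch of the 3D-to-2D projection (Section 4.3).
+ // Assumes a non-empty stroke given as an ordered list of 3D points.
+ public static class SketchFlattener
+ {
+     public static List<Vector2> Flatten(List<Vector3> stroke)
+     {
+         // Centroid of the stroke.
+         Vector3 c = Vector3.zero;
+         foreach (var p in stroke) c += p;
+         c /= stroke.Count;
+
+         // Approximate the plane normal by summing cross products of
+         // successive offsets from the centroid (works for near-planar strokes).
+         Vector3 n = Vector3.zero;
+         for (int i = 1; i < stroke.Count; i++)
+             n += Vector3.Cross(stroke[i - 1] - c, stroke[i] - c);
+         n.Normalize();
+
+         // Build an orthonormal 2D basis (u, v) inside the plane.
+         Vector3 u = Vector3.Normalize(Vector3.Cross(n,
+             Mathf.Abs(n.y) < 0.9f ? Vector3.up : Vector3.right));
+         Vector3 v = Vector3.Cross(n, u);
+
+         // Project every point onto the plane basis; the caller can then
+         // normalize these 2D coordinates and rasterize them into a bitmap.
+         var flat = new List<Vector2>(stroke.Count);
+         foreach (var p in stroke)
+             flat.Add(new Vector2(Vector3.Dot(p - c, u), Vector3.Dot(p - c, v)));
+         return flat;
+     }
+ }
+ ```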
132
+
133
+ § 4.4 AUTOMATIC OBJECT BEHAVIORS
134
+
135
+ If users do not provide a customized trajectory, the sketched element is animated following its automatic behavior. A question arises here: how do we decide what the automatic behavior of a given object should be? Within the scope of EnchantedBrush, we define the automatic behavior of an object as the common motion widely associated with that object. For example, the automatic behavior of a ball is bouncing, while for a car it is running forward. To enhance scalability, we experimented with the powerful language model Generative Pre-trained Transformer 3 (GPT-3) developed by OpenAI [5] to generate a motion verb for an arbitrary object and use that verb as its automatic behavior. Once we obtain a common verb associated with the given object, we transform the verb into an animation effect; the translation from a verb to a motion path is implemented by changing the position vector of the object in Unity. Taking an airplane as an example, we first obtain the verb fly from GPT-3 and then transform it into a flying behavior: a circular path plus rotation around the y-axis at a fixed number of degrees per unit time.
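+
+ As a concrete illustration, the flying behavior could be a small Unity component like the hypothetical sketch below; the orbit center, radius, and angular speed are assumed values, since the paper only specifies a circular path plus rotation around the y-axis in degrees per unit time.
+
+ ```csharp
+ using UnityEngine;
+
+ // Hypothetical sketch of the "fly" automatic behavior (Section 4.4).
+ public class FlyBehavior : MonoBehaviour
+ {
+     public Vector3 center = Vector3.zero;  // orbit center in world space
+     public float radius = 0.5f;            // orbit radius in meters
+     public float degreesPerSecond = 45f;   // angular speed of the orbit
+     float angle;                           // current orbit angle in degrees
+
+     void Update()
+     {
+         // Advance along the circular path (position update per frame).
+         angle += degreesPerSecond * Time.deltaTime;
+         float rad = angle * Mathf.Deg2Rad;
+         transform.position = center +
+             new Vector3(Mathf.Cos(rad), 0f, Mathf.Sin(rad)) * radius;
+
+         // Face the direction of travel by rotating around the y-axis.
+         transform.rotation = Quaternion.Euler(0f, -angle, 0f);
+     }
+ }
+ ```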
136
+
137
+ Upon recognition of a sketch, the corresponding sound effects are automatically assigned to the sketched element. The prototype uses locally pre-downloaded audio resources: for each supported object category, we downloaded self, movement, and collision sounds. When the system is notified of the object's identity, it retrieves the sound effects locally and assigns them to the sketched element.
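+
+ Assuming the clips are stored as Unity assets in a per-category folder layout (an assumption for illustration; the paper does not specify one), the local retrieval could be as simple as:
+
+ ```csharp
+ using UnityEngine;
+
+ // Hypothetical local sound lookup (Section 4.4). Assumes clips live under
+ // Resources/Sounds/<category>/<type>, e.g., Resources/Sounds/car/collide.
+ public static class SoundLibrary
+ {
+     // type is one of "self", "move", or "collide".
+     public static AudioClip Load(string category, string type)
+     {
+         return Resources.Load<AudioClip>($"Sounds/{category}/{type}");
+     }
+ }
+ ```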
138
+
139
+ § 5 EVALUATION
140
+
141
+ We conducted an exploratory user study with both general users and professional artists. The first goal was to evaluate the usability and interaction techniques of our prototype; the second was to identify limitations and potential applications of our interface.
142
+
143
+ § 5.1 PARTICIPANTS
144
+
145
+ We recruited 12 participants (eight females) aged 23 to 34 to evaluate our system. Three participants are professional artists (P7, P11, and P12). Participant P7 has eight years of professional experience in illustration and sketching, P11 has 11 years of experience in architecture and urban design, and P12 has six years of experience in animation and design. The other participants have little to good experience with sketching and storytelling and minor to moderate experience with animation.
146
+
147
+ ^1 The Quick Draw Dataset: https://github.com/googlecreativelab/quickdraw-dataset
148
+
149
+ Table 1: Results for the questionnaires of user preferences (Median and Interquartile Range).
+
+ | I felt... | Median (IQR) |
+ | --- | --- |
+ | 1. it was easy to create a story. | 9 (3) |
+ | 2. it was easy to sketch an object. | 7.5 (2.25) |
+ | 3. it was easy to trigger a sound. | 10 (1.25) |
+ | 4. it was easy to create a trajectory. | 10 (1) |
+ | 5. it was easy to start a story using motion lines. | 9 (1.25) |
+ | 6. the interface was simple and easy to use. | 10 (1.25) |
+ | 7. the real-world environment supported the story. | 9 (1.25) |
+ | 8. the EnchantedBrush made storytelling easy. | 9 (1) |
+ | 9. the EnchantedBrush made animation authoring easy. | 9 (2.25) |
186
+
187
+ § 5.2 TASKS
188
+
189
+ Participants were given four tasks to work through using EnchantedBrush. The first three tasks were simple animated scenes aimed at familiarizing participants with the system's main features. The last task was to recreate a short story using EnchantedBrush once participants felt confident with the system. The four tasks are the following:
190
+
191
+ Task 1. (Figure 7A): participants were required to draw a bouncing basketball on top of a physical table. The basketball was animated using its automatic behavior, and participants would hear the auto-played bouncing sound. Sketch Brush and Motion Brush were used in this task.
192
+
193
+ Task 2. (Figure 7B): participants were required to draw an airplane that flies around. This task used the automatic behavior to animate the airplane element. Participants would hear the jet engine sound while the airplane was flying. Similar to Task 1, Sketch Brush and Motion Brush were used in this task.
194
+
195
+ Task 3. (Figure 7C): participants were asked to create a car crash story. A physical curtain served as the wall the car collides with. Sketch Brush, Motion Brush, Path Brush, and Sound Brush were all used in this task.
196
+
197
+ Task 4. (Figure 7D): participants were asked to create a more complicated short story. One day, a UFO flew over an island and, as it passed over, abandoned a poor cow that had been kidnapped. Fortunately, there was an animal hospital on the island. An ambulance arrived in time and took the injured cow to the hospital. In addition to the four brushes (Sketch Brush, Motion Brush, Path Brush, and Sound Brush), Selection was used to select the UFO so that the user could sketch the cow as a dependency of the UFO. Existing physical objects, such as a table and flowers, served as the island and the animal hospital, respectively. Participants could thus focus on the main story plot and save the time of modeling islands or hospitals.
198
+
199
+ § 5.3 PROCEDURE
200
+
201
+ The study was conducted in our lab. Pieces of furniture and objects, including a table, a shelf, and a flower, were used to set up the study space, so that participants could use them while creating a storyboard. The example storyboard for each task (Figure 7A-D) was displayed on a separate monitor, and participants could refer to them whenever needed.
202
+
203
+ Our evaluation session consisted of three steps. First, participants filled out a background questionnaire. The facilitator then introduced the user interface and core concepts and demonstrated the functionality of each brush by going through Task 1 to Task 3. Participants then took time to familiarize themselves with the interface and interaction methods. In the second step, participants performed the four tasks independently. The facilitator provided light guidance if a participant had trouble using the tool. There was no time limit on task completion, so participants could pay full attention to the tool's usability without time pressure. Finally, participants filled out a usability questionnaire and took part in an interview so that we could collect more in-depth insights about our system. Sessions lasted approximately 60 minutes, and participants were compensated with 20 CAD.
204
+
205
+ § 6 RESULTS AND DISCUSSION
206
+
207
+ Our evaluation suggested that participants enjoyed using EnchantedBrush to create storyboards and animate their ideas. Participants appreciated the simplicity of the interface and the ease of animation authoring. Beyond system usability, we also analyzed the potential applications of EnchantedBrush based on our discussions with the participants. Figure 8 shows photos taken during the user study, and Figure 9 shows four sample results produced by our participants.
208
+
209
+ § 6.1 QUANTITATIVE METRICS
210
+
211
+ The results of the usability metrics are summarized in Table 1. We used a set of 1-10 Likert-scale questions for the measurement (1 = strongly disagree to 10 = strongly agree). Overall, participants found it easy to use EnchantedBrush for creating a story (Q1). They were satisfied with the interaction techniques, including triggering a sound effect (Q3), making an object follow a given path (Q4), and using motion lines to animate an object (Q5). At the same time, sketching an object was rated as less easy than the other features (Q2). In discussions, participants explained that they were not used to the reduced precision and control over sketch strokes when drawing in 3D space. This finding echoes discussions in previous works [2, 3, 24] that drawing accurately in the air is a general challenge for humans due to ergonomic constraints. In terms of the user interface, participants all agreed on the simplicity of the interface and found it easy and straightforward to use (Q6). Participants felt that interacting with the real-world environment made it easy to tell a story (Q7). All participants were confident that EnchantedBrush made storytelling and animation authoring easier (Q8, Q9).
212
+
213
214
+
215
+ Figure 8: Participants working on the tasks.
216
+
217
+ The questionnaire results demonstrated the usability of our tool and showed that EnchantedBrush provides users with an easy sketch-based interface to create storyboards and communicate their ideas. Sound components, motion lines, customized trajectories, and interactive mixed-reality environments were appreciated for their power and effectiveness.
218
+
219
+ § 6.2 QUALITATIVE FEEDBACK
220
+
221
+ We conducted guided interviews and open-ended discussions with participants and collected qualitative feedback and insights.
222
+
223
+ Overall, participants showed great excitement about our tool and how they could create stories in the real world. For example, P5, an amateur artist who studied animation for two years and has about ten years of sketching experience, appreciated being able to use physical objects for animation, which freed them from modeling objects. P12, a professional animator, also liked the mixed-reality environment and pointed out that seeing the physical world contributed to the magical feel of the tool.
224
+
225
+ P5: "Normally, when I animate things, it's all digital. If I want a wall for the car to crash into, I have to make the wall. I have to put it in the right spot and figure out how to make it part of the story. But since the wall is already there, I think using it as part of the story is clever. Same with the table. I felt like if you're animating, you need to figure out the contact and the rigid body if you're making a ball down, so it's cool that it already knew how to use the table."
226
+
227
+ Professional artists (P7, P11, P12) commented that EnchantedBrush provided a simple and interactive interface and believed that it made dynamic planning, presenting ideas to a team, and communication more effective.
228
+
229
+ P7 (pro): "I had a lot of fun using it [...] It'll be so easy to draw a thing that's interactive, and your whole team can see it and brainstorm a lot easier [...] You are being able to do a dynamic plan and brainstorming with your whole story-boarding team. I think it'll make the connection between the authors (people making up the story) and people making it come to reality... And it'll be easier, quicker, more organic."
230
+
231
+ P12 (pro): "I think that it was quite amusing and inspiring to see things that you draw come up to life [...] You can test certain scenarios that you're trying to build for [such as] animation, like just straight out very quickly, and see how it's actually like, without any dialogue or any other function. Just by the movement, you can tell a story."
232
+
233
+ They also expressed strong interest in having such a tool in their professional work.
234
+
235
+ P11 (pro): "It was very easy and fun to use. I sort of wish I had [this when] I worked in After Effects as an architectural designer. It would be amazing if I had something similar to this for the purpose of presenting initial animation ideas [...] Having a tool like this would really help the concept formation, or the storyboard formation phase [and] will save everyone a lot of time on work. [An example is like] I want you to show the buildings in relationship to the public space [...] I would spend a week or two weeks working with other people who were doing the modeling, trying to assemble the thing, but then when we showed it to our supervisor, it was not what he wanted, so we had to do it all over again [...] What to show, what elements to show, and what components to show at what angle is really important, so I see work like this is a really good future in saving people's lives about not doing repetitive work."
236
+
237
238
+
239
+ Figure 9: Sample results created by participants in the task sessions. From left to right and from top to bottom are: a bouncing basketball (Task 1), an airplane (Task 2), a car with a siren sound (Task 3), and an abandoned cow (Task 4).
240
+
241
+ § 6.3 POTENTIAL APPLICATIONS
242
+
243
+ All the participants believed in the potential benefits of EnchantedBrush for different applications and use scenarios. The animation industry is one field our participants mentioned. "In the animation industry, you need to actually draw every frame. But in this case, even if you don't draw every frame, it sort of copies it (i.e. frames) over and has a moving effect, so I think I can see some potential." (P1). Seven (out of 12) participants stated that such a tool would be appealing to children, so it could be useful in educational settings for student engagement, concept demonstration, and classroom teaching. P3 pointed out that the quick storytelling power of EnchantedBrush could help teachers reproduce a story scenario for children, so children could not only comprehend the story from oral descriptions but also experience it themselves, enhancing their understanding and creativity through visualizations. P3 also suggested that children could learn about different physical properties of an object (such as gravity, materials, and sounds) using EnchantedBrush, based on the interactions and visual effects between virtual objects and real-world environments. In addition, P11 commented that EnchantedBrush would also be useful in social media and video making. "When people go on Instagram live or TikTok live, and they tell a story, and then people get bored about just listening to them talking about the story and showing their face without anything going on. I see this thing as like when you listen to profs giving lectures, sometimes you get bored. That's why they draw stuff on the blackboard and why they used animation slides [...] But if they could use your tool and actually draw out the story, I'm pretty sure you will attract so many more listeners." (P11).
244
+
245
+ § 6.4 SKETCH RECOGNITION PERFORMANCE
246
+
247
+ The sketch recognition model was trained on 2D drawings, so the accuracy of recognizing 3D sketches is not perfect. The overall accuracy of 3D sketch recognition in our study was 68% on average. We also analyzed the performance per task: Task 1 had an accuracy of 100%, Task 2 and Task 3 had 58.3%, and Task 4 had 63.89%. The model worked perfectly for recognizing a basketball, but the accuracy for other objects varied significantly from person to person, due to differences in sketching skill and style as well as the deformation introduced when converting a 3D sketch to a 2D sketch. Since sketch recognition itself is not one of our main focuses, and to let users evaluate our interaction approach properly, misrecognized results were corrected manually during the sessions.
248
+
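+ To make the 3D-to-2D conversion step concrete, below is a minimal sketch of one plausible way to flatten mid-air strokes for a 2D recognizer, assuming each stroke is an array of 3D points; the function name and stroke representation are our own illustration, not the system's actual implementation.
+
+ ```python
+ # Illustrative sketch: project a set of mid-air 3D strokes onto their
+ # best-fit plane (via SVD/PCA) to obtain a 2D drawing for a 2D classifier.
+ import numpy as np
+
+ def flatten_strokes(strokes_3d):
+     """strokes_3d: list of (N_i, 3) arrays of stroke points in world space.
+     Returns the strokes expressed in 2D plane coordinates."""
+     pts = np.vstack(strokes_3d)
+     center = pts.mean(axis=0)
+     # Right singular vectors give the principal directions; the first two
+     # span the best-fit drawing plane.
+     _, _, vt = np.linalg.svd(pts - center, full_matrices=False)
+     basis = vt[:2].T                      # (3, 2) orthonormal plane basis
+     return [(s - center) @ basis for s in strokes_3d]
+ ```
+
+ Such a projection necessarily distorts strokes that are not coplanar, which is consistent with the deformation-related errors observed above.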
249
+ § 7 LIMITATIONS AND FUTURE WORK
250
+
251
+ Although we demonstrated that EnchantedBrush enables users to create storyboards and animate ideas easily, there are a few limitations along with opportunities for future work and research.
252
+
253
+ In our current prototype, the sketch recognition neural network is trained on a 2D sketch dataset, which limits the sketch recognition accuracy of our system. With the emergence of large-scale, multi-category 3D sketch datasets in the future, we believe recognition accuracy could be improved. Meanwhile, the current sketched elements are relatively flat, as sketching 3D-shaped objects in mid-air is generally challenging for users. The MR design space could be augmented so users could sketch 3D-shaped elements more easily. In addition, the number of object categories that our current prototype supports is small, since we focused on validating our concepts and assessing the usability of the implemented features. This restricts users to limited storytelling examples without freeform exploration. Expanding the set of object categories could support users' creativity and further leverage the interaction techniques of EnchantedBrush. Lastly, the retrieval of sound effects could be expanded to a text-based audio retrieval method that queries the internet rather than relying on pre-downloaded audio files.
254
+
255
+ In our current design, we focused on helping users easily control the animation of the sketched elements. It would also be interesting to support the deformation of sketched elements according to their unique physical properties or materials in reaction to contacts and collisions. One possible direction is to explore how to help users create such casual physics-based deformation in MR, which could allow users to achieve more realistic scenes.
256
+
257
+ Another research direction is to further automate our proposed interaction techniques by automatically inferring users' intended animation effects. In our approach, we designed various brushes for motion and audio effects. Future work could leverage human knowledge and the visual languages of comics design and storytelling to help machines understand users' design intent. This could free users from the explicit use of brushes and thus lead to a more powerful, free-form tool for animation authoring.
258
+
259
+ § 8 CONCLUSION
260
+
261
+ We present EnchantedBrush, a novel mixed-reality sketching interface for animating storyboards in real-world environments with automatic sound effects. We propose a mixed-initiative interaction paradigm for motion and sound effects based on the semantic nature of sketched elements, which fills a gap in existing work. The proposed interaction paradigm allows users to quickly create visualizations and storyboards without spending intensive time on manual effect specification. EnchantedBrush also allows storyboards to interact with the physical surroundings, which simplifies the creation process. A user study demonstrates the usability and effectiveness of our system, and user feedback suggests a variety of potential applications of our approach.
papers/HRI/HRI 2022/HRI 2022 Workshop/HRI 2022 Workshop VAM-HRI/BSrx_Q2-Akq/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,111 @@
1
+ # When And Where Are You Going? A Mixed-Reality Framework for Human Robot Collaboration
2
+
3
+ Shubham Sonawani
4
+
5
+ sdsonawa@asu.edu
6
+
7
+ Arizona State University
8
+
9
+ Tempe, Arizona, USA
10
+
11
+ Heni Ben Amor
12
+
13
+ hbenamor@asu.edu
14
+
15
+ Arizona State University
16
+
17
+ Tempe, Arizona, USA
18
+
19
+ ![01963dfb-8c61-7599-9cf5-d353d802ef78_0_194_608_1424_632_0.jpg](images/01963dfb-8c61-7599-9cf5-d353d802ef78_0_194_608_1424_632_0.jpg)
20
+
21
+ ## Figure 1: Experiment Setup: Shadow Mode (outlined in red) and Highlight Mode (outlined in black) of Intention Projection framework
22
+
23
+ ## Abstract
24
+
25
+ Fluency and coordination in human-robot collaborative tasks highly depend on shared situational awareness among the interaction partners. This paper sheds light on a work-in-progress framework for Intention Projection (IntPro). To this end, we propose a mixed-reality setup for intention projection that combines monocular computer vision with adaptive projection mapping to provide information about the robot's intentions and next actions. This information is projected into the environment in the form of visual cues. A human subject study consisting of a generic joint sorting task is proposed to validate the framework. Here, visual cues about the robot's intentions are provided to the human via two main modes, namely a) highlighting the object that the human needs to interact with
26
+
27
+ and b) visualizing the robot's upcoming movements. This work hypothesizes that combining these fundamental modes enables fast and effective signaling, which, in turn, improves task efficiency, transparency, and safety.
28
+
29
+ ## KEYWORDS
30
+
31
+ Mixed-Reality, HRC, Human Subject Study
32
+
33
+ ## ACM Reference Format:
34
+
35
+ Shubham Sonawani and Heni Ben Amor. 2022. When And Where Are You Going? A Mixed-Reality Framework for Human Robot Collaboration. In VAM-HRI 2022, (Virtual) Sapporo, Japan, 4 pages.
36
+
37
+ ## 1 INTRODUCTION
38
+
39
+ For humans and robots to effectively work together in close proximity, they need to have a mutual understanding of each other's intentions and actions. In traditional human-human interaction, the involved partners can learn to anticipate each other's actions through body language or timing. In human-robot teams, such an approach can lead to dangerous situations, since robot movements are often hard to predict. Besides motion, other modalities, e.g., visual or auditory signals, can be used to convey intent.
40
+
41
+ ---
42
+
43
+ Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. VAM-HRI '22, March 7th, 2022, (Virtual) Sapporo, Japan. © 2022 Association for Computing Machinery.
44
+
45
+ ---
46
+
47
+ ![01963dfb-8c61-7599-9cf5-d353d802ef78_1_215_232_1371_444_0.jpg](images/01963dfb-8c61-7599-9cf5-d353d802ef78_1_215_232_1371_444_0.jpg)
48
+
49
+ Figure 2: Overview of Intention Projection Framework
50
+
51
+ In this work, we focus on projecting visual cues via a mixed-reality approach. Building upon the concept of intention projection [1], we visualize the intention and future actions of the robot ahead of time. With regard to visual modalities, prior work focuses on discrete visual signals for conveying robot intent. The work in [2] uses visual cues in the form of expressive lights on a Roomba robot to support richer interpretation of the robot. [3, 4] use combinations of colors and intensities of light signals to provide information about the robot's state given the current task and environment. To provide information in a continuous and feature-rich manner, projection mapping can be used to convey robot intent. [5] uses an onboard projector on a mobile robot to visualize navigation paths. The work in [6] surveys the validity and informativeness of conveying a robot's direction and velocity as visual cues to the human during social navigation. [7] uses NavPoints, arrows, and gaze to communicate the robot's intention via a head-mounted display, which improved performance in human-robot interaction tasks. Similarly, [8] uses a head-mounted mixed-reality setup to visualize robot motion with respect to the user's frame of reference, comparing a 2D display against the head-mounted mixed-reality method in user studies. Situational awareness in proximal human-robot interaction was also provided by [9] via an augmented-reality setup. Furthermore, prior work [10] uses an object-aware projection technique to allow humans to collaborate effortlessly with the robot.
52
+
53
+ In this paper, we extend prior work on intention projection and investigate both discrete and continuous methods for communicating robot intent. In particular, we introduce a shadow mode in which a simulated robot (projected using mixed reality) performs the intended robot actions ahead of the real, physical robot. In turn, the human user can visually anticipate the upcoming motion of the robot. We contrast this mode with a simple highlight mode, in which the robot's target object is highlighted using a visual cue. We hypothesize that the visualization of a continuous motion (as performed in shadow mode) provides a clearer and easier interpretation of the robot's intention than a discrete highlighting of the target object. In the following, we describe our setup for intention projection and the shadow mode, and outline a planned human subject study to validate our hypothesis.
54
+
55
+ ## 2 SYSTEM OVERVIEW
56
+
57
+ The overall system, as shown in Figure 2, combines a simulation with a real-world setup to produce the final result of the IntPro framework. A monocular vision sensor combined with a structured-light device, i.e., a projector, is used as the hardware setup to obtain information about the environment. A calibration sequence per [11] is used to obtain the extrinsic matrix of the projector with respect to the camera and the intrinsic parameters of the projector; the camera's intrinsic parameters are calibrated separately to obtain more accurate values. This calibration is used by the 3D-graphics rendering block to render real-world projections accurately. The simulation is used to render an image of a UR5 robot that mimics the same joint angles as the real-world UR5 robot. This rendered image is obtained from the bottom view of the simulated robot.
58
+
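+ As a concrete illustration of the separate camera calibration step, the following is a minimal sketch using OpenCV and a checkerboard target; `checkerboard_images` is a hypothetical list of image paths, and the snippet reflects our assumptions about the setup rather than the authors' actual code.
+
+ ```python
+ # Illustrative sketch: estimate camera intrinsics from checkerboard views.
+ import cv2
+ import numpy as np
+
+ pattern = (9, 6)  # inner-corner grid of the checkerboard (assumed size)
+ objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
+ objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)
+
+ obj_pts, img_pts = [], []
+ for path in checkerboard_images:  # hypothetical capture paths
+     gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
+     found, corners = cv2.findChessboardCorners(gray, pattern)
+     if found:
+         obj_pts.append(objp)
+         img_pts.append(corners)
+
+ # K_cam and dist would then be combined with the projector-camera
+ # extrinsics from the structured-light calibration [11] for rendering.
+ _, K_cam, dist, _, _ = cv2.calibrateCamera(
+     obj_pts, img_pts, gray.shape[::-1], None, None)
+ ```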
59
+ In order to dynamically change the rendering of highlighted objects, we leverage an off-the-shelf pose detection framework [12], which can be extended to multiple objects. Object pose information is also fed to the motion planner, which computes an inverse kinematics solution in the form of a trajectory of joint angles. These joint angles are provided simultaneously to the 3D-graphics rendering block and the real-world UR5 robot. Based on the delay set in shadow mode, the trajectory is executed on the real-world and simulated UR5 robots. The 3D-graphics renderer keeps running and provides queried information depending on the nature of the experiment. The set of experiments is designed around the human subject and combinations of shadow and highlight mode. Detailed explanations of the proposed modes are given below:
60
+
61
+ - Highlight Mode: A 3D plane is rendered with the texture of a semi-elliptical disc and transformed into the image frame of the projector using the detected object pose (see the sketch after this list). Once projected onto the tabletop, the user sees the highlighted object as a randomly colored elliptical disc in front of the object. This information can be provided simultaneously for all objects with detected individual poses. Given real-time pose detection, highlight mode does not require objects to be static in the environment, and the projection adjusts to perturbations and changes in an object's pose. Finally, this mode provides continuous information about the object of interest, which keeps the user updated even when not looking at the object directly.
62
+
63
+ - Shadow Mode: Simulation rendering is leveraged to obtain a mirror effect of the robot on the tabletop, which acts as the robot's shadow. To place the shadow, the transformation between the tabletop and the camera frame is obtained by detecting a fiducial marker's pose. Since the simulated and real-world UR5 share the same joint angles, shadow mode predominantly shows the lateral trajectory of the robot, which helps the user understand which part of the workspace they can safely engage in during the task.
64
+
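+ To illustrate the highlight-mode geometry, the following minimal sketch maps a detected object position from the camera frame into projector pixels so the disc can be drawn at the right spot; the interfaces and variable names are assumptions for illustration, not the framework's actual code.
+
+ ```python
+ # Illustrative sketch: place the highlight disc by projecting the object's
+ # 3D position (camera frame) into the projector image using the calibrated
+ # projector intrinsics K_proj and the camera-to-projector extrinsic.
+ import cv2
+ import numpy as np
+
+ def disc_center_px(p_obj_cam, T_proj_cam, K_proj, dist_proj):
+     """p_obj_cam: (3,) object position in the camera frame (e.g., AprilTag).
+     T_proj_cam: 4x4 transform mapping camera-frame points into the
+     projector frame (from the projector-camera calibration).
+     Returns the (u, v) projector pixel at which to render the disc."""
+     R, t = T_proj_cam[:3, :3], T_proj_cam[:3, 3]
+     rvec, _ = cv2.Rodrigues(R)
+     px, _ = cv2.projectPoints(
+         p_obj_cam.reshape(1, 1, 3).astype(np.float64),
+         rvec, t.astype(np.float64), K_proj, dist_proj)
+     return px.ravel()
+ ```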
65
+ ## 3 HUMAN SUBJECT STUDY DESIGN
66
+
67
+ In order to validate the efficacy of the framework, a human subject study will be conducted with a generic sorting task. A user and a robot will collaborate in a sorting task where the user sorts lightweight objects while the robot sorts heavyweight objects. These objects are 3D-printed cubes with 10% (lightweight) and 50% (heavyweight) infill densities. Prior to the task, the user will be instructed about the rules and conditions of the task. In the first part of the experiment, a user will be shown the lightweight cubes' placement pattern on a display screen. The motivation behind these patterns is to reduce the complexity of the sorting task while keeping some ambiguity as the task progresses. Example patterns are shown in Figure 5. As the robot starts moving, the timer for the sorting task begins, and the user starts sorting the objects. Once the human and the robot have sorted all the objects, the timer stops.
68
+
69
+ In the second part of the experiment, explicit information about the pattern in which the lightweight cubes are placed is not available to the user; however, the user is told about highlight and shadow mode. In highlight-only mode, as shown in Figure 3, the user sorts the objects based on explicit, discrete visual cues projected onto the tabletop. In shadow-only mode, as shown in Figure 4, the user is implicitly informed about possible simple sorting patterns using the display screen. Here, the robot's trajectory information conveyed via the shadow helps the user decide which part of the workspace is safe to engage in during sorting. Furthermore, delays of no, low, and high duration will be introduced between the shadow and the actual robot's movement in different sets of experiments with multiple users. This delay is hypothesized to help the user anticipate the robot's future actions and plan the sorting task accordingly, reducing overall task execution time. Lastly, the sorting experiment will be performed with highlight and shadow mode combined. We anticipate that task execution time will decrease significantly with the combination of shadow and highlight mode.
70
+
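+ A minimal sketch of how such a delay between shadow and physical execution might be realized is shown below; `sim_robot` and `real_robot` are hypothetical handles exposing a `follow(trajectory)` call, so the snippet is illustrative rather than the framework's actual implementation.
+
+ ```python
+ # Illustrative sketch: the projected "shadow" robot plays the planned joint
+ # trajectory first; the physical UR5 follows after a configurable delay.
+ import threading
+
+ def execute_with_shadow(trajectory, sim_robot, real_robot, delay_s=2.0):
+     sim_robot.follow(trajectory)  # shadow leads, previewing the motion
+     if delay_s > 0:
+         threading.Timer(delay_s,
+                         real_robot.follow, args=(trajectory,)).start()
+     else:
+         real_robot.follow(trajectory)  # "no delay" condition
+ ```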
71
+ ## 4 FUTURE WORK
72
+
73
+ This work aims to identify novel interaction and communication mechanisms for human-robot collaboration. We aim to develop visual mechanisms and languages that allow robots to project information about their state and the task into the environment. As a result, the physical world becomes a canvas. For many tasks, such visual communication could lead to faster, more efficient, and more transparent introspection of the robot's beliefs and intentions. We will study how to best leverage such visual projections and which projection conventions are more fruitful than others. The specific study described in this paper focuses on the differences between continuous, animated projections and discrete, static projections of information about the robot's next goals. We hypothesize that continuous projections lead to a faster and broader understanding of the current situation, whereas static projections may remain ambiguous. This hypothesis will be carefully investigated in the user study described above.
74
+
75
+ ![01963dfb-8c61-7599-9cf5-d353d802ef78_2_980_235_612_340_0.jpg](images/01963dfb-8c61-7599-9cf5-d353d802ef78_2_980_235_612_340_0.jpg)
76
+
77
+ Figure 3: Highlight Mode: A user collaborating with the robot while information about the next object is explicitly provided via projection of a semi-elliptical disc.
78
+
79
+ ![01963dfb-8c61-7599-9cf5-d353d802ef78_2_969_730_600_252_0.jpg](images/01963dfb-8c61-7599-9cf5-d353d802ef78_2_969_730_600_252_0.jpg)
80
+
81
+ Figure 4: Shadow Mode: A simulated version of the robot starts executing the intended motion before the physical robot. The simulated robot is projected onto the table using our intention projection framework. This allows the human partner to preview the next actions, helps avoid collisions, and increases transparency.
82
+
83
+ ![01963dfb-8c61-7599-9cf5-d353d802ef78_2_1067_1261_426_353_0.jpg](images/01963dfb-8c61-7599-9cf5-d353d802ef78_2_1067_1261_426_353_0.jpg)
84
+
85
+ Figure 5: Two different patterns (A and B) of 10-object placements, shown to the user prior to the experiment. Each square block in A) and B) represents an object to be sorted; blocks marked "H" are to be sorted by the user, and the rest by the robot.
86
+
87
+ ## REFERENCES
88
+
89
+ [1] R. S. Andersen, O. Madsen, T. B. Moeslund, and H. B. Amor, "Projecting robot intentions into human environments," in 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 294-301, IEEE, 2016.
90
+
91
+ [2] S. Song and S. Yamada, "Effect of expressive lights on human perception and interpretation of functional robot," vol. 2018-April, Association for Computing Machinery, April 2018.
92
+
93
+ [3] E. Cha, T. Trehon, L. Wathieu, C. Wagner, A. Shukla, and M. J. Mataric, "Modlight: Designing a modular light signaling tool for human-robot interaction," pp. 1654-1661, Institute of Electrical and Electronics Engineers Inc., July 2017.
94
+
95
+ [4] K. Baraka, A. Paiva, and M. Veloso, "Expressive lights for revealing mobile service robot state," in Robot 2015: Second Iberian Robotics Conference, pp. 107-119, Springer, 2016.
96
+
97
+ [5] R. T. Chadalavada, H. Andreasson, R. Krug, and A. J. Lilienthal, "That's on my mind! robot to human intention communication through on-board projection on shared floor space," pp. 1-6, Institute of Electrical and Electronics Engineers (IEEE), February 2016.
98
+
99
+ [6] T. Matsumaru, "Mobile robot with preliminary-announcement and display function of forthcoming motion using projection equipment," pp. 443-450, 2006.
100
+
101
+ [7] M. Walker, H. Hedayati, J. Lee, and D. Szafir, "Communicating robot motion intent with augmented reality," in Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pp. 316-324, 2018.
102
+
103
+ [8] E. Rosen, D. Whitney, E. Phillips, G. Chien, J. Tompkin, G. Konidaris, and S. Tellex, "Communicating and controlling robot arm motion intent through mixed-reality head-mounted displays," The International Journal of Robotics Research, vol. 38, no. 12-13, pp. 1513-1526, 2019.
104
+
105
+ [9] A. Boateng and Y. Zhang, "Virtual shadow rendering for maintaining situation awareness in proximal human-robot teaming," in Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, pp. 494-498, 2021.
106
+
107
+ [10] R. K. Ganesan, Y. K. Rathore, H. M. Ross, and H. B. Amor, "Better teaming through visual cues: How projecting imagery in a workspace can improve human-robot collaboration," IEEE Robotics and Automation Magazine, vol. 25, pp. 59-71, June 2018.
108
+
109
+ [11] D. Moreno and G. Taubin, "Simple, accurate, and robust projector-camera calibration," Proceedings - 2nd Joint 3DIM/3DPVT Conference: 3D Imaging, Modeling, Processing, Visualization and Transmission, 3DIMPVT 2012, pp. 464-471, 2012.
110
+
111
+ [12] J. Wang and E. Olson, "AprilTag 2: Efficient and robust fiducial detection," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2016.